Utility-preserving anonymization for health data publishing.
Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn
2017-07-11
Publishing raw electronic health records (EHRs) may be considered a breach of individual privacy because such records usually contain sensitive information. A common practice in privacy-preserving data publishing is to anonymize the data before publication so that privacy models such as k-anonymity are satisfied. Among the various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have therefore been proposed to reduce it. However, existing generalization-based anonymization methods cannot avoid excessive information loss, and thus fail to preserve data utility. We propose a utility-preserving anonymization method for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method; the algorithm applies full-domain generalization. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. For all types of quality metrics, the proposed method shows lower information loss than the existing method. In real-world EHR analysis, results obtained from data anonymized by the proposed method show only a small error compared with the original data. In summary, we propose a new utility-preserving anonymization method and an anonymization algorithm using it. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
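As an illustration of the k-anonymity requirement discussed above, the following minimal Python sketch checks whether a generalized table satisfies k-anonymity over a chosen set of quasi-identifiers. The column names and the generalization of age into ranges are hypothetical examples, not taken from the paper.

```python
from collections import Counter

# Toy records after full-domain generalization: age generalized to 10-year
# bands, ZIP code truncated to 3 digits (hypothetical quasi-identifiers).
records = [
    {"age": "30-39", "zip": "481**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "481**", "diagnosis": "asthma"},
    {"age": "30-39", "zip": "481**", "diagnosis": "diabetes"},
    {"age": "40-49", "zip": "482**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "482**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "482**", "diagnosis": "cancer"},
]

def satisfies_k_anonymity(rows, quasi_identifiers, k):
    """Return True if every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return all(count >= k for count in groups.values())

print(satisfies_k_anonymity(records, ["age", "zip"], k=3))  # True
print(satisfies_k_anonymity(records, ["age", "zip"], k=4))  # False
```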
Key frame extraction based on spatiotemporal motion trajectory
NASA Astrophysics Data System (ADS)
Zhang, Yunzuo; Tao, Ran; Zhang, Feng
2015-05-01
Spatiotemporal motion trajectories accurately reflect changes in motion state. Motivated by this observation, this letter proposes a key frame extraction method based on the motion trajectory on the spatiotemporal slice. Different from well-known motion-related methods, the proposed method utilizes the inflexions of the motion trajectories of all moving objects on the spatiotemporal slice. Experimental results show that, compared with state-of-the-art methods based on motion energy or acceleration, the proposed method achieves similar performance on single-object scenes and better performance on multi-object videos.
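The core idea, selecting frames where the motion trajectory changes its bending direction, can be sketched as below. A 1-D trajectory (e.g., an object's position along a spatiotemporal slice, sampled per frame) stands in for the paper's slice-based trajectories, and the inflexion test via a sign change of the second difference is a generic approximation, not the authors' exact criterion.

```python
import numpy as np

def inflexion_key_frames(trajectory):
    """Return frame indices where the discrete second difference changes sign."""
    traj = np.asarray(trajectory, dtype=float)
    second_diff = np.diff(traj, n=2)             # curvature proxy at interior frames
    signs = np.sign(second_diff)
    # A sign change between consecutive second differences marks an inflexion.
    change = np.where(signs[:-1] * signs[1:] < 0)[0]
    return change + 2                            # offset back to frame indices

# Synthetic trajectory: an object oscillating while drifting.
t = np.linspace(0, 4 * np.pi, 120)
trajectory = np.sin(t) + 0.2 * t
print(inflexion_key_frames(trajectory))
```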
Xia, Youshen; Kamel, Mohamed S
2007-06-01
Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.
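The least squares support vector machine regressor used as the function estimator above reduces to solving one linear system. The sketch below implements plain LS-SVM regression with an RBF kernel in NumPy; the measurement fusion step and cooperative learning described in the abstract are not included, and the kernel width and regularization values are arbitrary.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                       # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Noisy 1-D regression example
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 80)[:, None]
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(80)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, np.array([[0.0], [1.5]])))
```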
SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Elder, E; Roper, J
2015-06-15
Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.
Harmonic reduction of Direct Torque Control of six-phase induction motor.
Taheri, A
2016-07-01
In this paper, a new switching method for Direct Torque Control (DTC) of a six-phase induction machine is introduced to reduce current harmonics. Selecting a single suitable vector in each sampling period is the ordinary approach in the ST-DTC drive of a six-phase induction machine. The six-phase induction machine has 64 voltage vectors, which are divided into four groups. In the proposed DTC method, the suitable voltage vectors are selected from two vector groups. By a suitable selection of two vectors in each sampling period, the harmonic amplitude is decreased further in comparison to that of the ST-DTC drive. The harmonic losses are greatly reduced and the electromechanical energy is decreased, while the switching loss shows only a small increase. Spectrum analysis of the phase current in the standard and the new switching-table DTC of the six-phase induction machine, together with determination of the amplitude of each harmonic, is presented in this paper. The proposed method requires a shorter sampling time than the ordinary method. Harmonic analysis of the current at low and high speeds shows the performance of the presented method. The simplicity of the proposed method and its implementation without any extra hardware are further advantages. The simulation and experimental results show the superiority of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.
Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui
2018-02-01
In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that captures the correlations among multiple concepts by leveraging a hypergraph, which has proved beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by multilabel learning algorithms. To better expose the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.
Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice
2014-05-07
Over the last decades, a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a real validation has never been completely successful because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller error metrics and smaller RMSE are found for the proposed BSIP estimation (IM), which shows its advantage compared with recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared to conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. The calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
NASA Astrophysics Data System (ADS)
Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo
We propose a method for calculating the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results show that a correlation exists between the quasi ventilation thresholds calculated by the proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature-extraction-based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). The proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that, compared with other existing methods, the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility.
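Distortion-compensated dither modulation itself is standard quantization index modulation with a compensation factor. The sketch below embeds and detects one bit per coefficient; the step size, compensation factor, and random dither are illustrative choices, and the DAISY-based feature selection from the paper is not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)
DELTA, ALPHA = 8.0, 0.8                        # quantization step and compensation factor

def dcdm_embed(x, bits, dither):
    """Embed one bit per host coefficient with distortion-compensated dither modulation."""
    d = dither + bits * DELTA / 2.0            # bit-dependent dither
    q = DELTA * np.round((x - d) / DELTA) + d  # dithered quantizer output
    return x + ALPHA * (q - x)                 # move only part of the way (compensation)

def dcdm_detect(y, dither):
    """Pick the bit whose dithered quantizer output lies closest to the received value."""
    d0, d1 = dither, dither + DELTA / 2.0
    e0 = np.abs(DELTA * np.round((y - d0) / DELTA) + d0 - y)
    e1 = np.abs(DELTA * np.round((y - d1) / DELTA) + d1 - y)
    return (e1 < e0).astype(int)

x = 100 * rng.standard_normal(1000)            # host coefficients
bits = rng.integers(0, 2, 1000)
dither = rng.uniform(-DELTA / 2, DELTA / 2, 1000)
y = dcdm_embed(x, bits, dither)
y_noisy = y + 1.0 * rng.standard_normal(1000)  # mild attack noise
print("bit error rate:", np.mean(dcdm_detect(y_noisy, dither) != bits))
```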
Breast histopathology image segmentation using spatio-colour-texture based graph partition method.
Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N
2016-06-01
This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmenting nuclear arrangements in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used to generate the graph, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangements in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
A spatial scan statistic for multiple clusters.
Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie
2011-10-01
Spatial scan statistics are commonly used for geographical disease surveillance and cluster detection. When multiple clusters coexist in the study area, they become difficult to detect because of their shadowing effect on each other. The recently proposed sequential method shows better power for detecting the second, weaker cluster, but does not improve the ability to detect the first, stronger cluster, which is more important than the second one. We propose a new extension of the spatial scan statistic that can be used to detect multiple clusters. By constructing two or more clusters in the alternative hypothesis, the proposed method accounts for other coexisting clusters in the detection and evaluation process. The performance of the proposed method is compared to the sequential method through an intensive simulation study, in which the proposed method shows better power both in rejecting the null hypothesis and in accurately detecting the coexisting clusters. In a real study of hand-foot-mouth disease data in Pingdu city, a true cluster town is successfully detected by the proposed method; it could not be evaluated as statistically significant by the standard method because of another cluster's shadowing effect. Copyright © 2011 Elsevier Inc. All rights reserved.
Design and control of the phase current of a brushless dc motor to eliminate cogging torque
NASA Astrophysics Data System (ADS)
Jang, G. H.; Lee, C. J.
2006-04-01
This paper presents a design and control method of the phase current to reduce the torque ripple of a brushless dc (BLDC) motor by eliminating cogging torque. The cogging torque is the main source of torque ripple and consequently of speed error, and it is also the excitation source to generate the vibration and noise of a motor. This research proposes a modified current wave form, which is composed of main and auxiliary currents. The former is the conventional current to generate the commutating torque. The latter generates the torque with the same magnitude and opposite sign of the corresponding cogging torque at the given position in order to eliminate the cogging torque. Time-stepping finite element method simulation considering pulse-width-modulation switching method has been performed to verify the effectiveness of the proposed method, and it shows that this proposed method reduces torque ripple by 36%. A digital-signal-processor-based controller is also developed to implement the proposed method, and it shows that this proposed method reduces the speed ripple significantly.
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep
2017-01-01
Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
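The intervals' method amounts to turning each finite element model's stress distribution into a fixed-length vector of area percentages per stress interval, which can then enter any multivariate analysis. The sketch below does exactly that for synthetic per-element stresses and element areas, followed by a simple PCA via SVD; the interval edges and data are invented for illustration.

```python
import numpy as np

def interval_variables(stress, area, edges):
    """Percentage of total area falling in each stress interval."""
    hist, _ = np.histogram(stress, bins=edges, weights=area)
    return 100.0 * hist / area.sum()

rng = np.random.default_rng(2)
edges = np.linspace(0.0, 30.0, 11)             # 10 stress intervals (arbitrary units)

# Synthetic "models": each has per-element stresses and element areas.
models = []
for shift in (5.0, 5.5, 12.0, 12.5):           # two loose groups of specimens
    stress = rng.gamma(shape=3.0, scale=shift / 3.0, size=500)
    area = rng.uniform(0.5, 1.5, size=500)
    models.append(interval_variables(stress, area, edges))

X = np.vstack(models)
Xc = X - X.mean(axis=0)                        # centre before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                                 # principal component scores
print(np.round(scores[:, :2], 2))              # first PC should separate the two groups
```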
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method for choosing the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
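For readers who want to try total-variation restoration without implementing the Bregman splitting themselves, scikit-image ships a split-Bregman TV denoiser. The snippet below is only that library baseline on a standard test image; it does not include the non-local regularization or the adaptive parameter choice proposed in the paper, and the weight value is an arbitrary starting point.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_bregman

rng = np.random.default_rng(3)
clean = img_as_float(data.camera())
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0.0, 1.0)

# For denoise_tv_bregman, a larger weight means less smoothing (stronger data fidelity).
restored = denoise_tv_bregman(noisy, weight=5.0)

print("noisy RMSE:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("restored RMSE:", np.sqrt(np.mean((restored - clean) ** 2)))
```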
Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2017-03-01
Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described by the literature, one common issue is the interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach, commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal difference to noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps, which were not implemented on the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.
A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors
NASA Astrophysics Data System (ADS)
Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian
2017-09-01
A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. The method relies on the assumption that, for each band, ground objects with similar reflectance will form uniform regions once a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method, and the results show that stripes in the scenes are well corrected without any significant information loss, with a non-uniformity of less than 0.5%. In addition, the proposed method is compared to two other standard methods, evaluated in terms of adaptability to various scenes, non-uniformity, roughness and spectral fidelity. The proposed method shows strong adaptability, high accuracy and efficiency.
Accelerating separable footprint (SF) forward and back projection on GPU
NASA Astrophysics Data System (ADS)
Xie, Xiaobin; McGaffin, Madison G.; Long, Yong; Fessler, Jeffrey A.; Wen, Minhua; Lin, James
2017-03-01
Statistical image reconstruction (SIR) methods for X-ray CT can improve image quality and reduce radiation dosages over conventional reconstruction methods, such as filtered back projection (FBP). However, SIR methods require much longer computation time. The separable footprint (SF) forward and back projection technique simplifies the calculation of intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate the SF forward and back projection on GPU with NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells. For the back projection, we parallelize over all 3D image voxels. The simulation results show that the proposed method is faster than the acceleration method of the SF projectors proposed by Wu and Fessler [13]. We further accelerate the proposed method using multiple GPUs. The results show that the computation time is reduced approximately in proportion to the number of GPUs.
Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.
Yang, Yimin; Wu, Q M Jonathan
2016-11-01
The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification. It offers competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-node architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally work for only one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning and 2) experimental results on ten image datasets and 16 classification datasets show that, compared to other conventional feature learning methods, the proposed ML-ELM with subnetwork nodes performs competitively or much better.
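The basic single-hidden-layer ELM that the multilayer variant above builds on is only a few lines: random input weights, a nonlinear hidden layer, and a least-squares output layer. The sketch below shows that baseline on synthetic data; the subnetwork-node architecture of ML-ELM is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_train(X, T, n_hidden=100):
    """Random hidden layer + least-squares output weights (basic ELM)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Two-class toy problem: two Gaussian blobs, one-hot targets.
X0 = rng.standard_normal((200, 2)) + [2, 2]
X1 = rng.standard_normal((200, 2)) - [2, 2]
X = np.vstack([X0, X1])
T = np.zeros((400, 2))
T[:200, 0] = 1
T[200:, 1] = 1

W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print("training accuracy:", (pred == T.argmax(axis=1)).mean())
```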
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in real space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.
Kawaguchi, Atsushi; Yamashita, Fumio
2017-10-01
This article proposes a procedure for describing the relationship between high-dimensional data sets, such as multimodal brain images and genetic data. We propose a supervised technique that incorporates the clinical outcome to determine a score, which is a linear combination of variables with hierarchical structure across modalities. This approach is expected to yield interpretable and predictive scores. The proposed method was applied to a study of Alzheimer's disease (AD). We propose a diagnostic method for AD that uses whole-brain magnetic resonance imaging (MRI) and positron emission tomography (PET); we select effective brain regions for the diagnostic probability and investigate the genome-wide association with those regions using single nucleotide polymorphisms (SNPs). The two-step dimension reduction method, which we previously introduced, was considered applicable to such a study and allows us to partially incorporate the proposed method. We show that the proposed method offers classification functions with feasibility and reasonable prediction accuracy based on receiver operating characteristic (ROC) analysis, and identifies reasonable regions of the brain and genome. Our simulation study based on a synthetic structured data set showed that the proposed method outperformed the original method and provided the characteristic for the supervised feature. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the features and account for neighbourhood context. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at area level and at object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large-size LiDAR data.
Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung
2010-06-01
Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
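The difference-of-Gaussian filtering at the heart of the feature extraction step is straightforward to reproduce with SciPy. The sketch below computes a DOG response and a crude binary code for a normalized iris-texture patch; the sigma pair, the fuzzy membership function, and the code-binarization details from the paper are not included, and all values shown are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(patch, sigma_fine=1.0, sigma_coarse=2.0):
    """Band-pass response: difference of two Gaussian-blurred versions of the patch."""
    return gaussian_filter(patch, sigma_fine) - gaussian_filter(patch, sigma_coarse)

def binary_code(patch, **kwargs):
    """Crude iris-code analogue: sign of the DOG response."""
    return (dog_response(patch, **kwargs) > 0).astype(np.uint8)

rng = np.random.default_rng(5)
patch = rng.random((64, 256))                  # stand-in for a normalized iris image
code = binary_code(patch)
print(code.shape, code.mean())                 # roughly half the bits are 1
```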
A modified form of conjugate gradient method for unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa
2016-06-01
Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, owing to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained Optimization, Aus. J. Bas. Appl. Sci. 5(2011) 947-951). We show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on standard test problems, compared to other existing CG methods.
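For context, the skeleton shared by CG methods of this family is shown below on a convex quadratic with an exact line search. The conjugacy coefficient used here is the classical Fletcher-Reeves formula as a stand-in, since the specific coefficient of the proposed method is not reproduced in this abstract.

```python
import numpy as np

def cg_quadratic(A, b, x0, tol=1e-8, max_iter=200):
    """Nonlinear-CG skeleton applied to f(x) = 0.5 x^T A x - b^T x (exact line search)."""
    x = x0.copy()
    g = A @ x - b                              # gradient of the quadratic
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)         # exact line search for a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)       # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

rng = np.random.default_rng(6)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)                  # symmetric positive definite
b = rng.standard_normal(20)
x = cg_quadratic(A, b, np.zeros(20))
print("residual norm:", np.linalg.norm(A @ x - b))
```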
Local statistics adaptive entropy coding method for the improvement of H.26L VLC coding
NASA Astrophysics Data System (ADS)
Yoo, Kook-yeol; Kim, Jong D.; Choi, Byung-Sun; Lee, Yung Lyul
2000-05-01
In this paper, we propose an adaptive entropy coding method to improve the VLC coding efficiency of the H.26L TML-1 codec. First, we show that the VLC coding presented in TML-1 does not satisfy the sibling property of entropy coding. Then, we modify the coding method into a local-statistics-adaptive one that satisfies the property. The proposed method, based on local symbol statistics, dynamically changes the mapping relationship between symbols and bit patterns in the VLC table according to the sibling property. Note that the codewords in the VLC table of the TML-1 codec are not changed. Since the changed mapping relationship is also derived at the decoder side from the decoded symbols, the proposed VLC coding method does not require any overhead information. The simulation results show that the proposed method gives about 30% and 37% reductions in average bit rate for MB type and CBP information, respectively.
A robust two-way semi-linear model for normalization of cDNA microarray data
Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento
2005-01-01
Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
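A bare-bones version of the LOWESS normalization used as the baseline above can be written with statsmodels: fit a LOWESS curve to the M-versus-A scatter of a two-channel array and subtract it. The TW-SLM method itself is not implemented here, and the simulated intensities and smoothing fraction are purely illustrative.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
n_genes = 2000
true_expr = rng.lognormal(mean=7, sigma=1, size=n_genes)
# Simulated red/green intensities with an intensity-dependent dye bias.
red = true_expr * rng.lognormal(0.0, 0.2, n_genes)
green = true_expr * rng.lognormal(0.0, 0.2, n_genes) * (1.0 + 0.3 * np.log10(true_expr) / 5)

M = np.log2(red / green)                       # log-ratio
A = 0.5 * np.log2(red * green)                 # average log-intensity

fit = lowess(M, A, frac=0.4)                   # returns (A, fitted M) pairs sorted by A
M_trend = np.interp(A, fit[:, 0], fit[:, 1])   # evaluate the trend at each gene
M_normalized = M - M_trend                     # LOWESS-normalized log-ratios

print("mean |M| before:", np.abs(M).mean(), " after:", np.abs(M_normalized).mean())
```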
An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.
Singh, Parth Raj; Wang, Yide; Chargé, Pascal
2017-03-30
In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting a regression model to the relation between the distance from an inspection point (raised to a power) and the number of samples inside a ball of that radius, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
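As a point of comparison for the approach described above, the widely used Levina-Bickel maximum-likelihood estimator of local intrinsic dimension around an inspection point is sketched below; it uses nearest-neighbour distances only and is not the regression-based estimator proposed in the paper.

```python
import numpy as np

def local_mle_dimension(data, point, k=20):
    """Levina-Bickel MLE of intrinsic dimension from the k nearest neighbours of `point`."""
    dists = np.sort(np.linalg.norm(data - point, axis=1))
    dists = dists[dists > 0][:k]               # drop the zero distance to the point itself
    log_ratios = np.log(dists[-1] / dists[:-1])
    return (k - 1) / np.sum(log_ratios)

rng = np.random.default_rng(8)
# 3-D linear manifold embedded in 10-D space, plus small isotropic noise.
basis = rng.standard_normal((3, 10))
data = rng.standard_normal((2000, 3)) @ basis + 0.01 * rng.standard_normal((2000, 10))
print(local_mle_dimension(data, data[0], k=25))   # typically close to 3
```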
Deformable image registration for tissues with large displacements
Huang, Xishi; Ren, Jing; Green, Mark
2017-01-01
Abstract. Image registration for internal organs and soft tissues is considered extremely challenging due to organ shifts and tissue deformation caused by patients’ movements such as respiration and repositioning. In our previous work, we proposed a fast registration method for deformable tissues with small rotations. We extend our method to deformable registration of soft tissues with large displacements. We analyzed the deformation field of the liver by decomposing the deformation into shift, rotation, and pure deformation components and concluded that in many clinical cases, the liver deformation contains large rotations and small deformations. This analysis justified the use of linear elastic theory in our image registration method. We also proposed a region-based neuro-fuzzy transformation model to seamlessly stitch together local affine and local rigid models in different regions. We have performed the experiments on a liver MRI image set and showed the effectiveness of the proposed registration method. We have also compared the performance of the proposed method with the previous method on tissues with large rotations and showed that the proposed method outperformed the previous method when dealing with the combination of pure deformation and large rotations. Validation results show that we can achieve a target registration error of 1.87±0.87 mm and an average centerline distance error of 1.28±0.78 mm. The proposed technique has the potential to significantly improve registration capabilities and the quality of intraoperative image guidance. To the best of our knowledge, this is the first time that the complex displacement of the liver is explicitly separated into local pure deformation and rigid motion. PMID:28149924
Airplane detection in remote sensing images using convolutional neural networks
NASA Astrophysics Data System (ADS)
Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei
2018-03-01
Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper, we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show greater advantages than traditional methods, and we give an explanation of why this is the case. To improve airplane detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one class, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method proposed by Dai and Yuan to linear equality constrained optimization. It can be applied to large linear equality constrained problems owing to its lower storage requirement. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given, which show the efficiency of the method.
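The feasibility-preserving idea, keeping every search direction in the null space of the constraint matrix so that Ax = b is never violated, can be illustrated with a plain projected gradient step. The sketch below uses that projection on a toy quadratic; it is not the Dai-Yuan-based method of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 8, 3
A = rng.standard_normal((m, n))                # constraints A x = b
b = rng.standard_normal(m)
c = rng.standard_normal(n)                     # objective f(x) = 0.5 ||x - c||^2

P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto null(A)

x = np.linalg.lstsq(A, b, rcond=None)[0]       # any feasible starting point
for _ in range(500):
    grad = x - c                               # gradient of the quadratic objective
    d = -P @ grad                              # feasible descent direction
    x = x + 0.2 * d                            # fixed step for simplicity

print("constraint residual:", np.linalg.norm(A @ x - b))    # stays near zero throughout
print("projected gradient norm:", np.linalg.norm(P @ (x - c)))
```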
NASA Astrophysics Data System (ADS)
Patra, Rusha; Dutta, Pranab K.
2015-07-01
Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, a mean square error of 0.638×10^-3, and an object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms, which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with a continuous-wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Different from classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view, the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple, multiscale case, even when the different components are close to each other.
Objectification of perceptual image quality for mobile video
NASA Astrophysics Data System (ADS)
Lee, Seon-Oh; Sim, Dong-Gyu
2011-06-01
This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
Talker Localization Based on Interference between Transmitted and Reflected Audible Sound
NASA Astrophysics Data System (ADS)
Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji
In many engineering fields, the distance to targets is very important. General distance measurement methods use the time delay between transmitted and reflected waves, but short distances are difficult to estimate this way. On the other hand, methods using phase interference to measure short distances are known in the field of microwave radar. We have therefore proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between a microphone and a target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on distance estimation using phase interference. We extend the distance estimation method to two microphones (a microphone array) in order to estimate the talker position. The proposed method estimates the talker position by measuring the distance and direction between the target and the microphone array. In addition, the talker's speech is regarded as noise in the proposed method. Therefore, we also propose a combination of the proposed method and the CSP (Cross-power Spectrum Phase analysis) method, which is one of the DOA (Direction Of Arrival) estimation methods. We evaluated the performance of talker localization in real environments. The experimental results show the effectiveness of the proposed method.
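The CSP (cross-power spectrum phase) step that the proposed method is combined with is the same computation as GCC-PHAT. The sketch below estimates the time difference of arrival between two microphone signals and converts it to a direction, assuming a free-field model, a hypothetical 10 cm microphone spacing and a 16 kHz sampling rate; it does not implement the phase-interference distance estimation itself.

```python
import numpy as np

FS, MIC_DIST, SOUND_SPEED = 16000, 0.10, 343.0

def gcc_phat_tdoa(x1, x2, fs=FS):
    """Time difference of arrival via the phase of the cross-power spectrum (CSP/GCC-PHAT)."""
    n = 2 * len(x1)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))   # centre zero lag
    return (np.argmax(cc) - n // 2) / fs

rng = np.random.default_rng(10)
speech = rng.standard_normal(FS)                        # white-noise stand-in for speech
delay = 3                                               # inter-microphone delay in samples
x1, x2 = speech[:-delay], speech[delay:]                # mic 2 receives the signal earlier

tdoa = gcc_phat_tdoa(x1, x2)
angle = np.degrees(np.arcsin(np.clip(tdoa * SOUND_SPEED / MIC_DIST, -1, 1)))
print("TDOA (us):", tdoa * 1e6, " direction (deg):", round(angle, 1))
```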
Q-Method Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.
2012-01-01
A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
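Davenport's q-method itself is compact enough to show: build the K matrix from weighted vector observations and take the eigenvector with the largest eigenvalue as the attitude quaternion. The NumPy sketch below covers only that attitude step, not the extended Kalman filter integration described in the abstract; the test quaternion and reference vectors are arbitrary.

```python
import numpy as np

def quat_to_dcm(q):
    """Attitude matrix for q = [q1, q2, q3, q4] (scalar last), mapping reference to body."""
    qv, q4 = q[:3], q[3]
    qx = np.array([[0, -qv[2], qv[1]], [qv[2], 0, -qv[0]], [-qv[1], qv[0], 0]])
    return (q4**2 - qv @ qv) * np.eye(3) + 2 * np.outer(qv, qv) - 2 * q4 * qx

def q_method(body_vecs, ref_vecs, weights):
    """Davenport's q-method: quaternion maximizing Wahba's gain function."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = np.trace(B)
    eigvals, eigvecs = np.linalg.eigh(K)
    return eigvecs[:, -1]                       # eigenvector of the largest eigenvalue

# Synthetic check: rotate known reference vectors by a known quaternion.
q_true = np.array([0.2, -0.1, 0.3, 0.9]); q_true /= np.linalg.norm(q_true)
A_true = quat_to_dcm(q_true)
ref = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
body = [A_true @ r for r in ref]
q_est = q_method(body, ref, weights=[1.0, 1.0, 1.0])
print(np.allclose(quat_to_dcm(q_est), A_true, atol=1e-8))   # True (up to the sign of q)
```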
Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it efficiently combines evidence from different sensors. However, when the evidence is highly conflicting, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application to fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to existing methods. PMID:26797611
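The combination step underlying this kind of sensor fusion is Dempster's rule. A minimal implementation for mass functions over a small frame of discernment is sketched below; the reliability weighting via evidence distance and belief entropy from the paper is not included, and the fault labels and mass values are hypothetical.

```python
from itertools import product

FRAME = frozenset({"F1", "F2", "F3"})          # hypothetical fault hypotheses

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions (keys are frozensets)."""
    combined, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                # mass assigned to the empty set
    return {A: w / (1.0 - conflict) for A, w in combined.items()}, conflict

# Two sensor reports expressed as basic probability assignments.
m_sensor1 = {frozenset({"F1"}): 0.6, frozenset({"F1", "F2"}): 0.3, FRAME: 0.1}
m_sensor2 = {frozenset({"F1"}): 0.5, frozenset({"F2"}): 0.3, FRAME: 0.2}

fused, k = dempster_combine(m_sensor1, m_sensor2)
print("conflict:", round(k, 3))
for A in sorted(fused, key=fused.get, reverse=True):
    print(set(A), round(fused[A], 3))
```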
NASA Astrophysics Data System (ADS)
Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru
EEG is characterized by unique, individual characteristics, yet little research has taken these individual characteristics into account when analyzing EEG signals. The EEG often has frequency components that describe most of its significant characteristics, and the analyzed frequency components differ in importance. We consider that this difference in importance reflects the individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector using a latency structure model in individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, that treats personal error as the individual characteristics. Real-coded genetic algorithms (RGA) are used to specify the personal error, which is an unknown parameter. Moreover, we propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated using a realistic simulation and applied to real EEG data. The results of our experiment show the effectiveness of the proposed method.
Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.
Ze Wang; Chi Man Wong; Feng Wan
2017-07-01
An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both in its native form and when combined with a denoising step.
NASA Astrophysics Data System (ADS)
Al-Temeemy, Ali A.
2018-03-01
A descriptor is proposed for use in domiciliary healthcare monitoring systems. The descriptor is produced using chromatic methodology to extract robust features from the monitoring system's images. It has superior discrimination capabilities, is robust to events that normally disturb monitoring systems, and requires less computational time and storage space to achieve recognition. A method of human region segmentation is also used with this descriptor. The performance of the proposed descriptor was evaluated using experimental data sets obtained through a series of experiments performed in the Centre for Intelligent Monitoring Systems, University of Liverpool. The evaluation results show high recognition performance for the proposed descriptor in comparison to traditional descriptors such as moment invariants. The results also show the effectiveness of the proposed segmentation method with regard to distortion effects associated with domiciliary healthcare systems.
A phase match based frequency estimation method for sinusoidal signals
NASA Astrophysics Data System (ADS)
Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao
2015-04-01
Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase-match based frequency estimation method is proposed. The frequency estimate is obtained by exploiting the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that it achieves better frequency estimation precision than Pisarenko harmonic decomposition, modified covariance, and TSA, which contributes to effectively improving the precision of LFMCW radars.
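The linear-prediction property exploited above says that a clean sinusoid obeys x[n] = 2 cos(ω) x[n-1] - x[n-2]. The sketch below turns that relation into a least-squares frequency estimate; it is only this single building block, not the full phase-match method, and it becomes biased at low SNR.

```python
import numpy as np

def lp_frequency_estimate(x, fs):
    """Estimate sinusoid frequency from the relation x[n] = a*x[n-1] - x[n-2], a = 2*cos(w)."""
    x0, x1, x2 = x[2:], x[1:-1], x[:-2]
    a = np.sum(x1 * (x0 + x2)) / np.sum(x1 * x1)   # least-squares estimate of a
    a = np.clip(a, -2.0, 2.0)
    omega = np.arccos(a / 2.0)
    return omega * fs / (2 * np.pi)

rng = np.random.default_rng(11)
fs, f_true = 1000.0, 123.4
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * f_true * t + 0.7) + 0.01 * rng.standard_normal(t.size)
print(lp_frequency_estimate(x, fs))                # close to 123.4 Hz
```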
n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.
Wu, Yue; Hua, Zhongyun; Zhou, Yicong
2016-11-01
Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method for n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates nD Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
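For orientation, the classical 2-D Arnold cat map that the nD construction generalizes is shown below as a permutation of pixel coordinates; the Laplace-expansion generator for higher dimensions and the extra control parameters from the paper are not reproduced here.

```python
import numpy as np

def arnold_cat_step(image):
    """One iteration of the 2-D cat map (x, y) -> (x + y, x + 2y) mod N on a square image."""
    n = image.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    new_x = (x + y) % n
    new_y = (x + 2 * y) % n
    out = np.empty_like(image)
    out[new_x, new_y] = image[x, y]
    return out

img = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)
scrambled = arnold_cat_step(arnold_cat_step(img))   # two iterations
# The map is a bijection on the pixel grid, so the output is a permutation of the input values.
print(np.array_equal(np.sort(scrambled, axis=None), np.sort(img, axis=None)))  # True
```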
Tchebichef moment based restoration of Gaussian blurred images.
Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C
2016-11-10
With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function is proposed using a fitting optimization method, and a unitary linear regression equation is then established. The aquifer parameters can be obtained by solving the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
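The underlying model is the Theis solution s = (Q / 4πT) W(u) with u = r²S / (4Tt), where W is the well function (the exponential integral). A short curve-fitting sketch with SciPy is given below on synthetic drawdown data; it is a generic nonlinear fit, not the regression shortcut proposed in the paper, and all numbers are illustrative.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

Q, r = 0.01, 50.0                      # pumping rate (m^3/s) and observation distance (m)

def theis_drawdown(t, T, S):
    """Theis solution: s = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t), W = exponential integral."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic pumping-test data from "true" aquifer parameters plus measurement noise.
rng = np.random.default_rng(12)
T_true, S_true = 5e-3, 2e-4            # transmissivity (m^2/s), storativity (-)
t = np.logspace(1, 5, 40)              # seconds since pumping started
s_obs = theis_drawdown(t, T_true, S_true) + 0.002 * rng.standard_normal(t.size)

(T_fit, S_fit), _ = curve_fit(theis_drawdown, t, s_obs, p0=[1e-3, 1e-4],
                              bounds=([1e-6, 1e-7], [1.0, 1.0]))
print("T =", T_fit, "m^2/s,  S =", S_fit)
```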
A Coarse-Alignment Method Based on the Optimal-REQUEST Algorithm
Zhu, Yongyun
2018-01-01
In this paper, we propose a coarse-alignment method for strapdown inertial navigation systems based on attitude determination. The observation vectors, which can be obtained from inertial sensors, usually contain various types of noise, which affects the convergence rate and the accuracy of the coarse alignment. Given this drawback, we studied an attitude-determination method named optimal-REQUEST, which is an optimal method for attitude determination based on observation vectors. Compared to the traditional attitude-determination method, the filtering gain of the proposed method is tuned autonomously; thus, the convergence rate of the attitude determination is faster than in the traditional method. Within the proposed method, we developed an iterative method for determining the attitude quaternion. We carried out simulation and turntable tests to validate the proposed method's performance. The experimental results show that the convergence rate of the proposed optimal-REQUEST algorithm is faster and that the coarse alignment's stability is higher. In summary, the proposed method has high applicability to practical systems. PMID:29337895
NASA Astrophysics Data System (ADS)
Rodionov, A. A.; Turchin, V. I.
2017-06-01
We propose a new method of signal processing in antenna arrays, which we call Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method ensures a variance of the estimated arrival angle of the plane wave that is close to the Cramér-Rao lower bound and is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be used efficiently to estimate the time dependence of the useful signal.
Decision feedback equalizer for holographic data storage.
Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo
2018-05-20
Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
Ramanujam, Nedunchelian; Kaliappan, Manivannan
2016-01-01
Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they still have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp approach with Naïve Bayesian classification for multidocument text summarization. The timestamp gives the summary an ordered look, which yields a coherent-looking summary, and it extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate scores for words to obtain word frequencies. Linguistic quality is estimated in terms of readability and comprehensibility. To show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm and the results are examined against the proposed method. The results show that the proposed method requires less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
NASA Astrophysics Data System (ADS)
Lin, Hsien-I.; Nguyen, Xuan-Anh
2017-05-01
To operate a redundant manipulator that accomplishes end-effector trajectory planning and simultaneously controls its arm gesture in online programming, incorporating human motion is a useful and flexible option. This paper focuses on a manipulative instrument that can simultaneously control its arm gesture and end-effector trajectory via human teleoperation. The instrument comprises two parts: first, for human motion capture and data processing, marker systems are proposed to capture the human gesture; second, the manipulator kinematics control is implemented by an augmented multi-tasking method together with forward-and-backward-reaching inverse kinematics. In particular, the local-solution and divergence problems of the multi-tasking method are resolved by the proposed augmented multi-tasking method. Computer simulations and experiments with a 7-DOF (degree-of-freedom) redundant manipulator were used to validate the proposed method. Comparisons among the single-tasking, original multi-tasking, and augmented multi-tasking algorithms were performed, and the results showed that the proposed augmented method had good end-effector position accuracy and the gesture most similar to the human gesture. Additionally, the experimental results showed that the proposed instrument operated online.
Omisore, Olatunji Mumini; Han, Shipeng; Ren, Lingxue; Zhang, Nannan; Ivanov, Kamen; Elazab, Ahmed; Wang, Lei
2017-08-01
Snake-like robot is an emerging form of serial-link manipulator with the morphologic design of biological snakes. The redundant robot can be used to assist medical experts in accessing internal organs with minimal or no invasion. Several snake-like robotic designs have been proposed for minimally invasive surgery; however, the few that were developed are yet to be fully explored for clinical procedures, owing to a lack of capability for full-fledged spatial navigation. In rare cases where such snake-like designs are spatially flexible, there exists no inverse kinematics (IK) solution with both precise control and fast response. In this study, we propose a non-iterative geometric method for solving the IK of the lead module of a snake-like robot designed for therapy or ablation of abdominal tumors. The proposed method aims to provide an accurate and fast IK solution for given target points in the robot's workspace. n-1 virtual points (VPs) are geometrically computed and set as coordinates of intermediary joints in an n-link module. Suitable joint angles that can place the end-effector at given target points are then computed by vectorizing the coordinates of the VPs, in addition to the coordinates of the base point, the target point, and the tip of the first link in its default pose. The proposed method is applied to solve the IK of two-link and redundant four-link modules. Both modules were simulated with the Robotics Toolbox in Matlab 8.3 (R2014a). Implementation results show that the proposed method can solve the IK of the spatially flexible robot with minimal error values. Furthermore, analyses of results from both modules show that the geometric method can reach 99.21 and 88.61% of points in their workspaces, respectively, with an error threshold of 1 mm. The proposed method is non-iterative and has a maximum execution time of 0.009 s. This paper focuses on solving the IK problem of a spatially flexible robot which is part of a developmental project for abdominal surgery through minimal invasion or natural orifices. The study showed that the proposed geometric method can resolve the IK of the snake-like robot with negligible error offset. Evaluation against well-known methods shows that the proposed method can reach several points in the robot's workspace with high accuracy and shorter computational time, simultaneously.
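The virtual-point construction above is specific to the authors' snake-like design. As a minimal, hedged illustration of a non-iterative geometric IK solution, the sketch below solves a planar two-link arm in closed form; the link lengths and target point are hypothetical.

    import math

    def two_link_ik(x, y, l1=1.0, l2=1.0):
        """Closed-form (non-iterative) planar two-link inverse kinematics (illustrative sketch)."""
        r2 = x * x + y * y
        c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
        if abs(c2) > 1.0:
            raise ValueError("target out of reach")
        theta2 = math.acos(c2)                            # elbow-down solution
        k1 = l1 + l2 * math.cos(theta2)
        k2 = l2 * math.sin(theta2)
        theta1 = math.atan2(y, x) - math.atan2(k2, k1)
        return theta1, theta2

    # Example: place the end-effector at (1.2, 0.8)
    print(two_link_ik(1.2, 0.8))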
Novel inter-crystal scattering event identification method for PET detectors
NASA Astrophysics Data System (ADS)
Lee, Min Sun; Kang, Seung Kwan; Lee, Jae Sung
2018-06-01
Here, we propose a novel method to identify inter-crystal scattering (ICS) events from a PET detector that is even applicable to light-sharing designs. In the proposed method, the detector observation is considered as a linear problem, and ICS events are identified by solving this problem. Two ICS identification methods were suggested for solving the linear problem: pseudoinverse matrix calculation and convex constrained optimization. The proposed method was evaluated based on simulation and experimental studies. For the simulation study, an 8 × 8 photosensor was coupled to 8 × 8, 10 × 10 and 12 × 12 crystal arrays to simulate a one-to-one coupling detector and two light-sharing detectors, respectively. The identification rate (the rate at which the identified ICS events correctly include the true first-interaction position) and the energy linearity were evaluated for the proposed ICS identification methods. For the experimental study, a digital silicon photomultiplier was coupled with 8 × 8 and 10 × 10 arrays of 3 × 3 × 20 mm3 LGSO crystals to construct the one-to-one coupling and light-sharing detectors, respectively. Intrinsic spatial resolutions were measured for the two detector types. The proposed ICS identification methods were implemented, and intrinsic resolutions were compared with and without ICS recovery. As a result, the simulation study showed that the proposed convex optimization method yielded robust energy estimation and high ICS identification rates of 0.93 and 0.87 for the one-to-one and light-sharing detectors, respectively. The experimental study showed a resolution improvement after recovering the identified ICS events into the first interaction position. The average intrinsic spatial resolutions for the one-to-one and light-sharing detectors were 1.95 and 2.25 mm FWHM without ICS recovery, respectively. These values improved to 1.72 and 1.83 mm after ICS recovery, respectively. In conclusion, our proposed method showed good ICS identification in both one-to-one coupling and light-sharing detectors. We experimentally validated that the ICS recovery based on the proposed identification method led to an improved resolution.
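A minimal sketch of the linear-problem view described above, assuming a known light-sharing (system) matrix A that maps per-crystal energy deposits to photosensor readings; the crystal energies are recovered with a pseudoinverse and, alternatively, with a non-negativity constraint as a stand-in for the convex constrained optimization. The matrix, sizes, and measurement below are synthetic placeholders, not the authors' detector model.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_pixels, n_crystals = 64, 100                 # e.g. 8 x 8 sensor, 10 x 10 crystals (hypothetical)
    A = np.abs(rng.normal(size=(n_pixels, n_crystals)))   # assumed light-sharing system matrix
    A /= A.sum(axis=0, keepdims=True)

    e_true = np.zeros(n_crystals)
    e_true[[12, 13]] = [340.0, 171.0]              # an ICS event splitting 511 keV over two crystals
    m = A @ e_true + rng.normal(scale=0.5, size=n_pixels)  # noisy sensor measurement

    e_pinv = np.linalg.pinv(A) @ m                 # unconstrained least-squares solution
    e_nnls, _ = nnls(A, m)                         # non-negative least-squares solution

    top2 = np.argsort(e_nnls)[-2:]                 # candidate interaction crystals for this event
    print(sorted(top2.tolist()), e_nnls[top2])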
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. However, rotational positioning of cells can occur, leading to discordant spot counts. To address the counting error caused by overlapping spots, this study proposes a Gaussian Mixture Model (GMM) based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features of this classification method. With a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots which cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
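A minimal sketch of the feature idea described above, assuming each spot image is reduced to a set of 2-D points (e.g., thresholded pixel coordinates): GMMs with increasing component counts are fitted, their AIC/BIC values are used as global features, and a Random Forest classifies the spot count. The toy data, component range, and labels are placeholders, not the FISH-IS pipeline.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.ensemble import RandomForestClassifier

    def gmm_ic_features(points, max_components=3):
        """AIC/BIC of GMMs with 1..max_components components, used as global image features."""
        feats = []
        for k in range(1, max_components + 1):
            gmm = GaussianMixture(n_components=k, random_state=0).fit(points)
            feats += [gmm.aic(points), gmm.bic(points)]
        return feats

    rng = np.random.default_rng(1)
    X, y = [], []
    for label in (1, 2):                           # one spot vs. two overlapping spots (toy data)
        for _ in range(50):
            centers = rng.uniform(0, 10, size=(label, 2))
            pts = np.vstack([c + rng.normal(scale=0.7, size=(40, 2)) for c in centers])
            X.append(gmm_ic_features(pts))
            y.append(label)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))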
A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images
Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong
2016-01-01
A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles’ in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so the original V-J or HOG + SVM method can be directly applied to achieve fast detection and high accuracy. To address the issue of descending detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which intelligently integrates the V-J and HOG + SVM methods based on their different descending trends of detection speed to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on expanding the current method to detect other transportation modes such as buses, trucks, motorcycles, bicycles, and pedestrians. PMID:27548179
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on solving equations but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relation, the characteristic lines are made to coincide through a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
NASA Astrophysics Data System (ADS)
Tajaddodianfar, Farid; Moheimani, S. O. Reza; Owen, James; Randall, John N.
2018-01-01
A common cause of tip-sample crashes in a Scanning Tunneling Microscope (STM) operating in constant current mode is the poor performance of its feedback control system. We show that there is a direct link between the Local Barrier Height (LBH) and robustness of the feedback control loop. A method known as the "gap modulation method" was proposed in the early STM studies for estimating the LBH. We show that the obtained measurements are affected by controller parameters and propose an alternative method which we prove to produce LBH measurements independent of the controller dynamics. We use the obtained LBH estimate to continuously update the gains of an STM proportional-integral (PI) controller and show that while tuning the PI gains, the closed-loop system tolerates larger variations of LBH without experiencing instability. We report experimental results, conducted on two STM scanners, to establish the efficiency of the proposed PI tuning approach. Improved feedback stability is believed to help in avoiding the tip/sample crash in STMs.
Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K
2016-11-01
In magnetic resonance (MR), hardware limitation, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desired but has been very challenging in medical image processing. Super resolution reconstruction based on sparse representation and overcomplete dictionary has been lately employed to address this problem; however, these methods require extra training sets, which may not be always available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and overcomplete dictionary that is trained from in-plane high resolution slices to upsample in the out-of-plane dimensions. The proposed method, therefore, does not require extra training sets. Abundant experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. Therefore, the proposed approach can be efficiently implemented and routinely used to upsample MR images in the out-of-planes views for radiologic assessment and postacquisition processing.
Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng
2018-03-01
Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both the feature space and the image plane, and 2) they are distributed sparsely in the spatial space. These observations inspire us to propose a low-rank solution which effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal and external learning methods are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee performance, which simplifies the design of the two learning methods for the solution. Intensive experiments show the proposed solution improves on single learning methods in both qualitative and quantitative assessments. Surprisingly, it shows superior capability on noisy images and outperforms state-of-the-art methods.
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used a technique that encodes reduced-resolution video and restores the original resolution at the decoder to improve coding efficiency. These techniques of VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, they have not been widely used because their limited performance improvements appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method generates low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance over the high-resolution videos that are dominantly consumed today. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC. In particular, VMAF values were significantly improved at low bitrates. Also, in subjective tests, the method had better visual quality at similar bit rates.
Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan
2015-01-01
To develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray-scale transformation and morphological operations. It has been tested with 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor. The average values of the Jaccard and Dice coefficients are 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations. This is a method of low computational complexity, yet it gives competitive or better results than those of the popular skull stripping methods Brain Surface Extractor and Brain Extraction Tool.
Sidibé, Désiré; Sankar, Shrinivasan; Lemaître, Guillaume; Rastgoo, Mojdeh; Massich, Joan; Cheung, Carol Y; Tan, Gavin S W; Milea, Dan; Lamoureux, Ecosse; Wong, Tien Y; Mériaudeau, Fabrice
2017-02-01
This paper proposes a method for automatic classification of spectral domain OCT data for the identification of patients with retinal diseases such as Diabetic Macular Edema (DME). We address this issue as an anomaly detection problem and propose a method that not only allows the classification of the OCT volume, but also allows the identification of the individual diseased B-scans inside the volume. Our approach is based on modeling the appearance of normal OCT images with a Gaussian Mixture Model (GMM) and detecting abnormal OCT images as outliers. The classification of an OCT volume is based on the number of detected outliers. Experimental results with two different datasets show that the proposed method achieves a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, the experiments show that the proposed method achieves better classification performance than other recently published works. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
RRCRank: a fusion method using rank strategy for residue-residue contact prediction.
Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian
2017-09-02
In structural biology area, protein residue-residue contacts play a crucial role in protein structure prediction. Some researchers have found that the predicted residue-residue contacts could effectively constrain the conformational search space, which is significant for de novo protein structure prediction. In the last few decades, related researchers have developed various methods to predict residue-residue contacts, especially, significant performance has been achieved by using fusion methods in recent years. In this work, a novel fusion method based on rank strategy has been proposed to predict contacts. Unlike the traditional regression or classification strategies, the contact prediction task is regarded as a ranking task. First, two kinds of features are extracted from correlated mutations methods and ensemble machine-learning classifiers, and then the proposed method uses the learning-to-rank algorithm to predict contact probability of each residue pair. First, we perform two benchmark tests for the proposed fusion method (RRCRank) on CASP11 dataset and CASP12 dataset respectively. The test results show that the RRCRank method outperforms other well-developed methods, especially for medium and short range contacts. Second, in order to verify the superiority of ranking strategy, we predict contacts by using the traditional regression and classification strategies based on the same features as ranking strategy. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for three contact types, in particular for long range contacts. Third, the proposed RRCRank has been compared with several state-of-the-art methods in CASP11 and CASP12. The results show that the RRCRank could achieve comparable prediction precisions and is better than three methods in most assessment metrics. The learning-to-rank algorithm is introduced to develop a novel rank-based method for the residue-residue contact prediction of proteins, which achieves state-of-the-art performance based on the extensive assessment.
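The RRCRank features and its specific learning-to-rank algorithm are not reproduced here; as a hedged, generic illustration of the ranking strategy, the sketch below trains a pairwise ranker by classifying the sign of relevance differences between candidate residue pairs. The toy features and labels are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_pairwise_ranker(X, y, n_pairs=2000, seed=0):
        """Generic pairwise learning-to-rank: classify the sign of relevance differences."""
        rng = np.random.default_rng(seed)
        i = rng.integers(0, len(X), n_pairs)
        j = rng.integers(0, len(X), n_pairs)
        keep = y[i] != y[j]                        # keep only pairs with different relevance
        diffs = X[i[keep]] - X[j[keep]]
        labels = (y[i[keep]] > y[j[keep]]).astype(int)
        return LogisticRegression(max_iter=1000).fit(diffs, labels)

    # Toy residue-pair candidates: columns stand in for coevolution and classifier scores
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = contact

    ranker = fit_pairwise_ranker(X, y)
    scores = X @ ranker.coef_.ravel()              # ranking score; higher = more likely in contact
    print("top-10 candidate indices:", np.argsort(scores)[-10:][::-1])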
NASA Astrophysics Data System (ADS)
Tamimi, E.; Ebadi, H.; Kiani, A.
2017-09-01
Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using additional features can improve accuracy. However, adding these features increases the probability of including dependent features, which reduces accuracy. In addition, some parameters must be determined for Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type, and an optimization algorithm is an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as salt-and-pepper results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence of image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, the Kappa coefficient of the proposed method was improved by 6% relative to RF classification. The processing time of the proposed method was relatively low because of the unit of image analysis (image object). These results show the superiority of the proposed method in terms of time and accuracy.
Yang, Jun; Zolotas, Argyrios; Chen, Wen-Hua; Michail, Konstantinos; Li, Shihua
2011-07-01
Robust control of a class of uncertain systems that have disturbances and uncertainties not satisfying "matching" condition is investigated in this paper via a disturbance observer based control (DOBC) approach. In the context of this paper, "matched" disturbances/uncertainties stand for the disturbances/uncertainties entering the system through the same channels as control inputs. By properly designing a disturbance compensation gain, a novel composite controller is proposed to counteract the "mismatched" lumped disturbances from the output channels. The proposed method significantly extends the applicability of the DOBC methods. Rigorous stability analysis of the closed-loop system with the proposed method is established under mild assumptions. The proposed method is applied to a nonlinear MAGnetic LEViation (MAGLEV) suspension system. Simulation shows that compared to the widely used integral control method, the proposed method provides significantly improved disturbance rejection and robustness against load variation. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Distributed Combinatorial Optimization Using Privacy on Mobile Phones
NASA Astrophysics Data System (ADS)
Ono, Satoshi; Katayama, Kimihiro; Nakayama, Shigeru
This paper proposes a method for distributed combinatorial optimization which uses mobile phones as computers. In the proposed method, an ordinary computer generates solution candidates and mobile phones evaluate them by referring to private information and preferences. Users therefore do not have to send their private data to any other computer and do not have to refrain from inputting their preferences; they can thus obtain satisfactory solutions. Experimental results showed that the proposed method solved room assignment problems without sending users' private information to a server.
NASA Astrophysics Data System (ADS)
Li, Yong; Yang, Aiying; Guo, Peng; Qiao, Yaojun; Lu, Yueming
2018-01-01
We propose an accurate and nondata-aided chromatic dispersion (CD) estimation method involving the use of the cross-correlation function of two heterodyne detection signals for coherent optical communication systems. Simulations are implemented to verify the feasibility of the proposed method for 28-GBaud coherent systems with different modulation formats. The results show that the proposed method has high accuracy for measuring CD and has good robustness against laser phase noise, amplified spontaneous emission noise, and nonlinear impairments.
Monitoring biodiesel reactions of soybean oil and sunflower oil using ultrasonic parameters
NASA Astrophysics Data System (ADS)
Figueiredo, M. K. K.; Silva, C. E. R.; Alvarenga, A. V.; Costa-Félix, R. P. B.
2015-01-01
Biodiesel is an innovation that attempts to substitute diesel oil with biomass. The aim of this paper is to show the development of a real-time method to monitor transesterification reactions by using low-power ultrasound and pulse/echo techniques. The results showed that it is possible to identify different events during the transesterification process by using the proposed parameters, showing that the proposed method is a feasible way to monitor the reactions of biodiesel during its fabrication, in real time, and with relatively low-cost equipment.
Character Recognition Method by Time-Frequency Analyses Using Writing Pressure
NASA Astrophysics Data System (ADS)
Watanabe, Tatsuhito; Katsura, Seiichiro
With the development of information and communication technology, personal verification is becoming more and more important. In the coming ubiquitous society, the development of terminals handling personal information requires personal verification technology. The signature is one personal verification method; however, the number of characters in a signature is limited, and therefore a false signature is easy to produce. Thus, personal identification from handwriting alone is difficult. This paper proposes a “haptic pen” that extracts the writing pressure and presents a character recognition method based on time-frequency analyses. Although the shapes of characters written by different writers are similar, differences appear in the time-frequency domain. As a result, the proposed character recognition can be used for more accurate personal identification. The experimental results showed the viability of the proposed method.
[Lateral chromatic aberrations correction for AOTF imaging spectrometer based on doublet prism].
Zhao, Hui-Jie; Zhou, Peng-Wei; Zhang, Ying; Li, Chong-Chong
2013-10-01
A user-defined surface function method is proposed to model the acousto-optic interaction of the AOTF based on the wave-vector matching principle. Assessment experiments show that this model can achieve an accurate ray trace of the AOTF diffracted beam. In addition, the AOTF imaging spectrometer presents large residual lateral color when the traditional chromatic aberration correction method is adopted. To reduce lateral chromatic aberrations, a method based on a doublet prism is proposed. The optical material and angle of the prism are optimized automatically using global optimization with the help of the user-defined AOTF surface. Simulation results show that the proposed method reduces the lateral chromatic aberration to less than 0.0003 degrees, an improvement of one order of magnitude, with the spectral image shift effectively corrected.
Automated Discrimination Method of Muscular and Subcutaneous Fat Layers Based on Tissue Elasticity
NASA Astrophysics Data System (ADS)
Inoue, Masahiro; Fukuda, Osamu; Tsubai, Masayoshi; Muraki, Satoshi; Okumura, Hiroshi; Arai, Kohei
The balance of human body composition, e.g., bones, muscles, and fat, is a major and basic indicator of personal health. Body composition analysis using ultrasound has developed rapidly. However, interpretation of the echo signal is conducted manually, and accurate and confident interpretation requires experience. This paper proposes an automated discrimination method of tissue boundaries for measuring the thickness of the subcutaneous fat and muscular layers. A portable one-dimensional ultrasound device was used in this study. The proposed method discriminates tissue boundaries based on tissue elasticity. The validity of the proposed method was evaluated in twenty-one subjects (twelve women, nine men; aged 20-70 yr) at three anatomical sites. Experimental results show that the proposed method can achieve considerably high discrimination performance.
Stripe nonuniformity correction for infrared imaging system based on single image optimization
NASA Astrophysics Data System (ADS)
Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai
2018-06-01
Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. First, the gray-level distribution of the stripe nonuniformity is analyzed, and a penalty function is constructed from the difference between the horizontal and vertical gradients. With a weight function, the penalty function is optimized to obtain the corrected image. Compared with other single-frame approaches, experiments show that the proposed method performs better in both subjective and objective analyses and does less damage to edges and details. Meanwhile, the proposed method runs faster. We also discuss the differences between the proposed idea and multi-frame methods. Our method has been successfully applied in a hardware system.
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but it sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method in the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two common ways to apply a CE algorithm to color images: processing only the luminance channel or processing each color channel independently. However, these approaches incur excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving the hue, and it performs better than existing methods in terms of objective evaluation metrics.
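Not the proposed channel-adaptive algorithm, but a minimal sketch of the underlying hue-preservation idea: equalize an intensity channel and then scale R, G, and B by the same per-pixel gain, so the ratios between channels (and hence the hue) are unchanged. The choice of intensity channel and the 8-bit range are assumptions.

    import numpy as np

    def hue_preserving_equalize(rgb):
        """Equalize intensity, then scale R, G, B by the same ratio so channel ratios (hue) are kept."""
        img = rgb.astype(np.float64)
        intensity = img.mean(axis=2)                              # simple intensity channel
        hist, bins = np.histogram(intensity, bins=256, range=(0, 255))
        cdf = hist.cumsum() / hist.sum()                          # histogram-equalization mapping
        eq = np.interp(intensity, bins[:-1], 255.0 * cdf)
        gain = eq / np.maximum(intensity, 1e-6)                   # per-pixel intensity gain
        out = img * gain[..., None]                               # same gain on all channels preserves hue
        return np.clip(out, 0, 255).astype(np.uint8)

Clipping of saturated pixels can still shift the hue slightly; handling such gamut overflows is one of the issues a channel-adaptive scheme like the one above is designed to address.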
NASA Astrophysics Data System (ADS)
Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin
2018-01-01
The ISO 12233 slanted-edge method experiences errors when using the fast Fourier transform (FFT) for camera modulation transfer function (MTF) measurement, because tilt angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for images with noise at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
Contemporary computerized methods for logging planning
Chris B. LeDoux
1988-01-01
Contemporary harvest planning graphic software is highlighted with practical applications. Planning results from a production study of the Clearwater Cable Yarder are summarized. Application of the planning methods to evaluation of proposed silvicultural treatments is included. Results show that 3-dimensional graphic analysis of proposed harvesting or silvicultural...
Automatic Synthesis of Panoramic Radiographs from Dental Cone Beam Computed Tomography Data.
Luo, Ting; Shi, Changrong; Zhao, Xing; Zhao, Yunsong; Xu, Jinqiu
2016-01-01
In this paper, we propose an automatic method of synthesizing panoramic radiographs from dental cone beam computed tomography (CBCT) data for directly observing the whole dentition without the superimposition of other structures. This method consists of three major steps. First, the dental arch curve is generated from the maximum intensity projection (MIP) of 3D CBCT data. Then, based on this curve, the long axial curves of the upper and lower teeth are extracted to create a 3D panoramic curved surface describing the whole dentition. Finally, the panoramic radiograph is synthesized by developing this 3D surface. Both open-bite shaped and closed-bite shaped dental CBCT datasets were applied in this study, and the resulting images were analyzed to evaluate the effectiveness of this method. With the proposed method, a single-slice panoramic radiograph can clearly and completely show the whole dentition without the blur and superimposition of other dental structures. Moreover, thickened panoramic radiographs can also be synthesized with increased slice thickness to show more features, such as the mandibular nerve canal. One feature of the proposed method is that it is automatically performed without human intervention. Another feature of the proposed method is that it requires thinner panoramic radiographs to show the whole dentition than those produced by other existing methods, which contributes to the clarity of the anatomical structures, including the enamel, dentine and pulp. In addition, this method can rapidly process common dental CBCT data. The speed and image quality of this method make it an attractive option for observing the whole dentition in a clinical setting.
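A minimal sketch of the first step only (maximum intensity projection of the CBCT volume), assuming the volume is a 3-D array with the axial direction on axis 0; the arch-curve extraction and surface unrolling are not reproduced, and the slice indices are hypothetical.

    import numpy as np

    def axial_mip(volume, z_range=None):
        """Maximum intensity projection of a CBCT volume along the axial (z) axis."""
        vol = volume if z_range is None else volume[z_range[0]:z_range[1]]
        return vol.max(axis=0)              # keep the brightest voxel along z for each (y, x)

    # Usage with a synthetic volume (a real CBCT volume would be loaded from DICOM instead):
    vol = np.random.rand(200, 512, 512).astype(np.float32)
    mip = axial_mip(vol, z_range=(80, 140))  # restrict to slices around the dentition (assumed indices)
    print(mip.shape)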
Remote logo detection using angle-distance histograms
NASA Astrophysics Data System (ADS)
Youn, Sungwook; Ok, Jiheon; Baek, Sangwook; Woo, Seongyoun; Lee, Chulhee
2016-05-01
Among all the various computer vision applications, automatic logo recognition has drawn great interest from industry as well as various academic institutions. In this paper, we propose an angle-distance map, which we used to develop a robust logo detection algorithm. The proposed angle-distance histogram is invariant against scale and rotation. The proposed method first used shape information and color characteristics to find the candidate regions and then applied the angle-distance histogram. Experiments show that the proposed method detected logos of various sizes and orientations.
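One plausible reading of the descriptor, shown here as a hedged sketch: a 2-D histogram of (angle, distance) of contour points about their centroid, with distances normalized by the maximum radius for scale invariance. The bin counts and normalization choices are assumptions, not the paper's exact construction.

    import numpy as np

    def angle_distance_histogram(points, n_angle=36, n_dist=10):
        """2-D histogram of (angle, normalized distance) of contour points about their centroid."""
        pts = np.asarray(points, dtype=float)
        centered = pts - pts.mean(axis=0)
        ang = np.arctan2(centered[:, 1], centered[:, 0])          # angles in [-pi, pi]
        dist = np.linalg.norm(centered, axis=1)
        dist = dist / (dist.max() + 1e-12)                        # scale invariance
        hist, _, _ = np.histogram2d(ang, dist, bins=[n_angle, n_dist],
                                    range=[[-np.pi, np.pi], [0.0, 1.0]])
        return hist / hist.sum()                                  # normalized descriptor

    # Toy usage: contour of a unit circle sampled at 100 points
    t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    print(angle_distance_histogram(np.c_[np.cos(t), np.sin(t)]).shape)

Rotation invariance could be obtained on top of this, for example by circularly aligning the angle axis to its dominant bin or by comparing histograms over circular shifts.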
Denoising Medical Images using Calculus of Variations
Kohan, Mahdi Nakhaie; Behnam, Hamid
2011-01-01
We propose a method for medical image denoising using calculus of variations and local variance estimation by shaped windows. This method reduces additive noise while preserving small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method gives visual improvement as well as better SNR, RMSE, and PSNR than common medical image denoising methods. Experimental results in denoising a sample magnetic resonance image show that SNR, PSNR, and RMSE were improved by 19, 9, and 21 percent, respectively. PMID:22606674
Shrestha, Manoj; Hok, Pavel; Nöth, Ulrike; Lienerth, Bianca; Deichmann, Ralf
2018-03-30
The purpose of this work was to optimize the acquisition of diffusion-weighted (DW) single-refocused spin-echo (srSE) data without intrinsic eddy-current compensation (ECC) for an improved performance of ECC postprocessing. The rationale is that srSE sequences without ECC may yield shorter echo times (TE) and thus higher signal-to-noise ratios (SNR) than srSE or twice-refocused spin-echo (trSE) schemes with intrinsic ECC. The proposed method employs dummy scans with DW gradients to drive eddy currents into a steady state before data acquisition. Parameters of the ECC postprocessing algorithm were also optimized. Simulations were performed to obtain minimum TE values for the proposed sequence and sequences with intrinsic ECC. Experimentally, the proposed method was compared with standard DW-trSE imaging, both in vitro and in vivo. Simulations showed substantially shorter TE for the proposed method than for methods with intrinsic ECC when using shortened echo readouts. Data of the proposed method showed a marked increase in SNR. A dummy scan duration of at least 1.5 s improved performance of the ECC postprocessing algorithm. Changes proposed for the DW-srSE sequence and for the parameter setting of the postprocessing ECC algorithm considerably reduced eddy-current artifacts and provided a higher SNR.
On Short-Time Estimation of Vocal Tract Length from Formant Frequencies
Lammert, Adam C.; Narayanan, Shrikanth S.
2015-01-01
Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with 0.631 and 1.277 cm root mean square error on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding. Moreover, theoretical insights are explained regarding the reason for differences in formant sensitivity. PMID:26177102
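For a uniform closed-open tube, the resonances satisfy F_n = (2n-1)c/(4L), so each formant gives a length estimate L_n = (2n-1)c/(4F_n). The sketch below averages these per-formant estimates with weights that emphasize higher formants, in the spirit of the finding above; the specific weighting is an illustrative assumption, not the paper's exact estimator.

    import numpy as np

    SPEED_OF_SOUND_CM_S = 35000.0     # approximate speed of sound in warm, humid air

    def estimate_vtl(formants_hz, weights=None):
        """Estimate vocal tract length (cm) from formants using the uniform-tube relation."""
        f = np.asarray(formants_hz, dtype=float)
        n = np.arange(1, len(f) + 1)
        per_formant_length = (2 * n - 1) * SPEED_OF_SOUND_CM_S / (4.0 * f)
        if weights is None:
            weights = n.astype(float)          # emphasize higher formants (illustrative choice)
        return float(np.average(per_formant_length, weights=weights))

    # Example: formants near the neutral-vowel values of a ~17.5 cm tract (500, 1500, 2500, 3500 Hz)
    print(estimate_vtl([500, 1500, 2500, 3500]))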
A New Moving Object Detection Method Based on Frame-difference and Background Subtraction
NASA Astrophysics Data System (ADS)
Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong
2017-09-01
Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. However, in complex real-world scenes, false detections, missed detections, and deficiencies resulting from cavities inside the object body still exist. To solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the detection more complete and accurate, image repair and morphological processing techniques, which serve as spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
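Not the paper's improved frame difference or its image-repair step, but a minimal sketch of combining a simple frame difference with OpenCV's Gaussian-mixture background subtractor and closing small cavities morphologically; the file name, threshold, and kernel size are assumptions.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("surveillance.avi")          # hypothetical input video
    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = np.ones((5, 5), np.uint8)

    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fg_mog = mog.apply(frame)                        # Gaussian-mixture background subtraction
        if prev is not None:
            diff = cv2.absdiff(gray, prev)
            _, fg_diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            fused = cv2.bitwise_or(fg_mog, fg_diff)      # simple fusion of the two masks
            fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)  # fill small cavities
            cv2.imshow("moving objects", fused)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        prev = gray
    cap.release()
    cv2.destroyAllWindows()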
Computation-aware algorithm selection approach for interlaced-to-progressive conversion
NASA Astrophysics Data System (ADS)
Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang
2010-05-01
We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of the proposed CAD method is the correspondence between the high- and low-resolution covariances. We estimate the local covariance coefficients from an interlaced image using Wiener filtering theory and then use these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, is not very fast compared to the others. To alleviate this issue, we propose an adaptive selection approach that switches between the conventional schemes (LA and MELA) and our CAD method, rather than using only the CAD algorithm, to reduce the overall computational load. A reliable switching condition was derived from a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
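A minimal sketch of the simplest branch only, line averaging for plain regions: the missing lines of a top field are interpolated as the mean of the known lines above and below. The MELA and covariance-based (CAD) branches and the switching logic are not reproduced.

    import numpy as np

    def line_average_deinterlace(field):
        """Deinterlace a top field: missing lines are the average of the lines above and below."""
        h, w = field.shape
        frame = np.zeros((2 * h, w), dtype=np.float64)
        frame[0::2] = field                               # known (even) lines
        below = np.vstack([field[1:], field[-1:]])        # line below each known line (border repeated)
        frame[1::2] = 0.5 * (field + below)               # line averaging for the missing (odd) lines
        return frame

    # Toy example: a 4 x 6 field expands to an 8 x 6 progressive frame
    print(line_average_deinterlace(np.arange(24, dtype=float).reshape(4, 6)).shape)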
A hierarchical classification method for finger knuckle print recognition
NASA Astrophysics Data System (ADS)
Kong, Tao; Yang, Gongping; Yang, Lu
2014-12-01
The finger knuckle print has recently been seen as an effective biometric. In this paper, we propose a hierarchical classification method for finger knuckle print recognition, which is rooted in traditional score-level fusion methods. In the proposed method, we first take the Gabor feature as the basic feature for finger knuckle print recognition, and a new decision rule is defined based on a predefined threshold. Finally, the secondary feature, the speeded-up robust feature (SURF), is applied to those users who cannot be recognized by the basic feature. Extensive experiments are performed to evaluate the proposed method, and the experimental results show that it achieves promising performance.
Color dithering methods for LEGO-like 3D printing
NASA Astrophysics Data System (ADS)
Sun, Pei-Li; Sie, Yuping
2015-01-01
Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building and is a modification of classic error diffusion. Many color primaries can be chosen; however, RGBYKW is recommended because its image quality is good and the number of color primaries is limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show that the proposed multi-layer dithering method can substantially improve the image quality of LEGO-like 3D printing.
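A minimal sketch of error-diffusion dithering onto a small brick palette with the classic Floyd-Steinberg weights; the RGBYKW palette values are nominal stand-ins for real brick colors, and the multi-layer LUT-based method is not reproduced.

    import numpy as np

    PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255],        # R, G, B
                        [255, 255, 0], [0, 0, 0], [255, 255, 255]],   # Y, K, W (nominal brick colors)
                       dtype=np.float64)

    def dither_to_palette(img):
        """Floyd-Steinberg error diffusion onto a fixed brick palette (illustrative sketch)."""
        work = img.astype(np.float64).copy()
        h, w, _ = work.shape
        out = np.zeros((h, w), dtype=np.int32)
        for y in range(h):
            for x in range(w):
                old = work[y, x]
                idx = int(np.argmin(((PALETTE - old) ** 2).sum(axis=1)))  # nearest palette color
                out[y, x] = idx
                err = old - PALETTE[idx]
                if x + 1 < w:
                    work[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        work[y + 1, x - 1] += err * 3 / 16
                    work[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        work[y + 1, x + 1] += err * 1 / 16
        return out                  # per-pixel indices of the brick colors to place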
A Novel Bit-level Image Encryption Method Based on Chaotic Map and Dynamic Grouping
NASA Astrophysics Data System (ADS)
Zhang, Guo-Ji; Shen, Yan
2012-10-01
In this paper, a novel bit-level image encryption method based on dynamic grouping is proposed. In the proposed method, the plain-image is divided into several groups randomly, and then a bit-level permutation-diffusion process is carried out. The keystream generated by the logistic map is related to the plain-image, which confuses the relationship between the plain-image and the cipher-image. The computer simulation results of statistical analysis, information entropy analysis, and sensitivity analysis show that the proposed encryption method is secure and reliable enough to be used for communication applications.
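A minimal sketch of two ingredients named above, assuming an illustrative seeding rule: a keystream generated with the logistic map x_{n+1} = r·x_n·(1 - x_n) that depends on the plain-image, used here for a byte-wise XOR diffusion. The dynamic-grouping bit permutation itself is not reproduced.

    import numpy as np

    def logistic_keystream(n_bytes, x0, r=3.99):
        """Byte keystream from the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
        x = x0
        out = np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) & 0xFF
        return out

    def encrypt(image_bytes, key_x0=0.3141592653):
        """XOR diffusion with a keystream seeded by the key and the plain-image mean (assumed rule)."""
        data = np.frombuffer(image_bytes, dtype=np.uint8)
        x0 = (key_x0 + float(data.mean()) / 1024.0) % 1.0   # plain-image dependence (illustrative)
        ks = logistic_keystream(len(data), x0)
        return np.bitwise_xor(data, ks).tobytes()

    # Decryption recomputes the same keystream, so the plain-image-dependent seed (here the mean)
    # would have to be carried as part of the key material in a real scheme.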
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a nearly exact estimate using curve fitting. The proposed noise estimation method provides the noise estimate within an average error of +/-4%. For quality assessment, this noise estimate is mapped to the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
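A minimal sketch of the classical robust initial estimate, sigma ≈ median(|HH|)/0.6745 computed from the diagonal detail subband of a single-level wavelet transform, followed by a placeholder logistic mapping to a DMOS-like score; the curve-fitting refinement and the fitted mapping parameters from the paper are not reproduced, and the constants in sigma_to_quality_score are hypothetical.

    import numpy as np
    import pywt

    def estimate_noise_sigma(gray_image):
        """Robust initial noise estimate from the HH wavelet subband: sigma = median(|HH|) / 0.6745."""
        _, (_, _, hh) = pywt.dwt2(gray_image.astype(np.float64), "db1")
        return float(np.median(np.abs(hh)) / 0.6745)

    def sigma_to_quality_score(sigma, a=100.0, b=0.15, c=20.0):
        """Map the noise estimate to a DMOS-like score with a logistic curve (parameters hypothetical)."""
        return a / (1.0 + np.exp(-b * (sigma - c)))

    # Usage: score = sigma_to_quality_score(estimate_noise_sigma(img))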
An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data
Jing, Linhai; Tang, Yunwei; Ding, Haifeng
2018-01-01
Numerous pansharpening methods were proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed, through two improvements made on a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparing with some outstanding fusion methods, such as adaptive Gram-Schmidt and generalized Laplacian pyramid. Experimental results show that the improved version can reduce spectral distortions of fused dark pixels and sharpen boundaries between different image objects, as well as obtain similar quality indexes with the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands. It is certified that the proposed method is more robust to misalignments between MS and PAN bands than the other methods. PMID:29439502
Small-Scale System for Evaluation of Stretch-Flangeability with Excellent Reliability
NASA Astrophysics Data System (ADS)
Yoon, Jae Ik; Jung, Jaimyun; Lee, Hak Hyeon; Kim, Hyoung Seop
2018-02-01
We propose a system for evaluating the stretch-flangeability of small-scale specimens based on the hole-expansion ratio (HER). The system has no size effect and shows excellent reproducibility, reliability, and economic efficiency. To verify the reliability and reproducibility of the proposed hole-expansion testing (HET) method, the deformation behavior of the conventional standard stretch-flangeability evaluation method was compared with the proposed method using finite-element method simulations. The distribution of shearing defects in the hole-edge region of the specimen, which has a significant influence on the HER, was investigated using scanning electron microscopy. The stretch-flangeability of several kinds of advanced high-strength steel determined using the conventional standard method was compared with that using the proposed small-scale HET method. It was verified that the deformation behavior, morphology and distribution of shearing defects, and stretch-flangeability results for the specimens were the same for the conventional standard method and the proposed small-scale stretch-flangeability evaluation system.
An affordable and easy-to-use diagnostic method for keratoconus detection using a smartphone
NASA Astrophysics Data System (ADS)
Askarian, Behnam; Tabei, Fatemehsadat; Askarian, Amin; Chong, Jo Woon
2018-02-01
Recently, smartphones have been used for disease diagnosis and healthcare. In this paper, we propose a novel, affordable diagnostic method for detecting keratoconus using a smartphone. Keratoconus is usually detected in clinics with ophthalmic devices, which are large, expensive, and not portable, and which need to be operated by trained technicians. In contrast, our proposed smartphone-based eye disease detection method is small, affordable, and portable, and it can be operated by patients in a convenient way. The results show that the proposed keratoconus detection method detects severe, advanced, and moderate keratoconus with accuracies of 93%, 86%, and 67%, respectively. Given its convenience and these accuracies, the proposed keratoconus detection method is expected to be applied to detecting keratoconus at an earlier stage in an affordable way.
NASA Astrophysics Data System (ADS)
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na
2016-10-01
Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. The multi-fractal dimension is adopted as the evaluation criterion of a feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. The proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability. An adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier Transform (FT), the methods select the significant decomposed signals that will be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension (SEMD), which extends the scope of empirical mode decomposition by smoothing. To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
NASA Astrophysics Data System (ADS)
Song, Wanjun; Zhang, Hou
2017-11-01
Through introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm to the shift operator (SO) finite difference time domain (FDTD) method, the memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed and the corresponding formulae of the proposed method for programming are deduced. In order to further improve the computational efficiency, an iterative method rather than the Gauss elimination method is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z transform (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The test results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.
Comparative study of methods for recognition of an unknown person's action from a video sequence
NASA Astrophysics Data System (ADS)
Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun
2009-02-01
This paper proposes a Tensor Decomposition Based method that can recognize an unknown person's action from a video sequence, where the unknown person is not included in the database (tensor) used for the recognition. The tensor consists of persons, actions and time-series image features. For the observed unknown person's action, one of the actions stored in the tensor is assumed. Using the motion signature obtained from the assumption, the unknown person's actions are synthesized. The actions of one of the persons in the tensor are replaced by the synthesized actions. Then, the core tensor for the replaced tensor is computed. This process is repeated for the actions and persons. For each iteration, the difference between the replaced and original core tensors is computed. The assumption that gives the minimal difference is the action recognition result. For the time-series image features to be stored in the tensor and to be extracted from the observed video sequence, a contour-shape-based feature of the human body silhouette is used. To show the validity of our proposed method, it is experimentally compared with the Nearest Neighbor rule and a Principal Component Analysis based method. Experiments using seven kinds of actions performed by 33 persons show that our proposed method achieves better recognition accuracies for the seven actions than the other methods.
Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm
2014-11-01
experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The ... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the ... a closed-form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The
NASA Astrophysics Data System (ADS)
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (STO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
Competitive region orientation code for palmprint verification and identification
NASA Astrophysics Data System (ADS)
Tang, Wenliang
2015-11-01
Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation feature of the palmprint. However, in practice, the orientations of the filters are usually not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method. Furthermore, an effective weighted balance scheme is proposed to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method can precisely and robustly describe the orientation feature of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases are performed and the results show that the proposed method achieves a promising performance in comparison to conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
Incoherent beam combining based on the momentum SPGD algorithm
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical method is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration but also maintains the stability of the combining process. Therefore, the feasibility of the proposed algorithm in the beam combining system is verified.
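To make the modification concrete, the following is a minimal Python sketch of SPGD with a momentum term applied to a generic performance metric; the toy objective and the gain, perturbation, and momentum settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical illustration: SPGD with a momentum term, maximizing a generic
# performance metric J(u) of controllable parameters u (e.g., beam tilts in ICBC).
def spgd_momentum(J, u0, gain=0.5, perturb=0.05, beta=0.9, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    v = np.zeros_like(u)                     # momentum (velocity) accumulator
    history = []
    for _ in range(iters):
        delta = perturb * rng.choice([-1.0, 1.0], size=u.shape)   # random Bernoulli perturbation
        dJ = J(u + delta) - J(u - delta)                           # two-sided metric difference
        grad_est = dJ * delta / (2 * perturb**2)                   # stochastic gradient estimate
        v = beta * v + gain * grad_est                             # momentum update speeds convergence
        u = u + v                                                  # ascend the estimated gradient
        history.append(J(u))
    return u, history

# Toy metric: a smooth peak at u* = (1, -2, 0.5); a real system would measure combined power.
target = np.array([1.0, -2.0, 0.5])
J = lambda u: np.exp(-np.sum((u - target) ** 2))
u_opt, hist = spgd_momentum(J, u0=np.zeros(3))
print(np.round(u_opt, 2), round(hist[-1], 3))
```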
Network immunization under limited budget using graph spectra
NASA Astrophysics Data System (ADS)
Zahedi, R.; Khansari, M.
2016-03-01
In this paper, we propose a new algorithm that minimizes the worst expected growth of an epidemic by reducing the size of the largest connected component (LCC) of the underlying contact network. The proposed algorithm is applicable to any level of available resources and, unlike the greedy approaches of most immunization strategies, selects nodes simultaneously. In each iteration, the proposed method partitions the LCC into two groups. These are the best candidates for communities in that component, and the available resources are sufficient to separate them. Using Laplacian spectral partitioning, the proposed method performs community detection inference with a time complexity that rivals that of the best previous methods. Experiments show that our method outperforms targeted immunization approaches in both real and synthetic networks.
Fuzzy rule-based image segmentation in dynamic MR images of the liver
NASA Astrophysics Data System (ADS)
Kobashi, Syoji; Hata, Yutaka; Tokimoto, Yasuhiro; Ishikawa, Makato
2000-06-01
This paper presents a fuzzy rule-based region growing method for segmenting two-dimensional (2-D) and three-dimensional (3-D) magnetic resonance (MR) images. The method is an extension of the conventional region growing method. The proposed method evaluates the growing criteria by using fuzzy inference techniques. The use of fuzzy if-then rules is appropriate for describing the knowledge of the lesions on the MR images. To evaluate the performance of the proposed method, it was applied to artificially generated images. In comparison with the conventional method, the proposed method shows high robustness for noisy images. The method is then applied to segment dynamic MR images of the liver. Dynamic MR imaging has been used for the diagnosis of hepatocellular carcinoma (HCC), portal hypertension, and so on. Segmenting the liver, portal vein (PV), and inferior vena cava (IVC) can give a useful description for the diagnosis, and is a basis for a pre-surgery planning system and a virtual endoscope. To apply the proposed method, fuzzy if-then rules are derived from the time-density curves of ROIs. In the experimental results, the 2-D reconstructed and 3-D rendered images of the segmented liver, PV, and IVC are shown. The evaluation by a physician shows that the generated images are comparable to the hepatic anatomy, and they would be useful for understanding, diagnosis, and pre-surgery planning.
A Novel Quasi-3D Method for Cascade Flow Considering Axial Velocity Density Ratio
NASA Astrophysics Data System (ADS)
Chen, Zhiqiang; Zhou, Ming; Xu, Quanyong; Huang, Xudong
2018-03-01
A novel quasi-3D Computational Fluid Dynamics (CFD) method for mid-span flow simulation of compressor cascades is proposed. The two-dimensional (2D) Reynolds-Averaged Navier-Stokes (RANS) method is shown to face challenges in predicting mid-span flow with a unity Axial Velocity Density Ratio (AVDR). The three-dimensional (3D) RANS solution also shows distinct discrepancies if the AVDR is not predicted correctly. In this paper, the discrepancies between 2D and 3D CFD results are analyzed and a novel quasi-3D CFD method is proposed. The new quasi-3D model is derived by reducing the 3D RANS Finite Volume Method (FVM) discretization over a one-spanwise-layer structured mesh cell. The sidewall effect is considered in two parts. The first part is the explicit interface fluxes of mass, momentum, and energy as well as turbulence. The second part is a cell boundary scaling factor representing sidewall boundary layer contraction. The performance of the novel quasi-3D method is validated on mid-span pressure distribution, pressure loss, and shock prediction for two typical cascades. The results show good agreement with the experimental data for cascade SJ301-20 and cascade AC6-10 at all test conditions. The proposed quasi-3D method shows superior accuracy over the traditional 2D RANS method and the 3D RANS method in the performance prediction of compressor cascades.
Human Body 3D Posture Estimation Using Significant Points and Two Cameras
Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin
2014-01-01
This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422
Interval type-2 fuzzy PID controller for uncertain nonlinear inverted pendulum system.
El-Bardini, Mohammad; El-Nagar, Ahmad M
2014-05-01
In this paper, an interval type-2 fuzzy proportional-integral-derivative controller (IT2F-PID) is proposed for controlling an inverted pendulum on a cart system with an uncertain model. The proposed controller is designed using a new type-reduction method that we have proposed, called the simplified type-reduction method. The proposed IT2F-PID controller is able to handle the effect of structural uncertainties due to the structure of the interval type-2 fuzzy logic system (IT2-FLS). The results of the proposed IT2F-PID controller using the new type-reduction method are compared with those of an IT2F-PID controller using the uncertainty bound method and with a type-1 fuzzy PID controller (T1F-PID). The simulation and practical results show that the performance of the proposed controller is significantly improved compared with the T1F-PID controller. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, A; Bouchard, H
Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue is retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to get a submillimetric precision on proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with a RMS error of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
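As a rough illustration of the decomposition step, the sketch below (Python, with a small hypothetical two-energy reference dataset) extracts two principal components as virtual materials and solves a 2x2 system for their fractions; it omits the stoichiometric calibration and all clinical details.

```python
import numpy as np

# Minimal sketch, not the authors' clinical pipeline: extract two principal
# components from a toy reference set of attenuation values at two energies,
# then express a new DECT measurement as fractions of those PCs.
rng = np.random.default_rng(1)

# Hypothetical reference dataset: rows = tissues, cols = attenuation at (low kV, high kV).
ref = np.array([[0.22, 0.18],
                [0.25, 0.20],
                [0.19, 0.16],
                [0.30, 0.23],
                [0.21, 0.17]])

mean = ref.mean(axis=0)
_, _, Vt = np.linalg.svd(ref - mean, full_matrices=False)
pcs = Vt[:2]                      # two principal components = "virtual materials"

def decompose(measurement):
    """Fractions of each PC reproducing a two-energy measurement."""
    # measurement ~ mean + f1*pc1 + f2*pc2  ->  solve the 2x2 system for (f1, f2)
    A = pcs.T                      # shape (2 energies, 2 PCs)
    return np.linalg.solve(A, np.asarray(measurement) - mean)

meas = ref[3] + rng.normal(0, 1e-3, size=2)   # noisy measurement of a known tissue
print(np.round(decompose(meas), 4))
```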
Robust gene selection methods using weighting schemes for microarray data analysis.
Kang, Suyeon; Song, Jongwoo
2017-09-02
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings
Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo
2018-01-01
In this paper, we propose a novel object-based dense matching method, designed especially for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that the edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which can maintain the building structural attributes with parallel or vertical edges; this is very useful for the dense matching method. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building object structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and the matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method can greatly increase the matching accuracy of building objects in urban areas, especially at building edges. For the outline extraction experiments, our fusion method demonstrates its superiority and robustness on panchromatic images from different satellites and with different resolutions. For the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction methods in fluorescence molecular tomography (FMT) are capable of achieving higher image clarity than image-based reconstruction methods. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required in the proposed method is much smaller than in the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number
NASA Astrophysics Data System (ADS)
Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo
Recently, a data-selective method has been proposed to achieve low misalignment in affine projection algorithm (APA) by keeping the condition number of an input data matrix small. We present an improved method, and a complexity reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for an input data matrix than both the conventional APA and the APA with the previous data-selective method.
Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system
NASA Astrophysics Data System (ADS)
Bai, Jianbo; Li, Yang; Chen, Jianhao
2018-02-01
The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to achieve better control of the water chiller unit. To verify the performance of the adaptive control method, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance to that of the conventional PID controller.
A flower image retrieval method based on ROI feature.
Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan
2004-07-01
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
Designing Hyperchaotic Cat Maps With Any Desired Number of Positive Lyapunov Exponents.
Hua, Zhongyun; Yi, Shuang; Zhou, Yicong; Li, Chengqing; Wu, Yue
2018-02-01
Generating chaotic maps with expected dynamics of users is a challenging topic. Utilizing the inherent relation between the Lyapunov exponents (LEs) of the Cat map and its associated Cat matrix, this paper proposes a simple but efficient method to construct an n-dimensional (n-D) hyperchaotic Cat map (HCM) with any desired number of positive LEs. The method first generates two basic n-D Cat matrices iteratively and then constructs the final n-D Cat matrix by performing similarity transformation on one basic n-D Cat matrix by the other. Given any number of positive LEs, it can generate an n-D HCM with desired hyperchaotic complexity. Two illustrative examples of n-D HCMs were constructed to show the effectiveness of the proposed method, and to verify the inherent relation between the LEs and Cat matrix. Theoretical analysis proves that the parameter space of the generated HCM is very large. Performance evaluations show that, compared with existing methods, the proposed method can construct n-D HCMs with lower computation complexity and their outputs demonstrate strong randomness and complex ergodicity.
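The sketch below illustrates, under simplifying assumptions, the LE-Cat-matrix relation the method relies on: for a linear torus map with a unimodular integer matrix A, the LEs are the log-magnitudes of the eigenvalues of A, and a similarity transformation leaves them unchanged. The matrices are hand-picked toy examples, not the paper's iterative construction.

```python
import numpy as np

# Hedged illustration: for the linear torus map x_{k+1} = A x_k mod 1 with a
# unimodular integer matrix A, the Lyapunov exponents are ln|lambda_i| over the
# eigenvalues of A; a similarity transformation by another unimodular matrix
# preserves them. Matrices below are simple examples, not the authors' construction.
A1 = np.array([[1, 1, 0],
               [1, 2, 0],
               [0, 0, 1]])          # 2-D Cat block embedded in 3-D, det = 1
A2 = np.array([[1, 0, 0],
               [0, 1, 1],
               [0, 1, 2]])          # Cat block acting on the last two coordinates
B = A1 @ A2                         # basic 3-D Cat matrix mixing all coordinates

T = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 1, 1]])           # unimodular transform (det = 1)
C = T @ B @ np.linalg.inv(T)        # similarity transformation of the basic matrix

def lyapunov_exponents(M):
    return np.sort(np.log(np.abs(np.linalg.eigvals(M))))[::-1]

print("det(B) =", round(np.linalg.det(B)), "det(C) =", round(np.linalg.det(C)))
print("LEs of B:", np.round(lyapunov_exponents(B), 4))
print("LEs of C:", np.round(lyapunov_exponents(C), 4))   # identical to B's LEs
```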
NASA Astrophysics Data System (ADS)
Wang, Pan; Zhang, Yi; Yan, Dong
2018-05-01
The Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been successfully used for the traveling salesman problem (TSP). However, it easily converges prematurely to a non-global optimal solution and its calculation time is too long. To overcome those shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically. New crossover and inversion operations from the genetic strategy are used in this method. We also conduct experiments using the well-known data in TSPLIB. The experimental results show that the performance of the proposed method is better than that of the basic Ant Colony Algorithm and some improved ACAs in both the result quality and the convergence time. The numerical results also show that the proposed optimization method can achieve results close to the currently best known theoretical solutions.
Gönen, Mehmet
2014-01-01
Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks. PMID:24532862
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
The Development of a Noncontact Letter Input Interface “Fingual” Using Magnetic Dataset
NASA Astrophysics Data System (ADS)
Fukushima, Taishi; Miyazaki, Fumio; Nishikawa, Atsushi
We have newly developed a noncontact letter input interface called “Fingual”. Fingual uses a glove mounted with inexpensive and small magnetic sensors. Using the glove, users can input letters by forming the finger alphabet, a kind of sign language. The proposed method uses a dataset which consists of magnetic field measurements and the corresponding letter information. In this paper, we show two recognition methods using the dataset. The first method uses the Euclidean norm, and the second one additionally uses a Gaussian function as a weighting function. We then conducted verification experiments on the recognition rate of each method in two situations: in one, subjects used their own dataset; in the other, they used another person's dataset. As a result, the proposed method could recognize letters with a high rate in both situations, even though it is better to use one's own dataset than another person's. Though Fingual needs to collect a magnetic dataset for each letter in advance, its feature is the ability to recognize letters without complicated calculations such as solving inverse problems. This paper shows the results of the recognition experiments, and shows the utility of the proposed system “Fingual”.
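A toy sketch of the two recognition rules described above is given below, using synthetic feature vectors in place of real glove measurements; the dataset sizes, feature dimension, and Gaussian width are assumptions.

```python
import numpy as np

# Toy sketch of the two recognition rules on synthetic "magnetic field" feature
# vectors; the real system reads glove-mounted magnetic sensors.
rng = np.random.default_rng(0)
letters = list("ABC")
# Hypothetical dataset: 20 stored feature vectors (9 sensor values each) per letter.
dataset = {c: rng.normal(loc=i, scale=0.3, size=(20, 9)) for i, c in enumerate(letters)}

def recognize_euclidean(x):
    """Method 1: label of the single nearest stored sample (Euclidean norm)."""
    best = min(((np.linalg.norm(x - s), c) for c, S in dataset.items() for s in S))
    return best[1]

def recognize_gaussian(x, sigma=0.5):
    """Method 2: Gaussian-weighted votes accumulated over all stored samples."""
    scores = {c: np.exp(-np.sum((S - x) ** 2, axis=1) / (2 * sigma**2)).sum()
              for c, S in dataset.items()}
    return max(scores, key=scores.get)

probe = rng.normal(loc=1, scale=0.3, size=9)      # a noisy sample of letter "B"
print(recognize_euclidean(probe), recognize_gaussian(probe))
```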
Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA
Ma, Xiaoqi
2015-01-01
A novel method is proposed to establish a pancreatic cancer classifier. Firstly, the concepts of quantum coding and the fruit fly optimization algorithm (FOA) are introduced. Then the FOA is improved by quantum coding and quantum operations, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of a support vector machine (SVM) and the classifier is established with the optimized SVM. In order to verify the effectiveness of the proposed method, SVM and other classification methods were chosen as the comparison methods. The experimental results show that the proposed method can improve the classifier performance and costs less time. PMID:26543867
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.
2015-05-01
This work aims to study a computationally simple method for saliency map calculation. Research in this field has received increasing interest because of the use of complex techniques in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing the computational complexity. The proposed method of saliency map detection is based on analysis in both the image and frequency domains. Several examples of test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
Robust and fast-converging level set method for side-scan sonar image segmentation
NASA Astrophysics Data System (ADS)
Liu, Yan; Li, Qingwu; Huo, Guanying
2017-11-01
A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using the adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain the initial presegmentation image from the denoised image, and then the distance maps of the initial contours are reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, the satisfactory segmentation is achieved using a robust variational level set model, where the evolution control parameters are generated by the presegmentation. The proposed method is successfully applied to both synthetic image with speckle noise and real SSS images. Experimental results show that the proposed method needs much less iteration and therefore is much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method can usually obtain more accurate segmentation results compared with other methods.
Making chaotic behavior in a damped linear harmonic oscillator
NASA Astrophysics Data System (ADS)
Konishi, Keiji
2001-06-01
The present Letter proposes a simple control method which creates chaotic behavior in a damped linear harmonic oscillator. This method is a modification of the scheme proposed by Wang and Chen (IEEE CAS-I 47 (2000) 410), which presents an anti-control method for creating chaotic behavior in discrete-time linear systems. We provide a systematic procedure to design the parameters and sampling period of a feedback controller. Furthermore, we show that our method works well in numerical simulations.
Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á
2018-03-01
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
Model-based multi-fringe interferometry using Zernike polynomials
NASA Astrophysics Data System (ADS)
Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan
2018-06-01
In this paper, a general phase retrieval method is proposed, which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost) and it is easy to implement without the process of phase unwrapping.
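A minimal sketch of this kind of single-interferogram fit is shown below, assuming a simple intensity model I = a + b*cos(W) with three low-order Zernike terms and using scipy's nonlinear least squares; the actual model and polynomial ordering in the paper may differ.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch under simplifying assumptions: the interferogram is modeled as
# I = a + b*cos(W), where the wavefront W is a combination of three low-order
# Zernike terms (x-tilt, y-tilt, defocus). Coefficients are recovered from one
# synthetic few-fringe interferogram by nonlinear least squares.
n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
mask = x**2 + y**2 <= 1.0
Z = np.stack([x[mask], y[mask], 2*(x**2 + y**2)[mask] - 1])   # tilt x, tilt y, defocus

def model(p):
    a, b, c1, c2, c3 = p
    return a + b * np.cos(2*np.pi * (c1*Z[0] + c2*Z[1] + c3*Z[2]))

true_p = np.array([0.5, 0.4, 1.2, -0.6, 0.3])        # a few tilt/power fringes
I_meas = model(true_p) + 0.01*np.random.default_rng(0).normal(size=Z.shape[1])

# Initial guess reasonably close to the truth to avoid the cosine sign ambiguity.
fit = least_squares(lambda p: model(p) - I_meas, x0=[0.5, 0.3, 1.0, -0.4, 0.2])
print("recovered coefficients:", np.round(fit.x, 3))
```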
Buu, Anne; Williams, L Keoki; Yang, James J
2018-03-01
We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of the Fisher's combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the type I error rate and also maintains its power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than the one of the permutation method. The simulation results also indicate that the power of the test increases when the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis on the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method combining multiple phenotypes can increase the power of identifying markers that may not be, otherwise, chosen using marginal tests.
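For orientation, the sketch below computes Fisher's combination statistic for one marker with a binary and a continuous phenotype; the marginal tests and the permutation null used here are crude stand-ins, and the paper's efficient numerical estimate of the null distribution is not reproduced.

```python
import numpy as np
from scipy import stats

# Sketch (assumptions flagged): for one SNP, combine the p-values of simple
# marginal tests for the two phenotypes via Fisher's statistic T = -2*sum(log p).
# A plain permutation null is shown as a reference, not the paper's fast method.
rng = np.random.default_rng(0)
n = 500
g = rng.binomial(2, 0.3, size=n)                                  # genotype (0/1/2)
y_cont = 0.2*g + rng.normal(size=n)                               # continuous phenotype
y_bin = rng.binomial(1, 1/(1 + np.exp(-(0.3*g - 0.5))))           # binary phenotype

def fisher_stat(g, y_cont, y_bin):
    _, p1 = stats.pearsonr(g, y_cont)     # marginal test, continuous trait
    _, p2 = stats.pearsonr(g, y_bin)      # crude marginal test, binary trait
    return -2*(np.log(p1) + np.log(p2))

T_obs = fisher_stat(g, y_cont, y_bin)
T_null = np.array([fisher_stat(rng.permutation(g), y_cont, y_bin) for _ in range(2000)])
print("T =", round(T_obs, 2), "permutation p =", float(np.mean(T_null >= T_obs)))
```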
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits: higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
NASA Astrophysics Data System (ADS)
Wang, Xuchu; Niu, Yanmin
2011-02-01
Automatic measurement of vessels from fundus images is a crucial step for assessing vessel anomalies in the ophthalmological community, where changes in retinal vessel diameters are believed to be indicative of the risk level of diabetic retinopathy. In this paper, a new retinal vessel diameter measurement method combining vessel orientation estimation and filter response is proposed. Its interesting characteristics include: (1) different from methods that only fit the vessel profiles, the proposed method extracts more stable and accurate vessel diameters by casting this problem as a maximal response problem of a variation of the Gabor filter; (2) the proposed method can directly and efficiently estimate the vessel's orientation, which is usually captured by time-consuming multi-orientation fitting techniques in many existing methods. Experimental results show that the proposed method both retains computational simplicity and achieves stable and accurate estimation results.
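The orientation-by-maximal-response idea can be illustrated as below on a synthetic vessel image with a small bank of Gabor-like kernels; the kernel parameters are assumptions and the diameter estimation itself is not reproduced.

```python
import numpy as np

# Hedged sketch: a bank of Gabor-like kernels at several orientations is applied
# to a synthetic dark vessel, and the vessel orientation is taken as the
# orientation of maximal response magnitude. All parameters are illustrative.
def gabor_kernel(theta, sigma=3.0, lam=8.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x*np.cos(theta) + y*np.sin(theta)        # along the assumed vessel axis
    yr = -x*np.sin(theta) + y*np.cos(theta)       # across the assumed vessel axis
    g = np.exp(-(xr**2 + yr**2) / (2*sigma**2)) * np.cos(2*np.pi*yr/lam)
    return g - g.mean()                           # zero-mean kernel

# Synthetic image: a dark straight vessel at 30 degrees through the image centre.
n = 101
yy, xx = np.mgrid[0:n, 0:n] - n // 2
true_theta = np.deg2rad(30)
dist = np.abs(-np.sin(true_theta)*xx + np.cos(true_theta)*yy)   # distance to vessel axis
img = 1.0 - 0.8*np.exp(-dist**2 / (2*2.0**2))

centre = img[n//2 - 10:n//2 + 11, n//2 - 10:n//2 + 11]          # 21x21 patch at the centre
angles = np.arange(0, 180, 10)
responses = [np.abs(np.sum(centre * gabor_kernel(np.deg2rad(a)))) for a in angles]
print("estimated orientation:", angles[int(np.argmax(responses))], "deg")
```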
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces heuristic parametric constants in the GGF-BNLM method with dynamically derived values based on image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, a significant improvement in performance is achieved. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
A Method of Time-Series Change Detection Using Full PolSAR Images from Different Sensors
NASA Astrophysics Data System (ADS)
Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.
2018-04-01
Most of the existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of a time-series PolSAR dataset was calculated by the omnibus test statistic. Secondly, difference images between any two images at different times were acquired by the Rj test statistic. In the last step, a generalized Gaussian mixture model (GGMM) was used to obtain the time-series change detection maps for the proposed method. To verify the effectiveness of the proposed method, we carried out change detection experiments using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can detect time-series changes from different sensors.
3-D surface profilometry based on modulation measurement by applying wavelet transform method
NASA Astrophysics Data System (ADS)
Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao
2017-01-01
A new analysis of 3-D surface profilometry based on the modulation measurement technique with the Wavelet Transform method is proposed. As a tool excelling in multi-resolution analysis and localization in the time and frequency domains, the Wavelet Transform method, with its good localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier Transform method. Especially for the analysis of complex objects, more details of the measured object can be retained. In this paper, the theoretical derivation of the Wavelet Transform method that obtains the modulation values from a captured fringe pattern is given. Both computer simulation and an elementary experiment are used to show the validity of the proposed method by making a comparison with the results of the Fourier Transform method. The results show that the Wavelet Transform method has a better performance than the Fourier Transform method in modulation value retrieval.
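A rough one-row illustration of modulation extraction with a continuous wavelet transform is sketched below using PyWavelets' complex Morlet wavelet; the wavelet choice, scale range, and normalization are assumptions rather than the paper's exact formulation.

```python
import numpy as np
import pywt

# Hedged sketch: the modulation of one row of a synthetic fringe pattern
# I(x) = a + b(x)*cos(2*pi*f0*x) is estimated, up to a normalization constant,
# from the ridge magnitude of a complex Morlet continuous wavelet transform.
x = np.arange(1024)
f0 = 0.04                                        # carrier frequency (cycles/pixel)
b = 0.5 + 0.4*np.exp(-((x - 512)/150.0)**2)      # slowly varying "true" modulation
signal = 1.0 + b*np.cos(2*np.pi*f0*x) + 0.02*np.random.default_rng(0).normal(size=x.size)

scales = np.arange(10, 60)
coef, _ = pywt.cwt(signal, scales, 'cmor1.5-1.0')  # complex Morlet CWT
ridge_mag = np.abs(coef).max(axis=0)               # ridge magnitude at each pixel

# Compare shapes after normalizing both curves to unit maximum (edges excluded).
est = ridge_mag / ridge_mag.max()
print("max shape error:", np.max(np.abs(est[100:-100] - (b/b.max())[100:-100])).round(3))
```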
Blessy, S A Praylin Selva; Sulochana, C Helen
2015-01-01
Segmentation of brain tumor from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexities of the human brain and the presence of intensity inhomogeneities. The aim is to propose a method that effectively segments brain tumor from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for segmentation of brain tumor from MR images. Segmentation is done by preprocessing the MR image to standardize intensity inhomogeneities, followed by feature extraction, feature fusion, and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumor from MR images.
An improved method for pancreas segmentation using SLIC and interactive region merging
NASA Astrophysics Data System (ADS)
Zhang, Liyuan; Yang, Huamin; Shi, Weili; Miao, Yu; Li, Qingliang; He, Fei; He, Wei; Li, Yanfang; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang
2017-03-01
Considering the weak edges in pancreas segmentation, this paper proposes a new solution which integrates more features of CT images by combining SLIC superpixels and interactive region merging. In the proposed method, Mahalanobis distance is first utilized in SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable in interactive region merging. Furthermore, object edge blocks are accurately addressed by re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissues to verify the feasibility and effectiveness. The experimental results show that the proposed method can make segmentation accuracy increase to 92% on average. This study will boost the application process of pancreas segmentation for computer-aided diagnosis system.
A new method for calculating ecological flow: Distribution flow method
NASA Astrophysics Data System (ADS)
Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei
2018-04-01
A distribution flow method (DFM), together with its ecological flow index and evaluation grade standard, is proposed to study the ecological flow of rivers based on broadening kernel density estimation. The proposed DFM and its ecological flow index and evaluation grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological methods for calculating ecological flow, a flow evaluation method, and the calculated fish ecological flow. Results show that the DFM considers the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and uneven flow distributions during the year. This method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over the traditional hydrological methods, and shows high space-time applicability and application value.
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where the columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
Broad phonetic class definition driven by phone confusions
NASA Astrophysics Data System (ADS)
Lopes, Carla; Perdigão, Fernando
2012-12-01
Intermediate representations between the speech signal and phones may be used to improve discrimination among phones that are often confused. These representations are usually found according to broad phonetic classes, which are defined by a phonetician. This article proposes an alternative data-driven method to generate these classes. Phone confusion information from the analysis of the output of a phone recognition system is used to find clusters at high risk of mutual confusion. A metric is defined to compute the distance between phones. The results, using TIMIT data, show that the proposed confusion-driven phone clustering method is an attractive alternative to the approaches based on human knowledge. A hierarchical classification structure to improve phone recognition is also proposed using a discriminative weight training method. Experiments show improvements in phone recognition on the TIMIT database compared to a baseline system.
Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun
2015-10-19
Photoacoustic tomography is a promising and rapidly developing methodology of biomedical imaging. It faces an increasingly urgent problem: reconstructing the image from weak and noisy photoacoustic signals, which offers a high benefit in extending the imaging depth and decreasing the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with a low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain. Images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulation and experiment are conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with some well-preserved pattern details. The proposed method demonstrates the imaging potential of photoacoustic tomography in expanding applications.
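The time-domain decomposition idea can be sketched in one dimension as below: a noisy signal is expressed as a weighted sum of shifted copies of an assumed pulse template and the weights are recovered by non-negative least squares; the pulse shape and solver are illustrative choices, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Hedged 1-D sketch: a noisy signal is written as a weighted sum of time-shifted
# copies of a known pulse shape, and the weights (related to local optical
# absorption) are recovered by non-negative least squares. The Gaussian pulse
# template is an assumption.
n = 400
pulse = np.exp(-0.5*((np.arange(-25, 26))/4.0)**2)        # assumed pulse template

# Build the dictionary: column j is the pulse centred at sample centres[j].
centres = np.arange(0, n, 2)
D = np.zeros((n, centres.size))
for j, c in enumerate(centres):
    lo, hi = max(0, c - 25), min(n, c + 26)
    D[lo:hi, j] = pulse[lo - (c - 25):hi - (c - 25)]

true_w = np.zeros(centres.size)
true_w[[40, 90, 140]] = [1.0, 0.6, 0.8]                   # three absorbers
signal = D @ true_w + 0.05*np.random.default_rng(0).normal(size=n)   # low SNR signal

w, _ = nnls(D, signal)                                    # recovered weight profile
print("largest recovered weights at samples:", sorted(centres[np.argsort(w)[-3:]]))
```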
Overlapping communities from dense disjoint and high total degree clusters
NASA Astrophysics Data System (ADS)
Zhang, Hongli; Gao, Yang; Zhang, Yue
2018-04-01
Community plays an important role in the field of sociology, biology and especially in domains of computer science, where systems are often represented as networks. And community detection is of great importance in the domains. A community is a dense subgraph of the whole graph with more links between its members than between its members to the outside nodes, and nodes in the same community probably share common properties or play similar roles in the graph. Communities overlap when nodes in a graph belong to multiple communities. A vast variety of overlapping community detection methods have been proposed in the literature, and the local expansion method is one of the most successful techniques dealing with large networks. The paper presents a density-based seeding method, in which dense disjoint local clusters are searched and selected as seeds. The proposed method selects a seed by the total degree and density of local clusters utilizing merely local structures of the network. Furthermore, this paper proposes a novel community refining phase via minimizing the conductance of each community, through which the quality of identified communities is largely improved in linear time. Experimental results in synthetic networks show that the proposed seeding method outperforms other seeding methods in the state of the art and the proposed refining method largely enhances the quality of the identified communities. Experimental results in real graphs with ground-truth communities show that the proposed approach outperforms other state of the art overlapping community detection algorithms, in particular, it is more than two orders of magnitude faster than the existing global algorithms with higher quality, and it obtains much more accurate community structure than the current local algorithms without any priori information.
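As a small illustration of the refinement idea, the sketch below defines conductance and performs a greedy boundary-node pass on the karate club graph; this is a simplification, not the paper's seeding or refinement algorithm.

```python
import networkx as nx

# Hedged sketch: the conductance of a community S is cut(S, V\S) / min(vol(S), vol(V\S));
# a greedy pass toggles boundary nodes whenever the move lowers conductance.
def conductance(G, S):
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for node, d in G.degree() if node in S)
    vol_rest = sum(d for node, d in G.degree() if node not in S)
    return cut / max(1, min(vol_S, vol_rest))

def refine(G, S):
    S = set(S)
    improved = True
    while improved:
        improved = False
        boundary = {u for u, v in G.edges() if (u in S) != (v in S)} | \
                   {v for u, v in G.edges() if (u in S) != (v in S)}
        for node in list(boundary):
            trial = S ^ {node}                      # toggle membership of one node
            if trial and trial != set(G) and conductance(G, trial) < conductance(G, S):
                S, improved = trial, True
    return S

G = nx.karate_club_graph()
seed = {0, 1, 2, 3, 7, 13}                           # rough seed community around node 0
refined = refine(G, seed)
print(sorted(refined), round(conductance(G, refined), 3))
```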
Nonlocal means-based speckle filtering for ultrasound images
Coupé, Pierrick; Hellier, Pierre; Kervrann, Charles; Barillot, Christian
2009-01-01
In image processing, restoration is expected to improve the qualitative inspection of the image and the performance of quantitative image analysis techniques. In this paper, an adaptation of the Non Local (NL-) means filter is proposed for speckle reduction in ultrasound (US) images. Originally developed for additive white Gaussian noise, we propose to use a Bayesian framework to derive a NL-means filter adapted to a relevant ultrasound noise model. Quantitative results on synthetic data show the performances of the proposed method compared to well-established and state-of-the-art methods. Results on real images demonstrate that the proposed method is able to preserve accurately edges and structural details of the image. PMID:19482578
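For reference, the snippet below shows classic NL-means (the additive-Gaussian formulation available in scikit-image) applied to a log-transformed synthetic speckled image; this common workaround is not the Bayesian speckle-adapted filter proposed in the paper, and the phantom and parameters are assumptions.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Hedged usage sketch: classic NL-means applied after a log transform, which
# turns multiplicative speckle into roughly additive noise.
rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0                       # simple phantom
speckled = (0.2 + clean) * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative noise

log_img = np.log(speckled)
sigma = float(estimate_sigma(log_img))
denoised = np.exp(denoise_nl_means(log_img, patch_size=5, patch_distance=6,
                                   h=0.8 * sigma, sigma=sigma))
print("noisy MSE:", round(float(np.mean((speckled - (0.2 + clean))**2)), 4),
      "denoised MSE:", round(float(np.mean((denoised - (0.2 + clean))**2)), 4))
```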
Li, Jing; Zhang, Miao; Chen, Lin; Cai, Congbo; Sun, Huijun; Cai, Shuhui
2015-06-01
We employ an amplitude-modulated chirp pulse to selectively excite spins in one or more regions of interest (ROIs) to realize reduced field-of-view (rFOV) imaging based on a single-shot spatiotemporally encoded (SPEN) sequence and Fourier transform reconstruction. The proposed rFOV imaging method was theoretically analyzed, illustrated with numerical simulation, and tested with phantom experiments and in vivo rat experiments. In addition, the point spread function was applied to demonstrate the feasibility of the proposed method. To evaluate the proposed method, the rFOV results were compared with those obtained using the EPI method with orthogonal RF excitation. The simulation and experimental results show that the proposed method can image one or two separated ROIs along the SPEN dimension in a single shot with higher spatial resolution, lower sensitivity to field inhomogeneity, and practically no aliasing artifacts. In addition, the proposed method may produce rFOV images with a signal-to-noise ratio comparable to that of the rFOV EPI images. The proposed method is promising for applications under severe susceptibility heterogeneities and for imaging separate ROIs simultaneously. Copyright © 2015 Elsevier Inc. All rights reserved.
Real-Time GNSS-Based Attitude Determination in the Measurement Domain.
Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun
2017-02-05
A multi-antenna-based GNSS receiver is capable of providing high-precision and drift-free attitude solution. Carrier phase measurements need be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. The static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by 18%, at most. The kinematic result has shown that the proposed method can obtain an optimal balance between accuracy and reliability performance.
Yoon, Ki Young; Park, Chul Woo; Byeon, Jeong Hoon; Hwang, Jungho
2010-03-01
We proposed a rapid method to estimate the efficacies of air controlling devices in situ using ATP bioluminescence in combination with an inertial impactor. The inertial impactor was designed to have a cutoff diameter of 1 μm, and its performance was estimated analytically, numerically, and experimentally. The proposed method was characterized using Staphylococcus epidermidis, which was aerosolized with a nebulizer. The bioaerosol concentrations were estimated within 25 min using the proposed method without a culturing process, which requires several days for colony formation. A linear relationship was obtained between the results of the proposed ATP method (RLU/m³) and the conventional culture-based method (CFU/m³), with R² = 0.9283. The proposed method was applied to estimate the concentration of indoor bioaerosols, which were identified as a mixture of various microbial species including bacteria, fungi, and actinomycetes, in an occupational indoor environment controlled by mechanical ventilation and an air cleaner. Consequently, the proposed method showed linearity with the culture-based method for indoor bioaerosols with R² = 0.8189, even though various kinds of microorganisms existed in the indoor air. The proposed method may be effective in monitoring changes in the relative concentration of indoor bioaerosols and estimating the effectiveness of air control devices in indoor environments.
Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.
Kim, Eunwoo; Park, HyunWook
2017-02-01
The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
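The pairwise decomposition described above can be sketched with scikit-learn's one-vs-one wrapper; the per-class-pair adaptive feature selection is only approximated here by placing a univariate selector inside each pairwise pipeline, and the synthetic data, feature counts, and classifier choice are assumptions rather than the authors' setup.

```python
# Sketch of a pairwise (one-vs-one) classifier ensemble; because the feature
# selector sits inside each pairwise pipeline, every class-pair gets its own
# feature subset, loosely mirroring the "adaptive feature set per class-pair".
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic stand-in for multi-voxel patterns: 4 classes, 500 "voxels".
X, y = make_classification(n_samples=200, n_features=500, n_informative=40,
                           n_classes=4, random_state=0)

pairwise_clf = OneVsOneClassifier(
    make_pipeline(SelectKBest(f_classif, k=50), LinearSVC())
)
print("mean CV accuracy: %.3f" % cross_val_score(pairwise_clf, X, y, cv=5).mean())
```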
Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.
Zhang, Jianguang; Jiang, Jianmin
2018-02-01
While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.
Ultrasound image edge detection based on a novel multiplicative gradient and Canny operator.
Zheng, Yinfei; Zhou, Yali; Zhou, Hao; Gong, Xiaohong
2015-07-01
To achieve fast and accurate segmentation of ultrasound images, a novel edge detection method for speckle-noised ultrasound images was proposed, based on the traditional Canny operator and a novel multiplicative gradient operator. The proposed technique combines a new multiplicative gradient operator of non-Newtonian type with the traditional Canny operator to generate the initial edge map, which is subsequently optimized by a following edge-tracing step. To verify the proposed method, we compared it with several other edge detection methods with good robustness to noise, in experiments on simulated and in vivo medical ultrasound images. Experimental results showed that the proposed algorithm has higher speed for real-time processing, and the edge detection accuracy could reach 75% or more. Thus, the proposed method is very suitable for fast and accurate edge detection of medical ultrasound images. © The Author(s) 2014.
Kim, Dong-Kyu; Park, Won-Woong; Lee, Ho Won; Kang, Seong-Hoon; Im, Yong-Taek
2013-12-01
In this study, a rigorous methodology for quantifying recrystallization kinetics by electron backscatter diffraction is proposed in order to reduce errors associated with the operator's skill. An adaptive criterion to determine adjustable grain orientation spread depending on the recrystallization stage is proposed to better identify the recrystallized grains in the partially recrystallized microstructure. The proposed method was applied in characterizing the microstructure evolution during annealing of interstitial-free steel cold rolled to low and high true strain levels of 0.7 and 1.6, respectively. The recrystallization kinetics determined by the proposed method was found to be consistent with the standard method of Vickers microhardness. The application of the proposed method to the overall recrystallization stages showed that it can be used for the rigorous characterization of progressive microstructure evolution, especially for the severely deformed material. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Robust volcano plot: identification of differential metabolites in the presence of outliers.
Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro
2018-04-11
The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel weight based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .
Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2015-05-01
This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by the integral policy iteration (I-PI) or invariantly admissible PI (IA-PI) method. Based on these, three online I-RL algorithms named explorized I-PI and integral Q-learning I and II are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and, related to these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented in this paper. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.
Multiview echocardiography fusion using an electromagnetic tracking system.
Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; Paakkanen, Riitta; Khan, Nehan; Noga, Michelle; Boulanger, Pierre; Becher, Harald
2016-08-01
Three-dimensional ultrasound is an emerging modality for the assessment of complex cardiac anatomy and function. The advantages of this modality include lack of ionizing radiation, portability, low cost, and high temporal resolution. Major limitations include limited field-of-view, reliance on frequently limited acoustic windows, and poor signal to noise ratio. This study proposes a novel approach to combine multiple views into a single image using an electromagnetic tracking system in order to improve the field-of-view. The novel method has several advantages: 1) it does not rely on image information for alignment, and therefore, the method does not require image overlap; 2) the alignment accuracy of the proposed approach is not affected by any poor image quality as in the case of image registration based approaches; 3) in contrast to previous optical tracking based system, the proposed approach does not suffer from line-of-sight limitation; and 4) it does not require any initial calibration. In this pilot project, we were able to show that using a heart phantom, our method can fuse multiple echocardiographic images and improve the field-of view. Quantitative evaluations showed that the proposed method yielded a nearly optimal alignment of image data sets in three-dimensional space. The proposed method demonstrates the electromagnetic system can be used for the fusion of multiple echocardiography images with a seamless integration of sensors to the transducer.
An Automated Baseline Correction Method Based on Iterative Morphological Operations.
Chen, Yunliang; Dai, Liankui
2018-05-01
Raman spectra usually suffer from baseline drift caused by fluorescence or other factors. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method first adaptively determines the structuring element and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible in handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can potentially be applied to the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
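The core idea, estimating the baseline with repeated grey openings so that peaks are removed while the slowly varying background is kept, can be sketched as follows; the adaptive structuring-element selection and stopping rule of the paper are replaced here by a fixed element size and iteration count, which are illustrative assumptions.

```python
# Sketch of baseline estimation by iterative morphological (grey) opening.
# Fixed structuring-element size and iteration count stand in for the paper's
# adaptive choices; the synthetic spectrum is only for demonstration.
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter1d

def estimate_baseline(spectrum, element_size=51, n_iter=10):
    baseline = spectrum.copy()
    for _ in range(n_iter):
        opened = grey_opening(baseline, size=element_size)          # remove peaks
        baseline = np.minimum(baseline, uniform_filter1d(opened, element_size))
    return baseline

# Synthetic Raman-like spectrum: smooth drifting baseline plus a few peaks.
x = np.linspace(0, 1, 2000)
true_baseline = 2.0 + 1.5 * x + 0.8 * np.sin(3 * x)
peaks = sum(a * np.exp(-((x - c) / 0.004) ** 2)
            for a, c in [(5, 0.2), (3, 0.5), (4, 0.8)])
spectrum = true_baseline + peaks + 0.02 * np.random.default_rng(0).standard_normal(x.size)

baseline = estimate_baseline(spectrum)
corrected = spectrum - baseline
print("mean absolute baseline error:", np.abs(baseline - true_baseline).mean())
```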
A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image
Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen
2014-01-01
Invoice printing uses only two-color printing, so the invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped. The larger the watermark, the more pixels need to be flipped. We propose a new pixels flipping method for invoice images with huge watermarking capacity. The pixels flipping method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only keeps the invoice font structure well but also improves watermarking capacity. PMID:25489606
PSQP: Puzzle Solving by Quadratic Programming.
Andalo, Fernanda A; Taubin, Gabriel; Goldenstein, Siome
2017-02-01
In this article we present the first effective method based on global optimization for the reconstruction of image puzzles comprising rectangular pieces: Puzzle Solving by Quadratic Programming (PSQP). The proposed novel mathematical formulation reduces the problem to the maximization of a constrained quadratic function, which is solved via a gradient ascent approach. The proposed method is deterministic and can deal with arbitrary identical rectangular pieces. We provide experimental results showing its effectiveness when compared to state-of-the-art approaches. Although the method was developed to solve image puzzles, we also show how to apply it to the reconstruction of simulated strip-shredded documents, broadening its applicability.
Du, Shouqiang; Chen, Miao
2018-01-01
We consider a class of nonsmooth optimization problems with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line search and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
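The two ingredients, smoothing the nonsmooth term and then running a Polak-Ribière-Polyak conjugate gradient loop, can be illustrated on a small synthetic problem. The sketch below smooths |x| as sqrt(x² + μ²) and uses a plain PRP+ update with a backtracking line search; it is not the authors' modified three-term method, and the data and parameters are illustrative.

```python
# Sketch: smooth |x_i| as sqrt(x_i^2 + mu^2) and minimize the resulting smooth
# objective with a Polak-Ribiere-Polyak (PRP+) conjugate gradient loop and a
# backtracking (Armijo) line search. Not the authors' modified method.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 20, 60]] = [1.0, -2.0, 1.5]
b = A @ x_true
lam, mu = 0.05, 1e-3                      # l1 weight and smoothing parameter

def f(x):   # smoothed objective: 0.5*||Ax-b||^2 + lam*sum(sqrt(x^2+mu^2))
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(x ** 2 + mu ** 2))

def grad(x):
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + mu ** 2)

x = np.zeros(100)
g = grad(x)
d = -g
for k in range(500):
    if g @ d >= 0:                        # safeguard: restart with steepest descent
        d = -g
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
        t *= 0.5                          # backtracking step
    x_new = x + t * d
    g_new = grad(x_new)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ coefficient
    d = -g_new + beta * d
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-6:
        break
print("nonzeros recovered at indices:", np.nonzero(np.abs(x) > 0.1)[0])
```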
NASA Astrophysics Data System (ADS)
Osman, Essam Eldin A.
2018-01-01
This work presents a comparative study of different approaches to manipulating ratio spectra, applied to a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: the ratio difference spectrophotometric method (RDSM), the amplitude center method (ACM), the first derivative of the ratio spectra (1DD), and mean centering of ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations, and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
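The feature-matching core of such a pipeline can be sketched with OpenCV as below; the geographic overlap computation with GDAL and the final blending step are omitted, and the image file names are placeholders.

```python
# Sketch of the feature-matching core with OpenCV: SIFT keypoints, ratio-test
# matching, RANSAC homography, perspective warp. Overlap computation and
# seam blending are omitted; file names are placeholders.
import cv2
import numpy as np

ref = cv2.imread("reference_tile.tif", cv2.IMREAD_GRAYSCALE)   # placeholder paths
mos = cv2.imread("mosaic_tile.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(mos, None)

# Lowe ratio test on 2-nearest-neighbour matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des2, des1, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

# Warp the mosaic image into the reference frame; blending would follow here.
warped = cv2.warpPerspective(mos, H, (ref.shape[1] + mos.shape[1], ref.shape[0]))
print("matches: %d, RANSAC inliers: %d" % (len(good), int(inlier_mask.sum())))
```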
A BiCGStab2 variant of the IDR(s) method for solving linear equations
NASA Astrophysics Data System (ADS)
Abe, Kuniyoshi; Sleijpen, Gerard L. G.
2012-09-01
The hybrid Bi-Conjugate Gradient (Bi-CG) methods, such as the BiCG STABilized (BiCGSTAB), BiCGstab(l), BiCGStab2 and BiCG×MR2 methods are well-known solvers for solving a linear equation with a nonsymmetric matrix. The Induced Dimension Reduction (IDR)(s) method has recently been proposed, and it has been reported that IDR(s) is often more effective than the hybrid BiCG methods. IDR(s) combining the stabilization polynomial of BiCGstab(l) has been designed to improve the convergence of the original IDR(s) method. We therefore propose IDR(s) combining the stabilization polynomial of BiCGStab2. Numerical experiments show that our proposed variant of IDR(s) is more effective than the original IDR(s) and BiCGStab2 methods.
Use of a New Set of Linguistic Features to Improve Automatic Assessment of Text Readability
ERIC Educational Resources Information Center
Yoshimi, Takehiko; Kotani, Katsunori; Isahara, Hitoshi
2012-01-01
The present paper proposes and evaluates a readability assessment method designed for Japanese learners of EFL (English as a foreign language). The proposed readability assessment method is constructed by a regression algorithm using a new set of linguistic features that were employed separately in previous studies. The results showed that the…
Nonlinear adaptive inverse control via the unified model neural network
NASA Astrophysics Data System (ADS)
Jeng, Jin-Tsong; Lee, Tsu-Tian
1999-03-01
In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose an approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for feedforward/recurrent neural networks. The proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
An adaptive random search for short term generation scheduling with network constraints.
Marmolejo, J A; Velasco, Jonás; Selley, Héctor J
2017-01-01
This paper presents an adaptive random search approach to address short-term generation scheduling with network constraints, which determines the startup and shutdown schedules of thermal units over a given planning horizon. In this model, we consider the transmission network through capacity limits and line losses. The mathematical model is stated in the form of a mixed-integer nonlinear problem with binary variables. The proposed heuristic is a population-based method that generates a set of new potential solutions via a random search strategy. The random search is based on the Markov Chain Monte Carlo method. The key feature of the proposed method is that the noise level of the random search is adaptively controlled in order to explore and exploit the entire search space. In order to improve the solutions, we couple a local search with the random search process. Several test systems are presented to evaluate the performance of the proposed heuristic. We use a commercial optimizer to compare the quality of the solutions provided by the proposed method. The solution of the proposed algorithm showed a significant reduction in computational effort with respect to the full-scale outer approximation commercial solver. Numerical results show the potential and robustness of our approach.
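The adaptively controlled noise level can be illustrated on a toy continuous test function: the perturbation scale grows after accepted moves (exploration) and shrinks after rejections (exploitation). The sketch below is generic and does not model the unit-commitment problem itself; all constants are illustrative.

```python
# Generic sketch of an adaptive random search: the perturbation scale is
# loosened when proposals are accepted and tightened when rejected, on a toy
# multimodal test function (not the unit-commitment MINLP of the paper).
import numpy as np

def objective(x):                      # Rastrigin test function
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, size=10)
best, sigma = objective(x), 1.0

for it in range(20000):
    cand = x + rng.normal(scale=sigma, size=x.size)   # random search step
    val = objective(cand)
    if val < best:                     # accept improving moves
        x, best = cand, val
        sigma = min(sigma * 1.1, 2.0)  # explore more after success
    else:
        sigma = max(sigma * 0.999, 1e-3)  # exploit: shrink the noise level
print("best value found: %.4f" % best)
```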
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.
A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks
Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong
2016-01-01
In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
Selection-Fusion Approach for Classification of Datasets with Missing Values
Ghannad-Rezaie, Mostafa; Soltanian-Zadeh, Hamid; Ying, Hao; Dong, Ming
2010-01-01
This paper proposes a new approach based on missing value pattern discovery for classifying incomplete data. This approach is particularly designed for classification of datasets with a small number of samples and a high percentage of missing values where available missing value treatment approaches do not usually work well. Based on the pattern of the missing values, the proposed approach finds subsets of samples for which most of the features are available and trains a classifier for each subset. Then, it combines the outputs of the classifiers. Subset selection is translated into a clustering problem, allowing derivation of a mathematical framework for it. A trade-off is established between the computational complexity (number of subsets) and the accuracy of the overall classifier. To deal with this trade-off, a numerical criterion is proposed for the prediction of the overall performance. The proposed method is applied to seven datasets from the popular University of California, Irvine data mining archive and an epilepsy dataset from Henry Ford Hospital, Detroit, Michigan (total of eight datasets). Experimental results show that classification accuracy of the proposed method is superior to those of the widely used multiple imputations method and four other methods. They also show that the level of superiority depends on the pattern and percentage of missing values. PMID:20212921
Testing for intracycle determinism in pseudoperiodic time series.
Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A
2008-06-01
A determinism test is proposed based on the well-known method of the surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in the pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series and the results show the applicability of the proposed test.
A KARAOKE System Singing Evaluation Method that More Closely Matches Human Evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo
KARAOKE is a popular amusement for old and young. Many KARAOKE machines have singing evaluation function. However, it is often said that the scores given by KARAOKE machines do not match human evaluation. In this paper a KARAOKE scoring method strongly correlated with human evaluation is proposed. This paper proposes a way to evaluate songs based on the distance between singing pitch and musical scale, employing a vibrato extraction method based on template matching of spectrum. The results show that correlation coefficients between scores given by the proposed system and human evaluation are -0.76∼-0.89.
NASA Astrophysics Data System (ADS)
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method achieves significant area improvement compared with previous designs.
NASA Astrophysics Data System (ADS)
Khan, Baseem; Agnihotri, Ganga; Mishra, Anuprita S.
2016-03-01
In the present work, the authors propose a novel method for transmission loss and cost allocation to users (generators and loads). In the developed methodology, transmission losses are allocated to users based on their usage of the transmission line. After usage allocation, particular loss allocation indices (PLAI) are evaluated for loads and generators. A cooperative game theory approach is also applied for comparison of the results. The proposed method is simple and easy to implement on a practical power system. A sample 6-bus system and the IEEE 14-bus system are used to show the effectiveness of the proposed method.
Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.
Jung, Young-Ho; Choi, Jihoon
2017-02-25
A new hybrid automatic repeat request (HARQ) scheme for multicast service in wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet is successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to ensure maximum spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions for realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-06-17
We have previously proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method"). Our new method detects vehicles based on the thermal energy reflection of tires. We have conducted experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring, and show the effectiveness of our proposal.
Cell tracking for cell image analysis
NASA Astrophysics Data System (ADS)
Bise, Ryoma; Sato, Yoichi
2017-04-01
Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which are capable of obtaining fine-grained cell behavior metrics. In order to address difficulties under dense culture conditions, where cells often touch and have blurry intercellular boundaries so that cell detection cannot be done reliably, we propose two methods: global data association, and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying them to biological research.
A new Newton-like method for solving nonlinear equations.
Saheya, B; Chen, Guo-Qing; Sui, Yun-Kang; Wu, Cai-Ying
2016-01-01
This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator, which generalizes the local linear model. We then employ the new approximation for nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one matrix at each iteration and attains quadratic convergence. Numerical results and comparisons show that the proposed method is efficient.
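The authors' rational approximation model is not spelled out in the abstract, but the idea of revising the Jacobian approximation by a rank-one matrix each iteration can be illustrated with a Broyden-style update on a small nonlinear system; the system and starting point below are illustrative assumptions.

```python
# Sketch of a Newton-like iteration that revises the Jacobian approximation by
# a rank-one matrix each step (a Broyden-style update); the authors' rational
# approximation model is not reproduced here.
import numpy as np

def F(x):            # example nonlinear system F(x) = 0
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

x = np.array([1.0, 1.0])
B = np.eye(2)                     # initial Jacobian approximation
for k in range(50):
    fx = F(x)
    if np.linalg.norm(fx) < 1e-10:
        break
    s = np.linalg.solve(B, -fx)   # quasi-Newton step
    x_new = x + s
    y = F(x_new) - fx
    B = B + np.outer(y - B @ s, s) / (s @ s)   # rank-one revision of B
    x = x_new
print("solution:", x, "residual:", np.linalg.norm(F(x)))
```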
NASA Astrophysics Data System (ADS)
Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong
2018-03-01
Target identification on the sea battlefield is a prerequisite for assessing enemy forces in modern naval combat. In this paper, a collaborative identification method based on a convolutional neural network is proposed to identify typical sea-battlefield targets. Different from the traditional single-input/single-output identification method, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that
GHM method for obtaining rational solutions of nonlinear differential equations.
Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo
2015-01-01
In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification: 34L30.
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against the simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure.
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400
The Teacher's Guide to Winning Grants.
ERIC Educational Resources Information Center
Bauer, David G.
This step-by-step primer helps teachers develop effective grantseeking methods that will enable them to secure funds. It shows teachers how to select the right funding sources, organize proposal ideas, write a convincing and well-prepared proposal, identify who will evaluate the proposal and the scoring system they will use, and efficiently…
Comparative study of signalling methods for high-speed backplane transceiver
NASA Astrophysics Data System (ADS)
Wu, Kejun
2017-11-01
A combined analysis of transient simulation and statistical methods is proposed for a comparative study of signalling methods applied to high-speed backplane transceivers. This method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link based on a four-dimensional design space, including channel characteristics, noise scenarios, equalisation schemes, and signalling methods. The proposed combined analysis method chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) signalling using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit error rate degradation when the noise level increases.
Cai, Jian-Hua
2017-09-01
To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated based on the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method. Experimental results show that the proposed combined method is effective in completing pretreatment of the NIR spectrum, and the proposed method improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.
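The two pretreatment ingredients can be sketched as follows, assuming the third-party PyEMD and PyWavelets packages are installed; the choice of which intrinsic mode functions to denoise and the thresholding rule are illustrative, not the paper's settings, and a synthetic signal stands in for the NIR spectrum.

```python
# Sketch of empirical mode decomposition (third-party PyEMD, assumed installed)
# followed by wavelet soft-thresholding of the high-frequency components with
# PyWavelets. IMF selection and thresholding rule are illustrative choices.
import numpy as np
import pywt
from PyEMD import EMD

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 23 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)        # stand-in for an NIR spectrum

def wavelet_soft_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thr = sigma * np.sqrt(2 * np.log(x.size))             # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]

imfs = EMD().emd(noisy)                                   # intrinsic mode functions
imfs[:2] = [wavelet_soft_denoise(imf) for imf in imfs[:2]]  # denoise high-freq IMFs
denoised = np.sum(imfs, axis=0)
print("RMSE before: %.3f  after: %.3f" %
      (np.sqrt(np.mean((noisy - clean) ** 2)), np.sqrt(np.mean((denoised - clean) ** 2))))
```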
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem
2013-09-01
Four simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for simultaneous determination of simvastatin (SM) and ezetimibe (EZ) namely; extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM) and absorption factor (AFM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined, and the methods were validated and the specificity was assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied for the determination of the cited drugs in tablets and the obtained results were statistically compared with each other and with those of a reported HPLC method. The comparison showed that there is no significant difference between the proposed methods and the reported method regarding both accuracy and precision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn
We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
Acoustic pressure measurement of pulsed ultrasound using acousto-optic diffraction
NASA Astrophysics Data System (ADS)
Jia, Lecheng; Chen, Shili; Xue, Bin; Wu, Hanzhong; Zhang, Kai; Yang, Xiaoxia; Zeng, Zhoumo
2018-01-01
Compared with continuous-wave ultrasound, pulsed ultrasound has been widely used in ultrasound imaging. The aim of this work is to show the applicability of acousto-optic diffraction to pulsed ultrasound transducers. In this paper, the acoustic pressure of two ultrasound transducers is measured based on Raman-Nath diffraction. The transducer frequencies are 5 MHz and 10 MHz. The pulse-echo method and simulation data are used to evaluate the results. The results show that the proposed method is capable of measuring the absolute sound pressure. We obtain a sectional view of the acoustic pressure using an auxiliary displacement platform. Compared with traditional sound pressure measurement methods, the proposed method is non-invasive with high sensitivity and spatial resolution.
A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.
Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun
2017-07-01
Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCI), as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and to reduce time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from mental states based on EEG signals in BCI applications. We apply the correlation-based variable selection method with best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least squares support vector machine (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing them with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained features. The average sensitivity, specificity, and classification accuracy for these two classifiers are the same: 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement on dataset IVa. The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for extracting representative information from mental-state EEG signals in BCI applications and for reducing the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. All rights reserved.
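A loose sketch of the PCA-plus-cross-covariance idea on synthetic two-class "epochs" is shown below; the ordering of the two steps, the template choice, the lag range, and the classifier are illustrative assumptions rather than the authors' pipeline.

```python
# Loose sketch: cross-covariance (CCOV) of each epoch with class templates
# gives raw features, PCA compresses them, and a simple classifier is
# cross-validated. Dimensions, lags, and the template choice are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 256
y = rng.integers(0, 2, n_trials)
t = np.linspace(0, 1, n_samples)
# Two synthetic "mental states": different dominant rhythms plus noise.
X = np.vstack([np.sin(2 * np.pi * (8 if c else 12) * t) +
               0.8 * rng.standard_normal(n_samples) for c in y])

# Class templates (computed on the full set for brevity; fold-wise in practice).
templates = [X[y == c].mean(axis=0) for c in (0, 1)]

def ccov(a, b, max_lag=10):
    a, b = a - a.mean(), b - b.mean()
    return np.array([np.mean(a * np.roll(b, lag)) for lag in range(-max_lag, max_lag + 1)])

raw = np.vstack([np.hstack([ccov(x, tpl) for tpl in templates]) for x in X])
clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
print("cross-validated accuracy: %.3f" % cross_val_score(clf, raw, y, cv=5).mean())
```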
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
NASA Astrophysics Data System (ADS)
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, this method needs to calculate the optimal value of each objective function in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution named the "provisional ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited application scope. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to a DDP. As a result, the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
Nariai, N; Kim, S; Imoto, S; Miyano, S
2004-01-01
We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.
[A graph cuts-based interactive method for segmentation of magnetic resonance images of meningioma].
Li, Shuan-qiang; Feng, Qian-jin; Chen, Wu-fan; Lin, Ya-zhong
2011-06-01
For accurate segmentation of magnetic resonance (MR) images of meningioma, we propose a novel interactive segmentation method based on graph cuts. High-dimensional image features were extracted, and for each pixel, the probabilities of belonging to the tumor or the background region were estimated using a weighted K-nearest neighbor classifier. Based on these probabilities, a new energy function was proposed. Finally, a graph-cut optimization framework was used to minimize the energy function. The proposed method was evaluated on the segmentation of MR images of meningioma, and the results showed that the method significantly improved the segmentation accuracy compared with the gray-level-information-based graph cut method.
Li, Xu; Xia, Rongmin; He, Bin
2008-01-01
A new tomographic algorithm for reconstructing a curl-free vector field, whose divergence serves as acoustic source is proposed. It is shown that under certain conditions, the scalar acoustic measurements obtained from a surface enclosing the source area can be vectorized according to the known measurement geometry and then be used to reconstruct the vector field. The proposed method is validated by numerical experiments. This method can be easily applied to magnetoacoustic tomography with magnetic induction (MAT-MI). A simulation study of applying this method to MAT-MI shows that compared to existing methods, the proposed method can give an accurate estimation of the induced current distribution and a better reconstruction of electrical conductivity within an object.
Detecting text in natural scenes with multi-level MSER and SWT
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Liu, Renjun
2018-04-01
The detection of characters in natural scenes is susceptible to factors such as complex backgrounds, variable viewing angles, and diverse forms of language, which lead to poor detection results. Aiming at these problems, a new text detection method is proposed, which consists of two main stages: candidate region extraction and text region detection. In the first stage, the method applies multiple scale transformations of the original image and multiple thresholds of maximally stable extremal regions (MSER) to detect candidate character regions comprehensively. In the second stage, stroke width transform (SWT) maps are computed for the candidate regions, and cascaded classifiers are used to filter out non-text regions. The proposed method was evaluated on the standard ICDAR2011 benchmark dataset and on a dataset of our own. The experimental results showed that the proposed method achieves a clear improvement over other text detection methods.
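The candidate-extraction stage can be sketched with OpenCV's MSER detector run at more than one scale, as below; the SWT computation and the cascaded non-text filtering are not reproduced, and the image path is a placeholder.

```python
# Sketch of the candidate-extraction stage: MSER regions detected at two
# image scales with OpenCV. SWT and the cascaded non-text filtering of the
# second stage are not reproduced; the image path is a placeholder.
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path
mser = cv2.MSER_create()

candidates = []
for scale in (1.0, 0.5):                               # multi-scale detection
    resized = cv2.resize(img, None, fx=scale, fy=scale)
    regions, boxes = mser.detectRegions(resized)
    # Map bounding boxes (x, y, w, h) back to original-image coordinates.
    candidates += [tuple(int(v / scale) for v in box) for box in boxes]

print("candidate text regions:", len(candidates))
```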
NASA Astrophysics Data System (ADS)
Zhang, Yi; Wu, Yulong; Yan, Jianguo; Wang, Haoran; Rodriguez, J. Alexis P.; Qiu, Yue
2018-04-01
In this paper, we propose an inverse method for full gravity gradient tensor data in the spherical coordinate system. As opposed to traditional gravity inversion in the Cartesian coordinate system, our proposed method takes the curvature of the Earth, the Moon, or other planets into account, using tesseroid bodies to produce gravity gradient effects in forward modeling. We used both synthetic and observed datasets to test the stability and validity of the proposed method. Our results using synthetic gravity data show that our new method predicts the depth of the anomalous density body efficiently and accurately. Using observed gravity data for the Mare Smythii area on the Moon, the inverted crustal density distribution in this area reveals its geological structure. These results validate the proposed method and its potential application to large-area data inversion of planetary geological structures.
Preconditioned alternating direction method of multipliers for inverse problems with constraints
NASA Astrophysics Data System (ADS)
Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie
2017-02-01
We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
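For orientation, a plain (unpreconditioned) ADMM on a finite-dimensional ℓ1-regularized least-squares problem shows the x-update / z-update / multiplier-update structure that the preconditioning targets; the synthetic data, penalty parameter, and fixed iteration count are illustrative assumptions.

```python
# Plain ADMM for an l1-regularized least-squares problem, shown only to
# illustrate the x-/z-/multiplier-update structure that a preconditioned
# variant modifies; no preconditioning or stopping rule from the paper.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[[7, 30, 90]] = [2.0, -1.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(60)

lam, rho = 0.1, 1.0
x = z = u = np.zeros(120)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(120))       # factor once, reuse

soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for it in range(300):
    # x-update: solve the linear system (the costly step a preconditioner
    # would cheapen in the infinite-dimensional setting)
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    z = soft(x + u, lam / rho)                        # z-update: proximal of l1
    u = u + x - z                                     # scaled multiplier update
print("recovered support:", np.nonzero(np.abs(z) > 0.1)[0])
```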
Hybrid recommendation methods in complex networks.
Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N
2015-07-01
We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement of performances of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
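The convex combination of user-based and object-based scores can be sketched on a tiny binary user-object matrix; plain cosine similarity replaces the paper's specific normalizations, and the combination weight is illustrative.

```python
# Minimal sketch of a hybrid recommendation score: a convex combination of
# user-similarity-propagated and object-similarity-propagated scores on a
# small binary user-object matrix; cosine similarity and lambda are assumptions.
import numpy as np

R = np.array([[1, 1, 0, 0, 1],          # users x objects (1 = collected)
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 1, 0]], dtype=float)

def cosine(M):
    normed = M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-12)
    return normed @ normed.T

S_user, S_obj = cosine(R), cosine(R.T)
score_user = S_user @ R                  # user-based propagation
score_obj = R @ S_obj                    # object-based propagation

lam = 0.5                                # convex combination weight
score = lam * score_user + (1 - lam) * score_obj
score[R > 0] = -np.inf                   # do not re-recommend collected items
print("top recommendation per user:", score.argmax(axis=1))
```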
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representation for words or documents. However, its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use SSA to improve the information retrieval performance, and propose a SSA-based retrieval method under the language model framework. First, SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations can be used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard text retrieval conference (TREC) collections. Experiment results on standard TREC collections show the proposed models consistently outperform the existing Wikipedia-based retrieval methods.
Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion
NASA Astrophysics Data System (ADS)
Zou, Cuiming; Kou, Kit Ian
2018-05-01
Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a class of special functions that have been proven to perform well in signal recovery. However, the existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on the Gaussianity assumption of the noise distribution. For non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive, which may lead to large reconstruction errors. Unlike the existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noise. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.
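A small sketch of the maximum correntropy idea follows: expansion coefficients are fitted under a Gaussian-kernel correntropy objective via the standard half-quadratic (iteratively reweighted least squares) trick. A cosine basis stands in for the PSWFs, and the kernel width sigma and iteration count are assumptions; the sketch only illustrates why MCC is robust to impulsive noise, not the paper's algorithm.

```python
import numpy as np

def mcc_fit(Phi, y, sigma=0.5, n_iter=30):
    """Fit y ~ Phi @ c under the maximum correntropy criterion.

    Half-quadratic trick: MCC becomes iteratively reweighted least squares,
    where samples with large residuals (impulsive noise, outliers) receive
    exponentially small weights."""
    c = np.linalg.lstsq(Phi, y, rcond=None)[0]      # MSE initialization
    for _ in range(n_iter):
        r = y - Phi @ c
        w = np.exp(-r ** 2 / (2.0 * sigma ** 2))    # Gaussian-kernel weights
        W = np.diag(w)
        c = np.linalg.solve(Phi.T @ W @ Phi + 1e-10 * np.eye(Phi.shape[1]),
                            Phi.T @ W @ y)
    return c

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 200)
    # stand-in basis (NOT PSWFs): a few cosines, nearly orthogonal on [0, 1]
    Phi = np.column_stack([np.cos(k * np.pi * t) for k in range(6)])
    c_true = rng.standard_normal(6)
    y = Phi @ c_true + 0.01 * rng.standard_normal(200)
    y[::20] += 5.0                                  # impulsive outliers
    c_mse = np.linalg.lstsq(Phi, y, rcond=None)[0]
    print("MSE fit coefficient error:", round(float(np.linalg.norm(c_mse - c_true)), 3))
    print("MCC fit coefficient error:", round(float(np.linalg.norm(mcc_fit(Phi, y) - c_true)), 3))
```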
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir
2014-08-01
In this paper, a new computational method based on the generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. The rate of convergence of the proposed method is also considered and shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on some examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
Onojima, Takayuki; Goto, Takahiro; Mizuhara, Hiroaki; Aoyagi, Toshio
2018-01-01
Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. A neural oscillation is a rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show same-frequency and cross-frequency synchronization for various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results.
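As a simplified illustration of identifying a phase oscillator model from data (here the 1:1 case rather than the paper's cross-frequency setting), the sketch below fits dphi1/dt to a truncated Fourier series in the phase difference by least squares; the model order and the Euler-simulated test signals are assumptions.

```python
import numpy as np

def estimate_coupling(phi1, phi2, dt, order=3):
    """Least-squares fit of the phase model
       dphi1/dt = omega + sum_k a_k*cos(k*(phi2-phi1)) + b_k*sin(k*(phi2-phi1))
    from two measured phase time series (1:1 synchronization case).
    Returns the natural frequency and the Fourier coefficients of the
    coupling function."""
    dphi = np.diff(np.unwrap(phi1)) / dt            # phase velocity of oscillator 1
    delta = (phi2 - phi1)[:-1]                      # phase difference
    cols = [np.ones_like(delta)]
    for k in range(1, order + 1):
        cols += [np.cos(k * delta), np.sin(k * delta)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, dphi, rcond=None)
    return coef[0], coef[1:]

if __name__ == "__main__":
    dt, n = 0.01, 20000
    phi1 = np.zeros(n); phi2 = np.zeros(n)
    for i in range(n - 1):                          # Euler simulation of coupled phases
        phi1[i + 1] = phi1[i] + dt * (2.0 + 0.3 * np.sin(phi2[i] - phi1[i]))
        phi2[i + 1] = phi2[i] + dt * 3.1            # detuned so the phases keep drifting
    omega, fourier = estimate_coupling(phi1, phi2, dt)
    # fourier = [a1, b1, a2, b2, ...]; the true coupling is 0.3*sin(delta)
    print("omega ~", round(float(omega), 3), " sin(delta) coefficient ~", round(float(fourier[1]), 3))
```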
A Lift-Off-Tolerant Magnetic Flux Leakage Testing Method for Drill Pipes at Wellhead.
Wu, Jianbo; Fang, Hui; Li, Long; Wang, Jie; Huang, Xiaoming; Kang, Yihua; Sun, Yanhua; Tang, Chaoqing
2017-01-21
To meet the great needs for MFL (magnetic flux leakage) inspection of drill pipes at wellheads, a lift-off-tolerant MFL testing method is proposed and investigated in this paper. Firstly, a Helmholtz coil magnetization method and the whole MFL testing scheme are proposed. Then, based on the magnetic field focusing effect of ferrite cores, a lift-off-tolerant MFL sensor is developed and tested. It shows high sensitivity at a lift-off distance of 5.0 mm. Further, the follow-up high repeatability MFL probing system is designed and manufactured, which was embedded with the developed sensors. It can track the swing movement of drill pipes and allow the pipe ends to pass smoothly. Finally, the developed system is employed in a drilling field for drill pipe inspection. Test results show that the proposed method can fulfill the requirements for drill pipe inspection at wellheads, which is of great importance in drill pipe safety.
NASA Astrophysics Data System (ADS)
Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin
2018-05-01
Temperature is usually treated as a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct the effect of temperature variations. However, temperature can also be regarded as a constructive parameter that provides detailed chemical information when it is systematically changed during the measurement. Our group has investigated the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. The experimental results show that the prediction performance is improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative means of improving prediction accuracy in near-infrared spectral measurement.
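The abstract does not give the selection rule; a minimal stand-in for the idea of controlling the temperature distribution of the calibration set is sketched below, where samples are drawn evenly from temperature bins so the calibration set spans the whole measured range. The bin count and sampling scheme are illustrative assumptions, not the MTCS/DTCS procedures.

```python
import numpy as np

def select_calibration_set(temperatures, n_select, n_bins=5, seed=0):
    """Pick calibration samples so that their temperatures cover the whole
    measured range: the temperature axis is split into bins and (roughly)
    the same number of samples is drawn from every bin."""
    rng = np.random.default_rng(seed)
    temperatures = np.asarray(temperatures)
    edges = np.linspace(temperatures.min(), temperatures.max(), n_bins + 1)
    bins = np.clip(np.digitize(temperatures, edges[1:-1]), 0, n_bins - 1)
    per_bin = max(1, n_select // n_bins)
    chosen = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if idx.size:
            chosen.extend(rng.choice(idx, size=min(per_bin, idx.size), replace=False))
    return np.array(sorted(chosen))

if __name__ == "__main__":
    temps = np.random.default_rng(3).uniform(20, 45, size=120)   # sample temperatures in deg C
    cal = select_calibration_set(temps, n_select=30)
    print(len(cal), "samples chosen, temperature span:",
          round(float(temps[cal].min()), 1), "-", round(float(temps[cal].max()), 1))
```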
Benhammouda, Brahim; Vazquez-Leal, Hector
2016-01-01
This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it relies on a simple procedure based on a few straightforward steps and can be combined with any analytical method, other than the DTM, such as the homotopy perturbation method.
Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko
2013-01-01
The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
Qian, Liangliang; Li, Ruixian; Di, Qiannan; Shen, Yang; Xu, Qian; Li, Jian
2017-09-01
A method was established for the analysis of nonylphenol (NP) in rat urine samples based on a solid-phase extraction (SPE) procedure with an amino-functionalized polyacrylonitrile nanofibers mat (NH2-PAN NFsM) as sorbent coupled with high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The calibration curves prepared on three different days showed good linearity over a wide range of NP concentrations from 0.1 to 100.0 ng/mL. Remarkably, the proposed NH2-PAN NFsM-based SPE method showed superior extraction efficiency with the consumption of only 4 mg of sorbent and 500 μL of eluent. The eluent, without any further concentration, was directly analyzed by HPLC-MS/MS. As a result, a simple and effective sample preparation was achieved. In addition, the notably low detection limit (LOD) of 0.03 ng/mL revealed the excellent sensitivity of the proposed method in comparison with those in the literature. The recoveries ranged from 85.0% to 114.8% with relative standard deviations (RSDs) ranging from 7.5% to 13.7%, which were better than or comparable to those from published methods, suggesting high accuracy of the proposed method. The proposed method was applied in a primary study on the disposition of nonylphenol after long-term low-level exposure in rats, providing information for health risk assessment of real scenarios of NP exposure. NH2-PAN NFsM shows great potential as a novel SPE sorbent for the analysis of biological samples.
Spatial Mutual Information Based Hyperspectral Band Selection for Classification
2015-01-01
The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
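A hedged sketch of the spatial-mutual-information idea follows: each band is locally averaged to inject spatial context before a histogram-based mutual information is computed, and bands are selected greedily by relevance to a reference map minus redundancy with the bands already chosen. The smoothing filter, the reference map in the demo, and the greedy criterion are assumptions rather than the paper's exact measure and selection rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

def select_bands(cube, reference, n_bands, smooth=3):
    """Greedy band selection: maximize 'spatial' MI with a reference map
    while penalizing redundancy with already selected bands.  Spatial
    context is injected by locally averaging each band before the MI is
    computed (a simple stand-in for the paper's spatial MI measure)."""
    smoothed = [uniform_filter(cube[..., k], smooth) for k in range(cube.shape[-1])]
    ref = uniform_filter(reference, smooth)
    selected = []
    for _ in range(n_bands):
        best, best_score = None, -np.inf
        for k in range(cube.shape[-1]):
            if k in selected:
                continue
            relevance = mutual_information(smoothed[k], ref)
            redundancy = np.mean([mutual_information(smoothed[k], smoothed[j])
                                  for j in selected]) if selected else 0.0
            if relevance - redundancy > best_score:
                best, best_score = k, relevance - redundancy
        selected.append(best)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    cube = rng.random((64, 64, 10))                          # toy hyperspectral cube
    reference = cube[..., 2] + 0.1 * rng.random((64, 64))    # toy reference map
    print("selected bands:", select_bands(cube, reference, n_bands=3))
```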
Android malware detection based on evolutionary super-network
NASA Astrophysics Data System (ADS)
Yan, Haisheng; Peng, Lingling
2018-04-01
In this paper, an Android malware detection method based on an evolutionary super-network is proposed in order to improve the precision of Android malware detection. The chi-square statistic is used for selecting characteristics on the basis of analyzing Android authority (permission) information, and Boolean weighting is utilized for calculating characteristic weights. The processed characteristic vectors are used as the training set and test set; a hyperedge alternative strategy is used to train the super-network classification model, thereby classifying the test set characteristic vectors, and the result is compared with traditional classification algorithms. The results show that the detection method proposed in this paper is close to or better than traditional classification algorithms, and is an effective means of Android malware detection.
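As a rough illustration of the feature selection step described above, the snippet below scores Boolean permission features against a malware/benign label with the chi-square statistic of their 2x2 contingency tables and keeps the top-scoring ones; the toy data and the number of retained features are assumptions, and the super-network classifier itself is not shown.

```python
import numpy as np

def chi_square_scores(X_bool, y):
    """Chi-square statistic of each Boolean (permission) feature against
    the binary malware/benign label, computed from the 2x2 contingency
    table of (feature present?, malicious?)."""
    y = np.asarray(y, dtype=bool)
    n = len(y)
    scores = np.zeros(X_bool.shape[1])
    for j in range(X_bool.shape[1]):
        f = X_bool[:, j].astype(bool)
        obs = np.array([[np.sum(f & y),  np.sum(f & ~y)],
                        [np.sum(~f & y), np.sum(~f & ~y)]], dtype=float)
        row = obs.sum(axis=1, keepdims=True)
        col = obs.sum(axis=0, keepdims=True)
        exp = row @ col / n                 # expected counts under independence
        nz = exp > 0
        scores[j] = np.sum((obs[nz] - exp[nz]) ** 2 / exp[nz])
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    y = rng.integers(0, 2, 200)                 # 1 = malware, 0 = benign
    X = rng.integers(0, 2, (200, 6))            # toy Boolean permission matrix
    X[:, 0] = y ^ (rng.random(200) < 0.05)      # feature 0 is highly informative
    top = np.argsort(chi_square_scores(X, y))[::-1][:3]
    print("top permissions by chi-square:", top)   # keep X[:, top] with Boolean weights
```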
Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.
2013-09-01
In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which is widely applicable in fuel ignition of combustion theory and heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^{(α,β)}(x) with α, β ∈ (-1, ∞), x ∈ [0, 1] and n the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples have been discussed to demonstrate the validity and applicability of the proposed technique. Comparing the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L
2018-01-01
An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on the dots' height, among other features. Determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with the data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dots' heights obtained were used to calculate the predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height.
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) shows great potential to separate the correlative sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with volume constraint is derived and utilized for solving online BSS. The volume constraint to the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural gradient based multiplication updating rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and high correlative face images show the validity of the proposed method.
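For orientation only, here is the core multiplicative-update NMF rule that methods of this family build on; the paper's incremental (online) processing and the volume constraint on the mixing matrix are not implemented, and the rank, iteration count and initialization below are assumptions.

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=500, seed=0):
    """Basic NMF, X ~ A @ S with A, S >= 0, via multiplicative updates.
    In the online setting of the paper the columns of X would arrive one by
    one and a volume penalty on the mixing matrix A would be added; only the
    core multiplicative rule is shown here."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = rng.random((m, rank)) + 0.1       # mixing matrix
    S = rng.random((rank, n)) + 0.1       # source matrix
    eps = 1e-9
    for _ in range(n_iter):
        S *= (A.T @ X) / (A.T @ A @ S + eps)
        A *= (X @ S.T) / (A @ S @ S.T + eps)
    return A, S

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    A_true = np.abs(rng.standard_normal((8, 3)))
    S_true = np.abs(rng.standard_normal((3, 200)))
    X = A_true @ S_true                    # noiseless nonnegative mixtures
    A, S = nmf_multiplicative(X, rank=3)
    print("relative reconstruction error:",
          round(float(np.linalg.norm(X - A @ S) / np.linalg.norm(X)), 4))
```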
NASA Astrophysics Data System (ADS)
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs) and consumes significant memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of all the VLCTs. The decoded codeword from the VLCTs can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
Space debris detection in optical image sequences.
Xi, Jiangbo; Wen, Desheng; Ersoy, Okan K; Yi, Hongwei; Yao, Dalei; Song, Zongxi; Xi, Shaobo
2016-10-01
We present a high-accuracy, low false-alarm rate, and low computational-cost methodology for removing stars and noise and detecting space debris with low signal-to-noise ratio (SNR) in optical image sequences. First, time-index filtering and bright star intensity enhancement are implemented to remove stars and noise effectively. Then, a multistage quasi-hypothesis-testing method is proposed to detect the pieces of space debris with continuous and discontinuous trajectories. For this purpose, a time-index image is defined and generated. Experimental results show that the proposed method can detect space debris effectively without any false alarms. When the SNR is higher than or equal to 1.5, the detection probability can reach 100%, and when the SNR is as low as 1.3, 1.2, and 1, it can still achieve 99%, 97%, and 85% detection probabilities, respectively. Additionally, two large sets of image sequences are tested to show that the proposed method performs stably and effectively.
Sorting protein decoys by machine-learning-to-rank
Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen
2016-01-01
Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on the learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and that the quasi single-model method can further enhance the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
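A minimal sketch of the underlying assumption, that the centre-to-limb variation (CLV) of the quiet Sun does not vary with time, is given below: a radial median profile is taken as the CLV estimate and the image is divided by it to obtain a contrast map in which plage and network stand out. The known disc centre/radius, the ring binning and the synthetic test image are assumptions; the paper's actual calibration pipeline is more elaborate.

```python
import numpy as np

def clv_profile_and_contrast(img, cx, cy, radius, n_rings=50):
    """Estimate the quiet-Sun centre-to-limb variation as a radial median
    profile and return the image divided by it, so that quiet regions sit
    near 1 and plage/network appear as excess contrast.  The disc centre
    (cx, cy) and radius are assumed known from a prior limb-fitting step."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - cx, yy - cy) / radius                  # fractional radius
    edges = np.linspace(0.0, 1.0, n_rings + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    profile = np.array([np.median(img[(r >= lo) & (r < hi)])
                        if np.any((r >= lo) & (r < hi)) else np.nan
                        for lo, hi in zip(edges[:-1], edges[1:])])
    good = ~np.isnan(profile)
    clv_map = np.interp(r, centres[good], profile[good])     # smooth CLV estimate
    contrast = np.where(r < 1.0, img / np.maximum(clv_map, 1e-9), 0.0)
    return profile, contrast

if __name__ == "__main__":
    yy, xx = np.indices((256, 256))
    r = np.hypot(xx - 128, yy - 128) / 120.0
    disc = np.where(r < 1.0, 1.0 - 0.6 * r ** 2, 0.0)         # synthetic limb darkening
    disc += ((xx - 90) ** 2 + (yy - 90) ** 2 < 64) * 0.4      # small bright "plage" patch
    _, contrast = clv_profile_and_contrast(disc, 128, 128, 120)
    print("plage contrast ~", round(float(contrast[90, 90]), 2),
          "; quiet-Sun contrast ~", round(float(contrast[60, 128]), 2))
```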
NASA Astrophysics Data System (ADS)
Rong, J. H.; Yi, J. H.
2010-10-01
In density-based topological design, one expects that the final result consists of elements that are either black (solid material) or white (void), without any grey areas. Moreover, one also expects that the optimal topology can be obtained by starting from any initial topology configuration. An improved structural topological optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase transferring step. Firstly, an optimization model is built to deal with the varied displacement limits, design space adjustments, and reasonable relations between the element stiffness matrix and mass and its element topology variable. Secondly, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, by starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, so the convergence of the proposed method is not affected. The final topology obtained by the proposed procedure in the first optimization phase can approach the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve the efficiency and make the designed structural topology black/white in both the phase transferring step and the second optimization adjustment phase, and the optimum topology can finally be obtained by the second-phase optimization adjustments. Two examples are presented to show that the topologies obtained by the proposed method have a very good 0/1 design distribution property, and that the computational efficiency is enhanced by reducing the number of elements of the structural finite element model during the two optimization adjustment phases. The examples also show that this method is robust and practicable.
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%. The proposed method, however, predicts the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
Deep learning and texture-based semantic label fusion for brain tumor segmentation
NASA Astrophysics Data System (ADS)
Vidyaratne, L.; Alam, M.; Shboul, Z.; Iftekharuddin, K. M.
2018-02-01
Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture-based hand-crafted method and a deep learning based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating the inherent weaknesses of each approach: the extensive false positives of the texture-based method and the false tumor tissue classification problem of the deep learning method. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Notably, the substantial improvement in brain tumor segmentation performance achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.
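The fusion rule itself is not given in the abstract; as a hedged placeholder, the snippet below fuses two per-pixel/voxel tumor probability maps by weighted averaging and thresholding, which is enough to see how one map can suppress the other's false positives. The weights, threshold and toy maps are assumptions, not the paper's semantic label fusion algorithm.

```python
import numpy as np

def fuse_labels(prob_texture, prob_deep, w_texture=0.4, w_deep=0.6, threshold=0.5):
    """Fuse two tumor probability maps (one from a texture-based classifier,
    one from a deep network) by weighted averaging, then threshold to a
    binary mask.  Weights and threshold are illustrative choices."""
    fused = w_texture * prob_texture + w_deep * prob_deep
    return (fused >= threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    gt = np.zeros((64, 64)); gt[20:40, 25:45] = 1                       # toy ground truth
    p_tex = np.clip(gt + 0.35 * rng.standard_normal(gt.shape), 0, 1)    # many false positives
    p_deep = np.clip(gt - 0.25 * (rng.random(gt.shape) < 0.1), 0, 1)    # a few missed voxels
    fused = fuse_labels(p_tex, p_deep)
    dice = 2 * np.sum(fused * gt) / (fused.sum() + gt.sum())
    print("Dice of fused mask:", round(float(dice), 3))
```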
Li, Xiaolei; Deng, Lei; Chen, Xiaoman; Cheng, Mengfan; Fu, Songnian; Tang, Ming; Liu, Deming
2017-04-17
A novel automatic bias control (ABC) method for an optical in-phase and quadrature (IQ) modulator is proposed and experimentally demonstrated. In the proposed method, two different low-frequency sine wave dither signals are generated and added onto the I and Q bias signals, respectively. Instead of monitoring the power of the harmonics of the dither signal, dither-correlation detection is proposed and used to adjust the bias voltages of the optical IQ modulator. In this way, not only is frequency spectral analysis not required, but directional bias adjustment can also be realized, which reduces the algorithm complexity and increases the convergence rate of the ABC algorithm. The results show that the sensitivity of the proposed ABC method outperforms that of the traditional dither frequency monitoring method. Moreover, the proposed ABC method is shown to be modulation-format-free, and the transmission penalties caused by this method for both 10 Gb/s optical QPSK and 17.9 Gb/s optical 16QAM-OFDM signal transmission are negligible in our experiment.
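To make the dither-correlation idea concrete, the toy simulation below adds a small sine dither to the bias of a simplified Mach-Zehnder transfer curve, correlates the monitored power with the dither to estimate the local slope (sign included, hence directional adjustment), and steps the bias toward the null point. The transfer model, dither amplitude/frequency and loop gain are all illustrative assumptions, not the demonstrated hardware loop.

```python
import numpy as np

def mzm_power(v_bias, v_pi=5.0):
    """Toy Mach-Zehnder transfer curve: optical power vs. bias voltage."""
    return 0.5 * (1.0 + np.cos(np.pi * v_bias / v_pi))

def dither_correlation_step(v_bias, f_dither=1e3, amp=0.05, fs=1e5, t_obs=0.02, gain=10.0):
    """One control iteration: add a small sine dither to the bias, correlate
    the monitored power with the dither (which approximates slope*amp), and
    step the bias against the estimated slope, i.e. toward the null point."""
    t = np.arange(0, t_obs, 1.0 / fs)
    dither = amp * np.sin(2 * np.pi * f_dither * t)
    power = mzm_power(v_bias + dither)
    corr = 2.0 / len(t) * np.sum(power * np.sin(2 * np.pi * f_dither * t))
    return v_bias - gain * corr

if __name__ == "__main__":
    v = 1.7                                  # badly biased starting point
    for _ in range(300):
        v = dither_correlation_step(v)
    print("converged bias ~", round(v, 2), "(null point of the toy curve is at Vpi = 5.0)")
```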
Novel X-ray Communication Based XNAV Augmentation Method Using X-ray Detectors
Song, Shibin; Xu, Luping; Zhang, Hua; Bai, Yuanjie
2015-01-01
The further development of X-ray pulsar-based NAVigation (XNAV) is hindered by its lack of accuracy, so accuracy improvement has become a critical issue for XNAV. In this paper, an XNAV augmentation method which utilizes both pulsar observation and X-ray ranging observation for navigation filtering is proposed to deal with this issue. As a newly emerged concept, X-ray communication (XCOM) shows great potential in space exploration. X-ray ranging, derived from XCOM, could achieve high accuracy in range measurement, which could provide accurate information for XNAV. For the proposed method, the measurement models of pulsar observation and range measurement observation are established, and a Kalman filtering algorithm based on the observations and orbit dynamics is proposed to estimate the position and velocity of a spacecraft. A performance comparison of the proposed method with the traditional pulsar observation method is conducted by numerical experiments. Besides, the parameters that influence the performance of the proposed method, such as the pulsar observation time, the SNR of the ranging signal, etc., are analyzed and evaluated by numerical experiments. PMID:26404295
Adeniyi, D A; Wei, Z; Yang, Y
2018-01-30
A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction for some life-threatening ailments using a chi-square case-based reasoning (χ²CBR) model. The proposed predictive engine is capable of reducing runtime and speeding up the execution process through the use of the critical χ² distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method, which is used for selecting the best features for the proposed χ²CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through an in-house developed PHP program hosted on a XAMP/Apache HTTP server, and the process of data acquisition and case-base development is implemented using the MySQL application. Performance comparison between our system and the NBY, ED-KNN, ANN, SVM, Random Forest and traditional CBR techniques shows that the quality of predictions produced by our system outperforms that of the baseline methods studied. The results of our experiment show that the precision rate and predictive quality of our system are in most cases equal to or greater than 70%. Our results also show that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, faster, accurate and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
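A minimal sketch of the retrieval step in a chi-square case-based reasoner is shown below: the query profile is compared to stored cases with the chi-square distance and the label of the closest case is returned. The toy case base and the use of plain nearest-neighbour retrieval are assumptions; the paper's FIBR feature selection and critical-value pruning are not reproduced.

```python
import numpy as np

def chi_square_distance(x, y, eps=1e-12):
    """Chi-square distance between two nonnegative feature vectors."""
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def cbr_predict(query, case_features, case_labels):
    """Case-based reasoning step: return the risk label of the stored case
    that is closest to the query under the chi-square distance."""
    d = np.array([chi_square_distance(query, c) for c in case_features])
    return case_labels[int(np.argmin(d))], float(d.min())

if __name__ == "__main__":
    # toy case base: rows are normalized symptom/measurement profiles
    cases = np.array([[0.9, 0.1, 0.3, 0.0],
                      [0.1, 0.8, 0.2, 0.7],
                      [0.2, 0.2, 0.9, 0.1]])
    labels = ["low risk", "high risk", "medium risk"]
    query = np.array([0.15, 0.75, 0.25, 0.6])
    label, dist = cbr_predict(query, cases, labels)
    print("predicted:", label, "(chi-square distance %.3f)" % dist)
```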
Radiofrequency pulse design using nonlinear gradient magnetic fields.
Kopanoglu, Emre; Constable, R Todd
2015-09-01
An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest.
Fast Markerless Tracking for Augmented Reality in Planar Environment
NASA Astrophysics Data System (ADS)
Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim
2015-12-01
Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between real and virtual beings. Currently reported methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors as a camera pose predictor. To align the augmentation relative to camera motion, the tracking is done by substituting feature-based camera estimation with a combination of inertial sensors and a complementary filter to provide a more dynamic response. The proposed method managed to track an unknown environment with faster processing time compared to available feature-based approaches. Moreover, the proposed method can sustain its estimation in situations where feature-based tracking loses track. The collaborative sensor tracking performed the task at about 22.97 FPS, up to five times faster than the feature-based tracking method used as comparison. Therefore, the proposed method can be used to track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.
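The inertial part of such a hybrid tracker is typically a complementary filter; a one-axis sketch is given below, where the gyroscope rate is integrated and gently corrected by the accelerometer-derived angle. The blending factor alpha and the simulated sensor models are assumptions, and the full 6-DoF fusion with the vision pipeline is not shown.

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98, angle0=0.0):
    """Classic complementary filter: integrate the gyroscope rate (accurate
    at short time scales) and softly pull the estimate toward the
    accelerometer-derived angle (drift-free but noisy) at every step."""
    angle = angle0
    out = []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    dt, n = 0.01, 2000
    true_angle = 0.5 * np.sin(0.5 * np.arange(n) * dt)                          # rad
    gyro = np.gradient(true_angle, dt) + 0.02 + 0.05 * rng.standard_normal(n)   # biased, noisy rate
    accel = true_angle + 0.2 * rng.standard_normal(n)                           # noisy, unbiased angle
    est = complementary_filter(gyro, accel, dt)
    print("RMS error (rad):", round(float(np.sqrt(np.mean((est - true_angle) ** 2))), 3))
```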
Detect2Rank: Combining Object Detectors Using Learning to Rank.
Karaoglu, Sezer; Yang Liu; Gevers, Theo
2016-01-01
Object detection is an important research area in the field of computer vision. Many detection algorithms have been proposed. However, each object detector relies on specific assumptions of the object appearance and imaging conditions. As a consequence, no algorithm can be considered universal. With the large variety of object detectors, the subsequent question is how to select and combine them. In this paper, we propose a framework to learn how to combine object detectors. The proposed method uses (single) detectors like Deformable Part Models, Color Names and Ensemble of Exemplar-SVMs, and exploits their correlation by high-level contextual features to yield a combined detection list. Experiments on the PASCAL VOC07 and VOC10 data sets show that the proposed method significantly outperforms single object detectors, DPM (8.4%), CN (6.8%) and EES (17.0%) on VOC07 and DPM (6.5%), CN (5.5%) and EES (16.2%) on VOC10. We show with an experiment that there are no constraints on the type of the detector. The proposed method outperforms (2.4%) the state-of-the-art object detector (RCNN) on VOC07 when Regions with Convolutional Neural Network is combined with other detectors used in this paper.
Double regions growing algorithm for automated satellite image mosaicking
NASA Astrophysics Data System (ADS)
Tan, Yihua; Chen, Chen; Tian, Jinwen
2011-12-01
Feathering is the most widely used method in seamless satellite image mosaicking. A simple but effective algorithm - the double regions growing (DRG) algorithm, which utilizes the shape content of images' valid regions - is proposed for generating a robust feathering line before feathering. It works without any human intervention, and experiments on real satellite images show the advantages of the proposed method.
On the evaluation of segmentation editing tools
Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.
2014-01-01
Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063
Palm vein verification using multiple features and locality preserving projections.
Al-Juboori, Ali Mohsin; Bu, Wei; Wu, Xiangqian; Zhao, Qiushi
2014-01-01
Biometrics is defined as identifying people by their physiological characteristics, such as iris pattern, fingerprint, and face, or by some aspects of their behavior, such as voice, signature, and gesture. Considerable attention has been drawn to these issues during the last several decades, and many biometric systems for commercial applications have been successfully developed. Recently, the vein pattern biometric has become increasingly attractive for its uniqueness, stability, and noninvasiveness. A vein pattern is the physical distribution structure of the blood vessels underneath a person's skin. The palm vein pattern is highly ramified and shows a huge number of vessels. The palm vein vessels stay in the same location for the whole life and their pattern is definitely unique. In our work, the matching filter method is proposed for palm vein image enhancement. New palm vein feature extraction methods are proposed: a global feature extracted based on wavelet coefficients and locality preserving projections (WLPP), and a local feature based on local binary pattern variance and locality preserving projections (LBPV_LPP). Finally, a nearest neighbour matching method is proposed to verify the test palm vein images. The experimental results show that the EER of the proposed method is 0.1378%.
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
A literature-driven method to calculate similarities among diseases.
Kim, Hyunjin; Yoon, Youngmi; Ahn, Jaegyoon; Park, Sanghyun
2015-11-01
"Our lives are connected by a thousand invisible threads and along these sympathetic fibers, our actions run as causes and return to us as results". It is Herman Melville's famous quote describing connections among human lives. To paraphrase the Melville's quote, diseases are connected by many functional threads and along these sympathetic fibers, diseases run as causes and return as results. The Melville's quote explains the reason for researching disease-disease similarity and disease network. Measuring similarities between diseases and constructing disease network can play an important role in disease function research and in disease treatment. To estimate disease-disease similarities, we proposed a novel literature-based method. The proposed method extracted disease-gene relations and disease-drug relations from literature and used the frequencies of occurrence of the relations as features to calculate similarities among diseases. We also constructed disease network with top-ranking disease pairs from our method. The proposed method discovered a larger number of answer disease pairs than other comparable methods and showed the lowest p-value. We presume that our method showed good results because of using literature data, using all possible gene symbols and drug names for features of a disease, and determining feature values of diseases with the frequencies of co-occurrence of two entities. The disease-disease similarities from the proposed method can be used in computational biology researches which use similarities among diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines and decision trees are commonly used. These methods can detect the failure of rolling bearings effectively, but to achieve better detection results they often require a large number of training samples. Based on the above, a novel clustering method is proposed in this paper. The novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples. Meanwhile, the diagnosis accuracy remains relatively high even for massive samples.
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on the semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, some dense stereo matching methods, and the advanced edge-based method respectively. Experiments show that our method can provide superior performance on the above comparison. PMID:29614028
NASA Astrophysics Data System (ADS)
Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun
2018-07-01
A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic on the expansion coefficients. A brief theoretical error analysis of the proposed method is provided, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM keeps good accuracy in vibration prediction of the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance to other transient dynamic mechanical systems with uncertainties.
Multi-label spacecraft electrical signal classification method based on DBN and random forest
Li, Ke; Yu, Nan; Li, Pengfei; Song, Shimin; Wu, Yalei; Li, Yang; Liu, Meng
2017-01-01
In spacecraft electrical signal characteristic data, there exists a large amount of data with high-dimensional features, a high degree of computational complexity, and a low rate of identification, which causes great difficulty in fault diagnosis of spacecraft electronic load systems. This paper proposes a feature extraction method based on deep belief networks (DBN) and a classification method based on the random forest (RF) algorithm. The proposed algorithm mainly employs a multi-layer neural network to reduce the dimension of the original data, after which classification is applied. Firstly, wavelet denoising is used to pre-process the data. Secondly, the deep belief network is used to reduce the feature dimension and improve the classification rate for the electrical characteristics data. Finally, the random forest algorithm is used to classify the data and is compared with other algorithms. The experimental results show that, compared with other algorithms, the proposed method shows excellent performance in terms of accuracy, computational efficiency, and stability in addressing spacecraft electrical signal data. PMID:28486479
Trainable multiscript orientation detection
NASA Astrophysics Data System (ADS)
Van Beusekom, Joost; Rangoni, Yves; Breuel, Thomas M.
2010-01-01
Detecting the correct orientation of document images is an important step in large-scale digitization processes, as most subsequent document analysis and optical character recognition methods assume an upright position of the document page. Many methods have been proposed to solve the problem, most of which are based on ascender-to-descender ratio computation. Unfortunately, this cannot be used for scripts that have neither ascenders nor descenders. Therefore, we present a trainable method using character similarity to compute the correct orientation. A connected-component-based distance measure is computed to compare the characters of the document image to characters whose orientation is known. This allows the orientation for which the distance is lowest to be detected as the correct orientation. Training is easily achieved by exchanging the reference characters for characters of the script to be analyzed. Evaluation of the proposed approach showed an accuracy above 99% for Latin and Japanese script from the public UW-III and UW-II datasets. An accuracy of 98.9% was obtained for Fraktur on a non-public dataset. Comparison of the proposed method to two methods using ascender/descender ratio based orientation detection shows a significant improvement.
Hair and bare skin discrimination for laser-assisted hair removal systems.
Cayir, Sercan; Yetik, Imam Samil
2017-07-01
Laser-assisted hair removal devices aim to remove body hair permanently. In most cases, these devices irradiate the whole area of the skin with a homogeneous power density. Thus, a significant portion of the skin, where hair is not present, is burnt unnecessarily, causing health risks. Therefore, methods that can distinguish hair regions automatically would be very helpful in avoiding these unnecessary applications of laser. This study proposes a new system of algorithms to detect hair regions with the help of a digital camera. Unlike the limited number of previous studies, our methods are very fast, allowing for real-time application. The proposed methods are based on certain features derived from histograms of hair and skin regions. We compare our algorithm with competing methods in terms of localization performance and computation time and show that a much faster real-time accurate localization of hair regions is possible with the proposed method. Our results show that the algorithm we have developed is extremely fast (around 45 milliseconds), allowing for real-time application with high-accuracy hair localization (96.48%).
Graphic report of the results from propensity score method analyses.
Shrier, Ian; Pang, Menglan; Platt, Robert W
2017-08-01
To increase transparency in studies reporting propensity scores by using graphical methods that clearly illustrate (1) the number of participant exclusions that occur as a consequence of the analytic strategy and (2) whether treatment effects are constant or heterogeneous across propensity scores. We applied graphical methods to a real-world pharmacoepidemiologic study that evaluated the effect of initiating statin medication on 1-year all-cause mortality post-myocardial infarction. We propose graphical methods to show the consequences of trimming and matching on the exclusion of participants from the analysis. We also propose the use of meta-analytical forest plots to show the magnitude of effect heterogeneity. A density plot with vertical lines demonstrated the proportion of subjects excluded because of trimming. A frequency plot with horizontal lines demonstrated the proportion of subjects excluded because of matching. An augmented forest plot illustrated the amount of effect heterogeneity present in the data. Our proposed techniques present additional and useful information that helps readers understand the sample that is analyzed with propensity score methods and whether effect heterogeneity is present. Copyright © 2017 Elsevier Inc. All rights reserved.
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to the subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as instability of the feature vector, without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently and in depth. Based on this separation concept, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
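A minimal sketch of the Gabor-filtering-plus-quantization step, assuming OpenCV and NumPy: the filter parameters below are illustrative, not the paper's optimized values, and the fragility handling is reduced to a simple response-magnitude mask.

```python
# Gabor filtering of a normalized iris texture, quantized into a binary code.
import cv2
import numpy as np

def iris_code(normalized_iris, magnitude_thresh=1e-3):
    # normalized_iris: float32 image of the unwrapped (normalized) iris texture
    kern_real = cv2.getGaborKernel((21, 21), sigma=4.0, theta=0.0,
                                   lambd=10.0, gamma=0.5, psi=0.0)
    kern_imag = cv2.getGaborKernel((21, 21), sigma=4.0, theta=0.0,
                                   lambd=10.0, gamma=0.5, psi=np.pi / 2)
    re = cv2.filter2D(normalized_iris, cv2.CV_32F, kern_real)
    im = cv2.filter2D(normalized_iris, cv2.CV_32F, kern_imag)
    bits = np.stack([re > 0, im > 0], axis=-1)        # phase quantization
    stable = np.hypot(re, im) > magnitude_thresh      # crude stand-in for a fragility mask
    return bits, stable

img = np.random.rand(64, 256).astype(np.float32)      # placeholder texture
bits, mask = iris_code(img)
print(bits.shape, mask.mean())
```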
Morphology filter bank for extracting nodular and linear patterns in medical images.
Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki
2017-04-01
Using image processing to extract nodular or linear shadows is a key technique of computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level. The synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. In a quantitative evaluation using a synthesized image, our proposed method shows better performance than a conventional method based on the Hessian matrix, which is often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied to the following: (1) microcalcifications of various sizes in mammograms can be extracted, (2) blood vessels of various sizes in retinal fundus images can be extracted, and (3) thoracic CT images can be reconstructed while removing normal vessels. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
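As a simplified, single-scale stand-in for the morphology filter bank (not the full multiresolution analysis/synthesis banks), a white top-hat with a disk emphasizes small nodular bright patterns, while openings with line structuring elements retain linear patterns; file names are placeholders.

```python
# Single-scale morphological separation of nodular and linear bright patterns.
import cv2
import numpy as np

def nodular_and_linear(img, nodule_size=7, line_len=15):
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (nodule_size, nodule_size))
    nodular = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, disk)

    # Linear patterns: maximum opening over line orientations (only two shown here;
    # more angles would be used in practice).
    horiz = cv2.getStructuringElement(cv2.MORPH_RECT, (line_len, 1))
    vert = cv2.getStructuringElement(cv2.MORPH_RECT, (1, line_len))
    linear = np.maximum(cv2.morphologyEx(img, cv2.MORPH_OPEN, horiz),
                        cv2.morphologyEx(img, cv2.MORPH_OPEN, vert))
    return nodular, linear

img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
if img is not None:
    nod, lin = nodular_and_linear(img)
```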
NASA Astrophysics Data System (ADS)
Seo, Junyeong; Sung, Youngchul
2018-06-01
In this paper, an efficient transmit beam design and user scheduling method is proposed for the multi-user (MU) multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) downlink, based on Pareto-optimality. The proposed beam design and user scheduling method groups simultaneously served users into multiple clusters with two users per cluster for practicality, and then applies spatial zero-forcing (ZF) across clusters to control inter-cluster interference (ICI) and Pareto-optimal beam design with successive interference cancellation (SIC) to the two users in each cluster to remove interference to strong users and improve the signal-to-interference-plus-noise ratios (SINRs) of the interference-experiencing weak users. The proposed method has the flexibility to control the rates of strong and weak users, and numerical results show that the proposed method yields good performance.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of the AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct the AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct the AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
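As a quick illustration of why such a correction matters (this is not the paper's correction method), the following simulation, assuming only NumPy and scikit-learn, shows how additive measurement error attenuates the naive AUC toward 0.5.

```python
# Attenuation of the naive AUC under additive measurement error (toy simulation).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)                              # case/control labels
true_marker = rng.normal(loc=y * 1.0, scale=1.0)       # true biomarker, shifted in cases
noisy_marker = true_marker + rng.normal(scale=1.0, size=n)   # added measurement error

print("AUC with true biomarker :", roc_auc_score(y, true_marker))
print("AUC with noisy biomarker:", roc_auc_score(y, noisy_marker))
```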
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chin-Yao; Zhang, Wei
This paper presents a new distributed control framework to coordinate inverter-interfaced distributed energy resources (DERs) in island microgrids. We show that under bounded load uncertainties, the proposed control method can steer the microgrid to a desired steady state with synchronized inverter frequency across the network and proportional sharing of both active and reactive powers among the inverters. We also show that such convergence can be achieved while respecting constraints on voltage magnitude and branch angle differences. The controller is robust under various contingency scenarios, including loss of communication links and failures of DERs. The proposed controller is applicable to lossy mesh microgrids with heterogeneous R/X distribution lines and reasonable parameter variations. Simulations based on various microgrid operation scenarios are also provided to show the effectiveness of the proposed control method.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show superior performance over existing methods. PMID:25484481
IMPROVEMENT OF EFFICIENCY OF CUT AND OVERLAY ASPHALT WORKS BY USING MOBILE MAPPING SYSTEM
NASA Astrophysics Data System (ADS)
Yabuki, Nobuyoshi; Nakaniwa, Kazuhide; Kidera, Hiroki; Nishi, Daisuke
When cut-and-overlay asphalt work is done to improve road pavement, a conventional road-surface elevation survey with levels often requires traffic regulation and takes much time and effort. Recently, new surveying methods using non-prismatic total stations or fixed 3D laser scanners have been proposed in industry, but they have not been widely adopted due to their high cost. In this research, we propose a new method using Mobile Mapping Systems (MMS) in order to increase efficiency and reduce cost. In this method, small white marks are painted at 10 m intervals along the road to identify cross sections, and the elevations of the white marks are corrected with accurate survey data. To verify the proposed method, we executed an experiment and compared this method with the conventional level survey method and the fixed 3D laser scanning method on a road at Osaka University. The result showed that the proposed method had accuracy similar to the other methods and was more efficient.
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
Comparison of different screening methods for blood pressure disorders in children and adolescents.
Mourato, Felipe Alves; Lima Filho, José Luiz; Mattos, Sandra da Silva
2015-01-01
To compare different methods of screening for blood pressure disorders in children and adolescents. A database with 17,083 medical records of patients from a pediatric cardiology clinic was used. After applying the inclusion and exclusion criteria, 5,650 records were selected. These were divided into two age groups: between 5 and 13 years and between 13 and 18 years. The blood pressure measurement was classified as normal, pre-hypertensive, or hypertensive, consistent with recent guidelines and the selected screening methods. Sensitivity, specificity, and accuracy were then calculated according to gender and age range. The formulas proposed by Somu and the table by Ardissino showed low sensitivity in identifying pre-hypertension in all age groups, whereas the table proposed by Kaelber showed the best results. The ratio between blood pressure and height showed low specificity in the younger age group, but showed good performance in adolescents. Screening tools used for the assessment of blood pressure disorders in children and adolescents may be useful to decrease the current rate of underdiagnosis of this condition. The table proposed by Kaelber showed the best results; however, the ratio between BP and height demonstrated specific advantages, as it does not require tables. Copyright © 2014 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
NASA Astrophysics Data System (ADS)
Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi
When surveillance cameras are used, there are cases in which privacy protection must be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method consists of ROI coding of JPEG2000 and a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.
Multi-views Fusion CNN for Left Ventricular Volumes Estimation on Cardiac MR Images.
Luo, Gongning; Dong, Suyu; Wang, Kuanquan; Zuo, Wangmeng; Cao, Shaodong; Zhang, Henggui
2017-10-13
Left ventricular (LV) volume estimation is a critical procedure for cardiac disease diagnosis. The objective of this paper is to address the direct LV volume prediction task. In this paper, we propose a direct volume prediction method based on end-to-end deep convolutional neural networks (CNN). We study the end-to-end LV volume prediction method in terms of data preprocessing, network structure, and multi-view fusion strategy. The main contributions of this paper are the following. First, we propose a new data preprocessing method for cardiac magnetic resonance (CMR). Second, we propose a new network structure for end-to-end LV volume estimation. Third, we explore the representational capacity of different slices and propose a fusion strategy to improve the prediction accuracy. The evaluation results show that the proposed method outperforms other state-of-the-art LV volume estimation methods on the openly accessible benchmark datasets. The clinical indexes derived from the predicted volumes agree well with the ground truth (EDV: R=0.974, RMSE=9.6ml; ESV: R=0.976, RMSE=7.1ml; EF: R=0.828, RMSE=4.71%). Experimental results prove that the proposed method has high accuracy and efficiency on the LV volume prediction task. The proposed method not only has application potential for cardiac disease screening on large-scale CMR data, but can also be extended to other medical image research fields.
A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.
Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He
2018-01-01
Financial distress prediction is an important and challenging research topic in the financial field. Many methods have been proposed for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods predict better than traditional statistical methods. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies are nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposes a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model differs from previous models, which lack the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate rules and mathematical formulas of financial distress to provide references for investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.
A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd
2017-08-01
The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. In addition, computational results show that our proposed method outperforms other existing CG methods.
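For reference, a generic Polak-Ribiere-Polyak CG loop can be sketched with SciPy's line_search, which enforces the strong Wolfe conditions; the classical PRP+ beta is used below rather than the paper's new family of formulas, which the abstract does not give.

```python
# Generic PRP+ conjugate gradient loop with a strong-Wolfe line search.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def prp_cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:                 # line search failed; restart with steepest descent
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ restart rule
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example on the Rosenbrock test function
print(prp_cg(rosen, rosen_der, np.array([-1.2, 1.0])))
```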
Real-Time GNSS-Based Attitude Determination in the Measurement Domain
Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun
2017-01-01
A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution is also increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by up to 18%. The kinematic results show that the proposed method can obtain an optimal balance between accuracy and reliability. PMID:28165434
Prostate multimodality image registration based on B-splines and quadrature local energy.
Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice
2012-05-01
Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS for visualizing early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early-stage malignancy, and therefore, a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from texture images obtained from the amplitude responses of directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for the prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in whom homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm, respectively. Our method shows a statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows a 1.18-fold improvement over thin-plate spline registration, which has an average TRE of 3.11 ± 2.18 mm. The mean DSC and mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm, respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. The low TRE values of the proposed registration method add to the feasibility of its use during TRUS-guided biopsy.
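A minimal sketch of intensity-based B-spline registration driven by a (Mattes) mutual-information metric in SimpleITK is shown below; unlike the proposed method, the metric here is computed on raw intensities rather than on quadrature-filter texture images, and the file names are placeholders.

```python
# B-spline deformable registration with a mutual-information metric (SimpleITK).
import SimpleITK as sitk

fixed = sitk.ReadImage("trus_slice.nii", sitk.sitkFloat32)   # placeholder file
moving = sitk.ReadImage("mri_slice.nii", sitk.sitkFloat32)   # placeholder file

mesh_size = [8] * fixed.GetDimension()
tx = sitk.BSplineTransformInitializer(fixed, mesh_size)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(tx, inPlace=True)

out_tx = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0, sitk.sitkFloat32)
```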
Kang, Jinbum; Lee, Jae Young; Yoo, Yangmo
2016-06-01
Effective speckle reduction in ultrasound B-mode imaging is important for enhancing the image quality and improving the accuracy in image analysis and interpretation. In this paper, a new feature-enhanced speckle reduction (FESR) method based on multiscale analysis and feature enhancement filtering is proposed for ultrasound B-mode imaging. In FESR, clinical features (e.g., boundaries and borders of lesions) are selectively emphasized by edge, coherence, and contrast enhancement filtering from fine to coarse scales while simultaneously suppressing speckle development via robust diffusion filtering. In the simulation study, the proposed FESR method showed statistically significant improvements in edge preservation, mean structure similarity, speckle signal-to-noise ratio, and contrast-to-noise ratio (CNR) compared with other speckle reduction methods, e.g., oriented speckle reducing anisotropic diffusion (OSRAD), nonlinear multiscale wavelet diffusion (NMWD), the Laplacian pyramid-based nonlinear diffusion and shock filter (LPNDSF), and the Bayesian nonlocal means filter (OBNLM). Similarly, the FESR method outperformed the OSRAD, NMWD, LPNDSF, and OBNLM methods in terms of CNR, i.e., 10.70 ± 0.06 versus 9.00 ± 0.06, 9.78 ± 0.06, 8.67 ± 0.04, and 9.22 ± 0.06 in the phantom study, respectively. Reconstructed B-mode images that were developed using the five speckle reduction methods were reviewed by three radiologists for evaluation based on each radiologist's diagnostic preferences. All three radiologists showed a significant preference for the abdominal liver images obtained using the FESR methods in terms of conspicuity, margin sharpness, artificiality, and contrast, p<0.0001. For the kidney and thyroid images, the FESR method showed similar improvement over other methods. However, the FESR method did not show statistically significant improvement compared with the OBNLM method in margin sharpness for the kidney and thyroid images. These results demonstrate that the proposed FESR method can improve the image quality of ultrasound B-mode imaging by enhancing the visualization of lesion features while effectively suppressing speckle noise.
CuBe: parametric modeling of 3D foveal shape using cubic Bézier
Yadav, Sunil Kumar; Motamedi, Seyedamirhosein; Oberwahrenbrock, Timm; Oertel, Frederike Cosima; Polthier, Konrad; Paul, Friedemann; Kadas, Ella Maria; Brandt, Alexander U.
2017-01-01
Optical coherence tomography (OCT) allows three-dimensional (3D) imaging of the retina, and is commonly used for assessing pathological changes of the fovea and macula in many diseases. Many neuroinflammatory conditions are known to cause modifications to the foveal shape. In this paper, we propose a method for parametric modeling of the foveal shape. Our method exploits invariant features of the macula from OCT data and applies a cubic Bézier polynomial along with a least-squares optimization to produce a best-fit parametric model of the fovea. Additionally, we provide several parameters of the foveal shape based on the proposed 3D parametric modeling. Our quantitative and visual results show that the proposed model is not only able to reconstruct important features of the foveal shape, but also produces less error compared to state-of-the-art methods. Finally, we apply the model in a comparison of healthy control eyes and eyes from patients with neuroinflammatory central nervous system disorders and optic neuritis, and show that several derived model parameters differ significantly between the two groups. PMID:28966857
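The core fitting step can be written compactly: given sampled points of a foveal profile and a fixed parameterization, the four control points of a cubic Bézier follow from a linear least-squares problem over the Bernstein basis. The data below are a toy profile, not OCT measurements.

```python
# Least-squares fit of a cubic Bezier curve to sampled 2D points.
import numpy as np

def fit_cubic_bezier(points):
    pts = np.asarray(points, dtype=float)               # (n, 2) sampled profile
    t = np.linspace(0.0, 1.0, len(pts))                 # fixed parameterization
    # Bernstein basis matrix for a cubic Bezier
    B = np.stack([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3], axis=1)
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)      # (4, 2) control points
    return ctrl

# Toy foveal-like profile: a shallow dip
x = np.linspace(-1, 1, 50)
z = 0.2 * x ** 2
print(fit_cubic_bezier(np.column_stack([x, z])))
```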
Adaptive target binarization method based on a dual-camera system
NASA Astrophysics Data System (ADS)
Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing
2018-01-01
An adaptive target binarization method based on a dual-camera system that contains two dynamic vision sensors was proposed. First, a preprocessing procedure of denoising is introduced to remove the noise events generated by the sensors. Then, the complete edge of the target is retrieved and represented by events based on an event mosaicking method. Third, the region of the target is confirmed by an event-to-event method. Finally, a postprocessing procedure of image open and close operations of morphology methods is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots and successfully compared with other well-known binarization methods. The experimental results, which are based on visual and misclassification error criteria, show that the proposed method performs well and has better robustness on the binarization of degraded images.
Multiview road sign detection via self-adaptive color model and shape context matching
NASA Astrophysics Data System (ADS)
Liu, Chunsheng; Chang, Faliang; Liu, Chengyun
2016-09-01
The multiview appearance of road signs in uncontrolled environments makes the detection of road signs a challenging problem in computer vision. We propose a road sign detection method to detect multiview road signs. This method is based on several algorithms, including the classical cascaded detector, a self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect frontal road signs in video sequences and obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model and the normalized red channel, which greatly enhances the contrast between red signs and the background. The proposed shape context matching method can match shapes under heavy noise and is used to detect road signs seen from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method reaches a higher detection rate for signs viewed from different directions.
Multivariate Time Series Decomposition into Oscillation Components.
Matsuda, Takeru; Komaki, Fumiyasu
2017-08-01
Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop Gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations that arise from some variational problems. As case studies, we solve four ordinary differential equations and show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we show that the squared residual error of the approximate solutions lies in the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
A novel load balanced energy conservation approach in WSN using biogeography based optimization
NASA Astrophysics Data System (ADS)
Kaushik, Ajay; Indu, S.; Gupta, Daya
2017-09-01
Clustering sensor nodes is an effective technique to reduce the energy consumption of the sensor nodes and maximize the lifetime of wireless sensor networks (WSNs). Balancing the load of the cluster heads is an important factor in the long-run operation of WSNs. In this paper we propose a novel load balancing approach using biogeography based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods. The proposed method shows better performance than all previous works on energy conservation in WSNs.
All-fiber magnetic field sensor based on tapered thin-core fiber and magnetic fluid.
Zhang, Junying; Qiao, Xueguang; Yang, Hangzhou; Wang, Ruohui; Rong, Qiangzhou; Lim, Kok-Sing; Ahmad, Harith
2017-01-10
A method for the measurement of a magnetic field by combining a tapered thin-core fiber (TTCF) and magnetic fluid is proposed and experimentally demonstrated. The modal interference effect is caused by the core mode and the excited eigenmodes in the TTCF cladding. The transmission spectra of the proposed sensor are measured and theoretically analyzed at different magnetic field strengths. The results show that the magnetic sensitivity reaches up to -0.1039 dB/Oe in the range of 40-160 Oe. The proposed method possesses high sensitivity and low cost compared with other, more expensive methods.
Random phase encoding for optical security
NASA Astrophysics Data System (ADS)
Wang, RuiKang K.; Watson, Ian A.; Chatwin, Christopher R.
1996-09-01
A new optical encoding method for security applications is proposed. The encoded image (encrypted into the security products) is merely a random phase image statistically and randomly generated by a random number generator using a computer, which contains no information from the reference pattern (stored for verification) or the frequency plane filter (a phase-only function for decoding). The phase function in the frequency plane is obtained using a modified phase retrieval algorithm. The proposed method uses two phase-only functions (images) at both the input and frequency planes of the optical processor leading to maximum optical efficiency. Computer simulation shows that the proposed method is robust for optical security applications.
Mean-Reverting Portfolio With Budget Constraint
NASA Astrophysics Data System (ADS)
Zhao, Ziping; Palomar, Daniel P.
2018-05-01
This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.
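A hedged sketch of one common mean-reversion proxy (not the paper's exact criterion, which also treats portfolio variance and the budget constraint explicitly): choose weights minimizing the lag-1 autocorrelation of the portfolio series, which reduces to a generalized eigenvalue problem; the price data below are synthetic.

```python
# Mean-reverting weights via a lag-1 autocorrelation criterion (generalized eigenproblem).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)).cumsum(axis=0)      # toy price series for 5 assets

Xc = X - X.mean(axis=0)
gamma0 = Xc.T @ Xc / len(Xc)                       # covariance
gamma1 = Xc[:-1].T @ Xc[1:] / (len(Xc) - 1)        # lag-1 autocovariance
M = 0.5 * (gamma1 + gamma1.T)                      # symmetrized

vals, vecs = eigh(M, gamma0)                       # M w = lambda * gamma0 * w
w = vecs[:, 0]                                     # smallest autocorrelation -> most mean-reverting
w = w / np.abs(w).sum()                            # normalize to a unit (L1) budget
print("weights:", w)
```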
Jia, Feng; Lei, Yaguo; Shan, Hongkai; Lin, Jing
2015-01-01
The early fault characteristics of rolling element bearings carried by vibration signals are quite weak because the signals are generally masked by heavy background noise. To extract the weak fault characteristics of bearings from the signals, an improved spectral kurtosis (SK) method is proposed based on maximum correlated kurtosis deconvolution (MCKD). The proposed method combines the ability of MCKD in indicating the periodic fault transients and the ability of SK in locating these transients in the frequency domain. A simulation signal overwhelmed by heavy noise is used to demonstrate the effectiveness of the proposed method. The results show that MCKD is beneficial to clarify the periodic impulse components of the bearing signals, and the method is able to detect the resonant frequency band of the signal and extract its fault characteristic frequency. Through analyzing actual vibration signals collected from wind turbines and hot strip rolling mills, we confirm that by using the proposed method, it is possible to extract fault characteristics and diagnose early faults of rolling element bearings. Based on the comparisons with the SK method, it is verified that the proposed method is more suitable to diagnose early faults of rolling element bearings. PMID:26610501
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method can effectively avoid the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with the conventional modeling method shows that it helps to improve model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy can be improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.
Sanada, Akira; Tanaka, Nobuo
2012-08-01
This study deals with the feedforward active control of sound transmission through a simply supported rectangular panel using vibration actuators. The control effect largely depends on the excitation method, including the number and locations of actuators. In order to obtain a large control effect at low frequencies over a wide frequency range, an active transmission control method based on single structural mode actuation is proposed. Then, with the goal of examining the feasibility of the proposed method, the (1, 3) mode is selected as the target mode and a modal actuation method in combination with six point-force actuators is considered. Assuming that single-input single-output feedforward control is used, sound transmission in the case minimizing the transmitted sound power is calculated for several actuation methods. Simulation results showed that the (1, 3) modal actuation is globally effective at reducing the sound transmission by more than 10 dB in the low-frequency range for both normal and oblique incidence. Finally, experimental results also showed that a large reduction could be achieved in the low-frequency range, which proves the validity and feasibility of the proposed method.
A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space
Zheng, Wei; Zhang, Xiaoya; Lu, Qi
2015-01-01
This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network. Dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and is displayed visually in real time, after which experiments are conducted with the use of an ultrasonic omnidirectional sensor device for structural deformation monitoring. The proposed method is also compared with some typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones. PMID:26011618
CERES: A new cerebellum lobule segmentation method.
Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V
2017-02-15
The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with related recent state-of-the-art methods showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes). Copyright © 2016 Elsevier Inc. All rights reserved.
Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L
2016-08-01
Improving the execution time and the numerical complexity of the well-known kurtosis-based maximization method, RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation using the nonlinear conjugate gradient method is also investigated. Regarding the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method, for which a better convergence speed along a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.
Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget
NASA Astrophysics Data System (ADS)
Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong
To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive adjustment method of processing quality for multiple image stream tasks running with widely varying execution times. This adjustment method completes the worst-case executions of the tasks with a given budget of energy, and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for the tasks being executable with arbitrary processing quality, and near maximum value for the tasks being executable with a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves larger reward values, by up to 57%, than the previous method.
A VIKOR Technique with Applications Based on DEMATEL and ANP
NASA Astrophysics Data System (ADS)
Ou Yang, Yu-Ping; Shieh, How-Ming; Tzeng, Gwo-Hshiung
In multiple criteria decision making (MCDM) methods, the compromise ranking method (named VIKOR) was introduced as one applicable technique to implement within MCDM. It was developed for multicriteria optimization of complex systems. However, few papers discuss conflicting (competing) criteria with dependence and feedback in the compromise solution method. Therefore, this study proposes and provides applications for a novel model using the VIKOR technique based on DEMATEL and the ANP to solve the problem of conflicting criteria with dependence and feedback. In addition, this research also uses DEMATEL to normalize the unweighted supermatrix of the ANP to suit the real world. An example is also presented to illustrate the proposed method with applications thereof. The results show the proposed method is suitable and effective in real-world applications.
Rolling bearing fault diagnosis based on information fusion using Dempster-Shafer evidence theory
NASA Astrophysics Data System (ADS)
Pei, Di; Yue, Jianhai; Jiao, Jing
2017-10-01
This paper presents a fault diagnosis method for rolling bearings based on information fusion. Acceleration sensors are arranged at different positions to obtain bearing vibration data as diagnostic evidence. The Dempster-Shafer (D-S) evidence theory is used to fuse the multi-sensor data to improve diagnostic accuracy. The efficiency of the proposed method is demonstrated on a high-speed train transmission test bench. The experimental results show that the proposed method improves rolling bearing fault diagnosis accuracy compared with traditional signal analysis methods.
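Dempster's rule of combination, the core fusion step referred to above, can be implemented in a few lines; the frame of discernment and mass values below are illustrative, not measured sensor evidence.

```python
# Dempster's rule of combination for two basic probability assignments.
def combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict                      # normalization by the non-conflicting mass
    return {s: v / k for s, v in combined.items()}

# Frame of discernment: {normal, inner-race fault, outer-race fault} (hypothetical)
m_sensor1 = {frozenset({"inner"}): 0.6, frozenset({"inner", "outer"}): 0.3,
             frozenset({"normal", "inner", "outer"}): 0.1}
m_sensor2 = {frozenset({"inner"}): 0.5, frozenset({"outer"}): 0.2,
             frozenset({"normal", "inner", "outer"}): 0.3}
print(combine(m_sensor1, m_sensor2))
```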
Efficient discovery of risk patterns in medical data.
Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul
2009-01-01
This paper studies the problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define the optimal risk pattern set to exclude superfluous patterns, i.e. complicated patterns with lower relative risk than their corresponding simpler-form patterns. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm. We propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining approaches, on benchmark data sets and applied to a real-world application. The proposed method discovers more and better-quality risk patterns than a decision tree approach. The decision tree method is not designed for such applications and is inadequate for pattern exploration. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does, and it is more efficient than an association rule mining method. A real-world case study shows that the method reveals some interesting risk patterns to medical practitioners. The proposed method is an efficient approach to explore risk patterns. It quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory studies on large medical data to generate and refine hypotheses, and for designing medical surveillance systems.
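The relative risk metric that defines a risk pattern is simple to compute for a single candidate pattern; the 2x2 counts below are hypothetical.

```python
# Relative risk of an adverse outcome for records matching a candidate pattern
# versus the rest of the cohort.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# e.g. pattern "age>=65 AND smoker": 30 of 200 matching records have the outcome,
# versus 45 of 1800 non-matching records.
print(relative_risk(30, 200, 45, 1800))   # RR = 6.0
```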
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. However, block-based multi-focus image fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. In addition, multi-focus image fusion experiments are carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing the blocking artifacts of the fused image. Our method can also effectively preserve the undistorted-edge details in the focused regions of the source images.
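For context, a fixed-block-size baseline (without the paper's LUE-SSIM/PSO block-size adaptation) can be sketched as follows: each block is taken from whichever source image has the higher focus measure, here the variance of the Laplacian; the file names are placeholders.

```python
# Fixed-block-size multi-focus fusion baseline using variance of the Laplacian.
import cv2
import numpy as np

def fuse_blocks(img_a, img_b, block=32):
    fused = img_a.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            sharp_a = cv2.Laplacian(a, cv2.CV_64F).var()
            sharp_b = cv2.Laplacian(b, cv2.CV_64F).var()
            if sharp_b > sharp_a:
                fused[y:y + block, x:x + block] = b
    return fused

a = cv2.imread("focus_near.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
b = cv2.imread("focus_far.png", cv2.IMREAD_GRAYSCALE)
if a is not None and b is not None:
    cv2.imwrite("fused.png", fuse_blocks(a, b))
```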
Afzali, Maryam; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid
2015-09-30
Diffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are of low resolution. This paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, each cluster is modeled by a tensor, so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately. The method is applied to synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate the capability of the method to properly increase the spatial resolution of the data in the ODF field. The real dataset shows that the method is capable of reliably identifying differences between temporal lobe epilepsy (TLE) patients and normal subjects. The method is compared to existing methods; comparison studies show that the proposed method generates smaller angular errors than the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors. The proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve evaluation of white matter fibers in the brain. Copyright © 2015 Elsevier B.V. All rights reserved.
Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations
NASA Astrophysics Data System (ADS)
Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati
2016-10-01
The Newton method has some shortcomings, including the computation of the Jacobian matrix, which may be difficult or even impossible, and the need to solve the Newton system at every iteration. A common setback with some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse Hk of the PSB update is modified so that efficiency is improved while memory storage remains low, which is the main aim of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
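For reference, the classical direct PSB update (the paper works with a modified inverse approximation Hk, which is not reproduced here) can be written and checked against the secant condition in a few lines.

```python
# Classical PSB update of a Jacobian approximation B, given step s and yield y = F(x+s) - F(x).
import numpy as np

def psb_update(B, s, y):
    r = y - B @ s                       # residual of the secant condition
    ss = s @ s
    return B + (np.outer(r, s) + np.outer(s, r)) / ss - (r @ s) * np.outer(s, s) / ss**2

# Tiny check: the updated matrix satisfies the secant equation B_new @ s = y.
B = np.eye(3)
s = np.array([1.0, 0.5, -0.2])
y = np.array([0.8, 0.7, 0.1])
B_new = psb_update(B, s, y)
print(np.allclose(B_new @ s, y))        # True
```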
A Robust Cooperated Control Method with Reinforcement Learning and Adaptive H∞ Control
NASA Astrophysics Data System (ADS)
Obayashi, Masanao; Uchiyama, Shogo; Kuremoto, Takashi; Kobayashi, Kunikazu
This study proposes a robust cooperated control method combining reinforcement learning with robust control. A remarkable characteristic of reinforcement learning is that it does not require a model formula; however, it does not guarantee the stability of the system. On the other hand, a robust control system guarantees stability and robustness, but it requires a model formula. We employ both the actor-critic method, a kind of reinforcement learning that controls continuous-valued actions with a minimal amount of computation, and traditional robust control, that is, H∞ control. The proposed method was compared with the conventional control method, that is, the actor-critic method alone, through computer simulation of controlling the angle and position of a crane system, and the simulation results showed the effectiveness of the proposed method.
Phase unwrapping using region-based markov random field model.
Dong, Ying; Ji, Jim
2010-01-01
Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) method and the ZπM method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.
2010-02-28
Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and in a timely manner to the ringdown data, so that the mode estimation results can be obtained reliably and promptly. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to be able to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
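A batch (non-recursive) Prony analysis can be sketched as follows: fit a linear-prediction model to the ringdown samples, take the roots of its characteristic polynomial to obtain modal frequency and damping, then solve a Vandermonde least-squares problem for the amplitudes. The test signal is synthetic, not PMU data.

```python
# Batch Prony analysis of a synthetic ringdown signal.
import numpy as np

def prony(x, order, dt):
    n = len(x)
    # Linear prediction: x[k] = -(a1*x[k-1] + ... + a_p*x[k-p])
    A = np.column_stack([x[order - 1 - i: n - 1 - i] for i in range(order)])
    a = np.linalg.lstsq(A, -x[order:], rcond=None)[0]
    roots = np.roots(np.r_[1.0, a])
    s = np.log(roots) / dt                        # continuous-time poles
    # Complex amplitudes from a Vandermonde least-squares fit
    V = np.vander(roots, N=n, increasing=True).T
    amp = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return s, amp

# Toy ringdown: 0.8 Hz mode with -0.1 1/s damping plus a little noise
dt = 0.05
t = np.arange(0, 10, dt)
x = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.8 * t) + 0.01 * np.random.randn(len(t))
poles, amps = prony(x, order=6, dt=dt)
dominant = poles[np.argmax(np.abs(amps))]
print("damping (1/s):", dominant.real, "frequency (Hz):", abs(dominant.imag) / (2 * np.pi))
```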
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
RF Pulse Design using Nonlinear Gradient Magnetic Fields
Kopanoglu, Emre; Constable, R. Todd
2014-01-01
Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian-coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full-algorithm, a computationally-cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (Matching-Pursuit and Conjugate Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286
Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.
Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin
2017-11-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance image (fMRI) acquisitions provide better insight into brain dynamics. Some artefacts due to simultaneous acquisition pose a threat to the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data of 11 participants. From the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. On the other hand, real data were recorded with two conditions, i.e. resting state (eyes closed dataset) and task influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely, average artefact subtraction (AAS), optimal basis set (OBS) and combined independent component analysis and principal component analysis (ICA-PCA), the statistical analyses of the results showed that the proposed method has better performance, and the differences were significant for all quantitative parameters except for the power and sample entropy. The proposed method does not require any reference signal, prior information or assumption to extract the BCG artefact. It will be very useful in circumstances where the reference signal is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
Objective assessment in digital images of skin erythema caused by radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsubara, H., E-mail: matubara@nirs.go.jp; Matsufuji, N.; Tsuji, H.
Purpose: Skin toxicity caused by radiotherapy has been visually classified into discrete grades. The present study proposes an objective and continuous assessment method of skin erythema in digital images taken under arbitrary lighting conditions, which is the case for most clinical environments. The purpose of this paper is to show the feasibility of the proposed method. Methods: Clinical data were gathered from six patients who received carbon beam therapy for lung cancer. Skin condition was recorded using an ordinary compact digital camera under unfixed lighting conditions; a laser Doppler flowmeter was used to measure blood flow in the skin. The photos and measurements were taken at 3 h, 30, and 90 days after irradiation. Images were decomposed into hemoglobin and melanin colors using independent component analysis. Pixel values in hemoglobin color images were compared with skin dose and skin blood flow. The uncertainty of the practical photographic method was also studied in nonclinical experiments. Results: The clinical data showed good linearity between skin dose, skin blood flow, and pixel value in the hemoglobin color images; their correlation coefficients were larger than 0.7. It was deduced from the nonclinical experiments that the uncertainty due to the proposed method with photography was 15%; such an uncertainty was not critical for assessment of skin erythema in practical use. Conclusions: Feasibility of the proposed method for assessment of skin erythema using digital images was demonstrated. The numerical relationship obtained helped to predict skin erythema by artificial processing of skin images. Although the proposed method using photographs taken under unfixed lighting conditions increased the uncertainty of skin information in the images, it was shown to be powerful for the assessment of skin conditions because of its flexibility and adaptability.
Monte Carlo sampling in diffusive dynamical systems
NASA Astrophysics Data System (ADS)
Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.
2018-05-01
We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.
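The idea of biasing a Markov chain of initial conditions toward large displacements can be illustrated with a few lines of Python; the climbing-sine map, the exponential tail weight, and the Gaussian proposal below are placeholder choices for illustration, not the systems, target, or proposal used in the paper.

import numpy as np

def displacement(x0, a=1.1, n_steps=500):
    """Total displacement of a trajectory of the climbing-sine map (a standard diffusive toy model)."""
    x = x0
    for _ in range(n_steps):
        x = x + a * np.sin(2.0 * np.pi * x)
    return x - x0

def mh_tail_sampler(n_samples=2000, beta=0.5, step=0.05, seed=0):
    """Metropolis-Hastings chain over initial conditions, biased toward large |displacement|."""
    rng = np.random.default_rng(seed)
    x0 = rng.random()
    d = displacement(x0)
    out = []
    for _ in range(n_samples):
        x0_new = (x0 + step * rng.standard_normal()) % 1.0   # correlated proposal within the unit cell
        d_new = displacement(x0_new)
        log_acc = min(0.0, beta * (abs(d_new) - abs(d)))     # target weight ~ exp(beta * |displacement|)
        if rng.random() < np.exp(log_acc):
            x0, d = x0_new, d_new
        out.append((x0, d, np.exp(-beta * abs(d))))          # importance weight to undo the bias later
    return out

samples = mh_tail_sampler()
weights = np.array([w for _, _, w in samples])
disps = np.array([d for _, d, _ in samples])
print("biased mean |displacement|:", np.mean(np.abs(disps)))
print("reweighted mean |displacement|:", np.sum(weights * np.abs(disps)) / np.sum(weights))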
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring low computation. The underlying idea of this proposed work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains results similar to or better than NARM while taking only 0.3% of its computation time.
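The core mechanism, using a random forest to partition the patch space and attaching a linear regressor to each leaf, can be sketched with scikit-learn on synthetic data as follows; this simplified stand-in is not the authors' FIRF implementation and omits the two-stage refinement.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 16))                                           # stand-in for LR patch features
Y = X @ rng.standard_normal((16, 4)) + 0.1 * rng.standard_normal((5000, 4))   # stand-in for HR patches

forest = RandomForestRegressor(n_estimators=3, max_depth=6, random_state=0)
forest.fit(X, Y[:, 0])                            # the forest is only used to partition the patch space

leaf_ids = forest.apply(X)                        # (n_samples, n_trees) leaf indices
models = {}                                       # one linear map per (tree, leaf) subspace
for t in range(leaf_ids.shape[1]):
    for leaf in np.unique(leaf_ids[:, t]):
        idx = leaf_ids[:, t] == leaf
        models[(t, leaf)] = LinearRegression().fit(X[idx], Y[idx])

def predict(x):
    leaves = forest.apply(x.reshape(1, -1))[0]
    preds = [models[(t, leaf)].predict(x.reshape(1, -1))[0] for t, leaf in enumerate(leaves)]
    return np.mean(preds, axis=0)                 # average the per-tree subspace regressions

print(predict(X[0]))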
NASA Astrophysics Data System (ADS)
Miyajo, Akira; Hasegawa, Hideyuki
2018-07-01
At present, the speckle tracking method is widely used as a two- or three-dimensional (2D or 3D) motion estimator for the measurement of cardiovascular dynamics. However, this method requires high-level interpolation of a function, which evaluates the similarity between ultrasonic echo signals in two frames, to estimate a small subsample displacement in high-frame-rate ultrasound, which results in a high computational cost. To overcome this problem, a 2D motion estimator using the 2D Fourier transform, which does not require any interpolation process, was proposed by our group. In this study, we compared the accuracies of the speckle tracking method and our method using a 2D motion estimator, and applied the proposed method to the measurement of motion of a human carotid arterial wall. The bias error and standard deviation in the lateral velocity estimates obtained by the proposed method were 0.048 and 0.282 mm/s, respectively, which were significantly better than those (-0.366 and 1.169 mm/s) obtained by the speckle tracking method. The calculation time of the proposed phase-sensitive method was 97% shorter than that of the speckle tracking method. Furthermore, the in vivo experimental results showed that a characteristic change in velocity around the carotid bifurcation could be detected by the proposed method.
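One common phase-sensitive way to estimate a subsample 2D shift without interpolation is to fit a plane to the phase of the cross-spectrum of the two frames; the numpy sketch below illustrates that general idea on a synthetic shift and should be read as an assumption about the approach, not as the estimator derived in the paper.

import numpy as np

def phase_shift_estimate(f1, f2, keep=5):
    """Estimate a small (sub-pixel) 2D shift of f2 relative to f1 from the cross-spectrum phase slope."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    phase = np.angle(F1 * np.conj(F2))
    n, m = f1.shape
    ky = np.fft.fftfreq(n)                        # cycles per sample
    kx = np.fft.fftfreq(m)
    KX, KY = np.meshgrid(kx, ky)
    # use only low frequencies, where the phase 2*pi*(kx*dx + ky*dy) is not wrapped
    mask = (np.abs(KX) < keep / m) & (np.abs(KY) < keep / n) & ~((KX == 0) & (KY == 0))
    A = 2 * np.pi * np.column_stack([KX[mask], KY[mask]])
    d, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    return d                                      # (dx, dy) in pixels

# synthetic check: shift an image by (0.30, -0.20) pixels in the Fourier domain
rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
KX, KY = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64))
shifted = np.real(np.fft.ifft2(np.fft.fft2(img) * np.exp(-2j * np.pi * (KX * 0.30 + KY * -0.20))))
print(phase_shift_estimate(img, shifted))         # expect approximately [0.30, -0.20]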
NASA Astrophysics Data System (ADS)
Yang, Run-Qiu; Niu, Chao; Zhang, Cheng-Yong; Kim, Keun-Young
2018-02-01
We compute the time-dependent complexity of the thermofield double states by four different proposals: two holographic proposals based on the "complexity-action" (CA) conjecture and "complexity-volume" (CV) conjecture, and two quantum field theoretic proposals based on the Fubini-Study metric (FS) and Finsler geometry (FG). We find that the four proposals yield both similarities and differences, which will be useful to deepen our understanding of the complexity and sharpen its definition. In particular, at early times the complexity increases linearly in the CV and FG proposals, decreases linearly in the FS proposal, and does not change in the CA proposal. In the late time limit, the CA, CV and FG proposals all show that the growth rate is 2E/(πℏ), saturating Lloyd's bound, while the FS proposal shows that the growth rate is zero. It seems that the holographic CV conjecture and the field theoretic FG method are more correlated.
A method to reproduce alpha-particle spectra measured with semiconductor detectors.
Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín
2010-01-01
A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from (233)U and (241)Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra. Copyright 2009 Elsevier Ltd. All rights reserved.
Enhanced facial texture illumination normalization for face recognition.
Luo, Yong; Guan, Ye-Peng
2015-08-01
An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
On the use of interaction error potentials for adaptive brain computer interfaces.
Llera, A; van Gerven, M A J; Gómez, V; Jensen, O; Kappen, H J
2011-12-01
We propose an adaptive classification method for the Brain Computer Interfaces (BCI) which uses Interaction Error Potentials (IErrPs) as a reinforcement signal and adapts the classifier parameters when an error is detected. We analyze the quality of the proposed approach in relation to the misclassification of the IErrPs. In addition we compare static versus adaptive classification performance using artificial and MEG data. We show that the proposed adaptive framework significantly improves the static classification methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are a common problem in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular debiasing method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
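For reference, the augmented inverse probability weighting (AIPW) estimator that the paper builds on can be written in a few lines; the Python sketch below uses logistic propensity and linear outcome working models as illustrative choices and shows the classical doubly robust estimator of a mean response, not the authors' empirical-likelihood alternative.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))
true_y = X @ np.array([1.0, -0.5, 0.25]) + rng.standard_normal(n)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + X[:, 0])))     # missingness depends on the covariates
R = rng.random(n) < p_obs                          # R = 1 if the response is observed
Y = np.where(R, true_y, np.nan)

# working models: propensity score and outcome regression (both illustrative)
pi_hat = LogisticRegression().fit(X, R).predict_proba(X)[:, 1]
m_hat = LinearRegression().fit(X[R], Y[R]).predict(X)

# AIPW (doubly robust) estimator of E[Y]
y_filled = np.where(R, Y, 0.0)
mu_aipw = np.mean(R * y_filled / pi_hat + (1.0 - R / pi_hat) * m_hat)
print(mu_aipw, true_y.mean())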
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have been proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of the finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
A new frequency approach for light flicker evaluation in electric power systems
NASA Astrophysics Data System (ADS)
Feola, Luigi; Langella, Roberto; Testa, Alfredo
2015-12-01
In this paper, a new analytical estimator for light flicker in the frequency domain is proposed, which also takes into account the frequency components neglected by the classical methods in the literature. The proposed analytical solutions apply to any stationary signal affected by interharmonic distortion. The proposed analytical light flicker estimator is applied to numerous numerical case studies with the goal of showing i) the correctness and the improvements of the proposed analytical approach with respect to other methods in the literature and ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
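The symmetric rank-one (SR1) Hessian update mentioned here is a standard quasi-Newton formula; a minimal Python version with the usual safeguard against a near-zero denominator is shown below purely as background, not as the authors' full RBDO code.

import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one update of a Hessian approximation B.

    s : step in the variables,   s = x_new - x_old
    y : change in the gradient,  y = g_new - g_old
    """
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B                          # skip the update when it is numerically unsafe
    return B + np.outer(r, r) / denom

# toy usage on a quadratic with Hessian H: the update reproduces curvature along s
H = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
s = np.array([1.0, -0.5])
y = H @ s                                 # for a quadratic, the gradient change is H s
B = sr1_update(B, s, y)
print(B @ s, y)                           # secant condition: B_new s equals y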
Crowd motion segmentation and behavior recognition fusing streak flow and collectiveness
NASA Astrophysics Data System (ADS)
Gao, Mingliang; Jiang, Jun; Shen, Jin; Zou, Guofeng; Fu, Guixia
2018-04-01
Crowd motion segmentation and crowd behavior recognition are two hot issues in computer vision. A number of methods have been proposed to tackle these two problems. Among the methods, flow dynamics is utilized to model the crowd motion, with little consideration of the collective property. Moreover, the traditional crowd behavior recognition methods treat local features and dynamic features separately and overlook the interconnection of topological and dynamical heterogeneity in complex crowd processes. A crowd motion segmentation method and a crowd behavior recognition method are proposed based on streak flow and crowd collectiveness. The streak flow is adopted to reveal the dynamical property of crowd motion, and the collectiveness is incorporated to reveal the structural property. Experimental results show that the proposed methods improve the crowd motion segmentation accuracy and the crowd behavior recognition rates compared with the state-of-the-art methods.
Infrared dim small target segmentation method based on ALI-PCNN model
NASA Astrophysics Data System (ADS)
Zhao, Shangnan; Song, Yong; Zhao, Yufei; Li, Yun; Li, Xu; Jiang, Yurong; Li, Lin
2017-10-01
In this paper, the Pulse Coupled Neural Network (PCNN) is improved by Adaptive Lateral Inhibition (ALI), and a method for infrared (IR) dim small target segmentation based on the ALI-PCNN model is proposed. Firstly, the feeding input signal is modulated by a lateral inhibition network to suppress the background. Then, the linking input is modulated by ALI, and the linking weight matrix is generated adaptively by calculating the ALI coefficient of each pixel. Finally, the binary image is generated through the nonlinear modulation and the pulse generator in the PCNN. The experimental results show that the segmentation effect as well as the values of contrast across region and uniformity across region of the proposed method are better than those of the OTSU method, the maximum entropy method, and the methods based on conventional PCNN and visual attention, and the proposed method has excellent performance in extracting IR dim small targets from complex backgrounds.
Yang, Licai; Shen, Jun; Bao, Shudi; Wei, Shoushui
2013-10-01
To address the problems of identification performance and algorithm complexity, we proposed a piecewise linear representation and dynamic time warping (PLR-DTW) method for ECG biometric identification. Firstly, we detected R peaks to get the heartbeats after denoising preprocessing. Then we used the PLR method to keep the important information of an ECG signal segment while reducing the data dimension at the same time. The improved DTW method was used for similarity measurements between the test data and the templates. The performance evaluation was carried out on two ECG databases: PTB and MIT-BIH. The analysis results showed that, compared to the discrete wavelet transform method, the proposed PLR-DTW method achieved an accuracy rate nearly 8% higher and saved about 30% of the operation time, demonstrating that the proposed method can provide better performance.
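For orientation, the basic dynamic time warping distance underlying the similarity measurement has a compact dynamic-programming form, sketched below in Python; the paper's improved DTW and the PLR preprocessing step are not reproduced.

import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# toy usage: a time-shifted copy of a heartbeat-like template scores much lower than noise
t = np.linspace(0, 1, 100)
beat = np.exp(-((t - 0.5) ** 2) / 0.005)
print(dtw_distance(beat, np.roll(beat, 7)))                                 # small
print(dtw_distance(beat, np.random.default_rng(0).standard_normal(100)))    # large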
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c
This study is mainly focused on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to the problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods compared with other classic and popular iterative methods. In addition, the experimental results also indicate that application-specific preconditioners may be required for accelerating convergence.
The role of interest and inflation rates in life-cycle cost analysis
NASA Technical Reports Server (NTRS)
Eisenberger, I.; Remer, D. S.; Lorden, G.
1978-01-01
The effect of projected interest and inflation rates on life cycle cost calculations is discussed and a method is proposed for making such calculations which replaces these rates by a single parameter. Besides simplifying the analysis, the method clarifies the roles of these rates. An analysis of historical interest and inflation rates from 1950 to 1976 shows that the proposed method can be expected to yield very good projections of life cycle cost even if the rates themselves fluctuate considerably.
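A standard way to collapse an interest rate i and an inflation rate f into a single parameter is the real (inflation-adjusted) discount rate r = (1 + i)/(1 + f) - 1; the short Python illustration below discounts an inflation-escalating annual cost with this one parameter. The rate values are examples, not figures from the report, and the formula is offered as a plausible reading of the single-parameter idea rather than the report's exact derivation.

def real_rate(interest, inflation):
    """Single parameter combining interest and inflation: the real discount rate."""
    return (1.0 + interest) / (1.0 + inflation) - 1.0

def life_cycle_cost(annual_cost_today, years, interest, inflation):
    """Present value of a recurring cost that escalates with inflation and is discounted at the interest rate."""
    r = real_rate(interest, inflation)
    return sum(annual_cost_today / (1.0 + r) ** t for t in range(1, years + 1))

# example: $10,000/year for 20 years, 7% interest, 5% inflation
print(life_cycle_cost(10_000, 20, 0.07, 0.05))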
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
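A minimal sketch of the product-of-coefficients mediation estimate computed with median (quantile) regression is given below, using statsmodels' QuantReg on simulated heavy-tailed data; the variable names and data-generating model are illustrative, and the paper's inference procedures for the indirect effect are not reproduced.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
x = rng.standard_normal(n)                        # treatment / predictor
m = 0.5 * x + rng.standard_t(df=3, size=n)        # mediator with heavy-tailed errors
y = 0.7 * m + 0.2 * x + rng.standard_t(df=3, size=n)

# path a: mediator regressed on the predictor, via median regression
a_fit = sm.QuantReg(m, sm.add_constant(x)).fit(q=0.5)
# path b: outcome regressed on the mediator, controlling for the predictor
b_fit = sm.QuantReg(y, sm.add_constant(np.column_stack([m, x]))).fit(q=0.5)

a = a_fit.params[1]                               # coefficient of x in the mediator model
b = b_fit.params[1]                               # coefficient of m in the outcome model
print("indirect (mediated) effect a*b:", a * b)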
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-01-01
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Breast Cancer Recognition Using a Novel Hybrid Intelligent Method
Addeh, Jalil; Ebrahimzadeh, Ata
2012-01-01
Breast cancer is the second largest cause of cancer deaths among women. At the same time, it is also among the most curable cancer types if it can be diagnosed early. This paper presents a novel hybrid intelligent method for recognition of breast cancer tumors. The proposed method includes three main modules: the feature extraction module, the classifier module, and the optimization module. In the feature extraction module, fuzzy features are proposed as the efficient characteristic of the patterns. In the classifier module, because of the promising generalization capability of support vector machines (SVM), a SVM-based classifier is proposed. In support vector machine training, the hyperparameters have very important roles for its recognition accuracy. Therefore, in the optimization module, the bees algorithm (BA) is proposed for selecting appropriate parameters of the classifier. The proposed system is tested on Wisconsin Breast Cancer database and simulation results show that the recommended system has a high accuracy. PMID:23626945
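As background for the classifier and optimization modules, the sketch below tunes an RBF-SVM on the scikit-learn copy of the Wisconsin breast cancer data; a randomized search is used here as a simple stand-in for the bees algorithm, and the fuzzy feature extraction module is omitted.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# hyperparameter search (stand-in for the bees algorithm of the optimization module)
param_dist = {"C": np.logspace(-2, 3, 50), "gamma": np.logspace(-5, 0, 50)}
search = RandomizedSearchCV(SVC(kernel="rbf"), param_dist, n_iter=40, cv=5, random_state=0)
search.fit(X_tr, y_tr)

print("best parameters:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_te, y_te))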
Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang
2017-01-01
Brain functional network analysis has shown great potential in understanding brain functions and also in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of a biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its l1-norm penalty simply penalizes each edge of a brain network equally, without considering the original connectivity strength, which is one of the most important inherent linkwise characteristics. Besides, based on the similarity of the linkwise connectivity, the brain network shows a prominent group structure (i.e., a set of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a “connectivity strength-weighted sparse group constraint.” In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on the resting-state functional MRI, from 50 MCI patients and 49 healthy controls, show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows that more biologically meaningful brain functional connectivities are obtained by our proposed method. PMID:28150897
Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M
2017-05-01
Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of the AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, SVR and PC-linANN, respectively. The results showed the superiority of the supervised learning machine methods over the principal component-based methods. Besides, the results suggested that linANN is the method of choice for the determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
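For orientation, the PLS branch of such a comparison and an RMSEP-style figure of merit can be sketched in a few lines of Python with scikit-learn; the synthetic overlapped spectra below merely stand in for the spectrofluorimetric data, and RMSEP is computed here as a root mean squared error of prediction, one common convention.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 200
C = rng.uniform(0.1, 1.0, size=(n_samples, 3))             # AGM + two degradants (synthetic concentrations)
S = np.abs(rng.standard_normal((3, n_wavelengths)))        # overlapped synthetic spectral profiles
spectra = C @ S + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, C[:, 0], test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_te - y_pred) ** 2))             # root mean squared error of prediction
print("RMSEP:", rmsep)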
NASA Astrophysics Data System (ADS)
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
Improvement of the Owner Distinction Method for Healing-Type Pet Robots
NASA Astrophysics Data System (ADS)
Nambo, Hidetaka; Kimura, Haruhiko; Hara, Mirai; Abe, Koji; Tajima, Takuya
Animal Assisted Therapy, in which pets are used to heal humans, has attracted attention as a way to decrease human stress. However, since animals raise sanitation and safety concerns, it is difficult to apply animal pets in hospitals in practice. For this reason, pet robots have attracted attention as a substitute for animal pets. Since pet robots pose no problems in sanitation and safety, they can be applied as a substitute for animal pets in the therapy. In our previous study, in which pet robots distinguish their owners as an animal pet does, we used a puppet-type pet robot which has pressure-type touch sensors. However, the accuracy of our method was not sufficient for practical use. In this paper, we propose a method to improve the accuracy of the distinction. The proposed method can be applied to capacitive touch sensors, such as those installed in AIBO, in addition to pressure-type touch sensors. In addition, this paper shows the performance of the proposed method from experimental results and confirms that the proposed method improves the distinction performance over the conventional method.
An information theory criteria based blind method for enumerating active users in DS-CDMA system
NASA Astrophysics Data System (ADS)
Samsami Khodadad, Farid; Abed Hodtani, Ghosheh
2014-11-01
In this paper, a new and blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. There are two main categories of information criteria widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator which has better performance at higher signal-to-noise ratios (SNRs), whereas AIC is preferred at lower SNRs. Subsequently, we propose an SNR-compliant method based on subspace analysis and a training genetic algorithm to obtain the performance of both. Moreover, unlike previous methods, our method uses only a single antenna, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
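The AIC and MDL enumerators referred to here are commonly computed from the eigenvalues of the sample covariance matrix (the Wax-Kailath formulation); a compact Python version is given below as general background, without the multipath- and CDMA-specific subspace processing of the paper.

import numpy as np

def aic_mdl_enumeration(R, n_snapshots):
    """Estimate the number of sources from the eigenvalues of a sample covariance matrix R."""
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]        # eigenvalues in decreasing order
    p = len(lam)
    aic, mdl = [], []
    for k in range(p):
        noise = lam[k:]
        g = np.exp(np.mean(np.log(noise)))            # geometric mean of the smallest p-k eigenvalues
        a = np.mean(noise)                            # arithmetic mean
        ll = -n_snapshots * (p - k) * np.log(g / a)   # log-likelihood term
        aic.append(2 * ll + 2 * k * (2 * p - k))
        mdl.append(ll + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(aic)), int(np.argmin(mdl))

# toy check: 3 sources in white noise on a 10-element array
rng = np.random.default_rng(0)
p, N, n_src = 10, 500, 3
A = rng.standard_normal((p, n_src)) + 1j * rng.standard_normal((p, n_src))
S = rng.standard_normal((n_src, N)) + 1j * rng.standard_normal((n_src, N))
X = A @ S + 0.3 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
R = X @ X.conj().T / N
print(aic_mdl_enumeration(R, N))                      # expect (3, 3)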
Real-Time Single Frequency Precise Point Positioning Using SBAS Corrections
Li, Liang; Jia, Chun; Zhao, Lin; Cheng, Jianhua; Liu, Jianxu; Ding, Jicheng
2016-01-01
Real-time single frequency precise point positioning (PPP) is a promising technique for high-precision navigation with sub-meter or even centimeter-level accuracy because of its convenience and low cost. The navigation performance of single frequency PPP heavily depends on the real-time availability and quality of correction products for satellite orbits and satellite clocks. Satellite-based augmentation system (SBAS) provides the correction products in real-time, but they are intended to be used for wide area differential positioning at 1 meter level precision. By imposing constraints on the ionosphere error, we have developed a real-time single frequency PPP method that fully utilizes SBAS correction products. The proposed PPP method is tested with static and kinematic data, respectively. The static experimental results show that the position accuracy of the proposed PPP method can reach the decimeter level and achieves an improvement of at least 30% when compared with the traditional SBAS method. The positioning convergence of the proposed PPP method can be achieved in 636 epochs at most in static mode. In the kinematic experiment, the position accuracy of the proposed PPP method can be improved by at least 20 cm relative to the SBAS method. Furthermore, the results reveal that the proposed PPP method can achieve decimeter level convergence within 500 s in the kinematic mode. PMID:27517930
Community Detection in Complex Networks via Clique Conductance.
Lu, Zhenqi; Wahlström, Johan; Nehorai, Arye
2018-04-13
Network science plays a central role in understanding and modeling complex systems in many areas including physics, sociology, biology, computer science, economics, politics, and neuroscience. One of the most important features of networks is community structure, i.e., clustering of nodes that are locally densely interconnected. Communities reveal the hierarchical organization of nodes, and detecting communities is of great importance in the study of complex systems. Most existing community-detection methods consider low-order connection patterns at the level of individual links. But high-order connection patterns, at the level of small subnetworks, are generally not considered. In this paper, we develop a novel community-detection method based on cliques, i.e., local complete subnetworks. The proposed method overcomes the deficiencies of previous similar community-detection methods by considering the mathematical properties of cliques. We apply the proposed method to computer-generated graphs and real-world network datasets. When applied to networks with known community structure, the proposed method detects the structure with high fidelity and sensitivity. When applied to networks with no a priori information regarding community structure, the proposed method yields insightful results revealing the organization of these complex networks. We also show that the proposed method is guaranteed to detect near-optimal clusters in the bipartition case.
Experimental evaluation of fingerprint verification system based on double random phase encoding
NASA Astrophysics Data System (ADS)
Suzuki, Hiroyuki; Yamaguchi, Masahiro; Yachida, Masuyoshi; Ohyama, Nagaaki; Tashima, Hideaki; Obi, Takashi
2006-03-01
We proposed a smart card holder authentication system that combines fingerprint verification with PIN verification by applying a double random phase encoding scheme. In this system, the probability of accurate verification of an authorized individual decreases when the fingerprint is shifted significantly. In this paper, a review of the proposed system is presented and a preprocessing step for improving the false rejection rate is proposed. In the proposed method, the position difference between two fingerprint images is estimated by using an optimized template for core detection. When the estimated difference exceeds the permissible level, the user inputs the fingerprint again. The effectiveness of the proposed method is confirmed by a computational experiment; its results show that the false rejection rate is improved.
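For background, classical double random phase encoding itself takes only a few lines of numpy: one random phase mask is applied in the input plane and one in the Fourier plane, and decoding reverses the steps with the conjugate masks. The sketch below is generic DRPE, not the authors' fingerprint verification pipeline.

import numpy as np

rng = np.random.default_rng(0)
N = 128
image = rng.random((N, N))                          # stand-in for a fingerprint image

phi1 = np.exp(2j * np.pi * rng.random((N, N)))      # random phase mask in the input plane (the "key")
phi2 = np.exp(2j * np.pi * rng.random((N, N)))      # random phase mask in the Fourier plane

def encode(f):
    return np.fft.ifft2(np.fft.fft2(f * phi1) * phi2)

def decode(c):
    return np.fft.ifft2(np.fft.fft2(c) * np.conj(phi2)) * np.conj(phi1)

cipher = encode(image)
recovered = np.real(decode(cipher))
print(np.allclose(recovered, image))                # True: decoding with the correct keys restores the image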
Phase retrieval using a modified Shack-Hartmann wavefront sensor with defocus.
Li, Changwei; Li, Bangming; Zhang, Sijiong
2014-02-01
This paper proposes a modified Shack-Hartmann wavefront sensor for phase retrieval. The sensor is revamped by placing a detector at a defocused plane before the focal plane of the lenslet array of the Shack-Hartmann sensor. The algorithm for phase retrieval is an optimization with initial Zernike coefficients calculated by the conventional phase reconstruction of the Shack-Hartmann sensor. Numerical simulations show that the proposed sensor permits sensitive, accurate phase retrieval. Furthermore, experiments tested the feasibility of phase retrieval using the proposed sensor. The surface irregularity for a flat mirror was measured by the proposed method and a Veeco interferometer, respectively. The irregularity for the mirror measured by the proposed method is in very good agreement with that measured using the Veeco interferometer.
NASA Astrophysics Data System (ADS)
Mori, Kazuo; Naito, Katsuhiro; Kobayashi, Hideo
This paper proposes an asymmetric traffic accommodation scheme using a multihop transmission technique for CDMA/FDD cellular communication systems. The proposed scheme applies multihop transmission to downlink packet transmissions that require large transmission power in their single-hop transmissions, in order to increase the downlink capacity. In these multihop transmissions, the vacant uplink band is used for the transmissions from relay stations to destination mobile stations, and this leads to further capacity enhancement in the downlink communications. The relay route selection method and power control method for the multihop transmissions are also investigated in the proposed scheme. The proposed scheme is evaluated by computer simulation and the results show that the proposed scheme can achieve better system performance.
An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.
Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei
2016-01-11
Spectral analysis based on a near infrared (NIR) sensor is a powerful tool for complex information processing and high precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA) as well as the lack of association between selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and is applied to the quantitative prediction of alcohol concentrations in liquor using an NIR sensor. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, the proposed EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method can overcome the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the selected variables from the near infrared sensor data is clear, which helps to effectively reduce the number of variables and improve the prediction accuracy.
NASA Astrophysics Data System (ADS)
Chen, Junxun; Cheng, Longsheng; Yu, Hui; Hu, Shaolin
2018-01-01
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002
Iodine Absorption Cells Purity Testing.
Hrabina, Jan; Zucco, Massimo; Philippe, Charles; Pham, Tuan Minh; Holá, Miroslava; Acef, Ouali; Lazar, Josef; Číp, Ondřej
2017-01-06
This article deals with the evaluation of the chemical purity of iodine-filled absorption cells and the optical frequency references used for the frequency locking of laser standards. We summarize the recent trends and progress in absorption cell technology and we focus on methods for iodine cell purity testing. We compare two independent experimental systems based on the laser-induced fluorescence method, showing an improvement of measurement uncertainty by introducing a compensation system reducing unwanted influences. We show the advantages of this technique, which is relatively simple and does not require extensive hardware equipment. As an alternative to the traditionally used methods we propose an approach of hyperfine transitions' spectral linewidth measurement. The key characteristic of this method is demonstrated on a set of testing iodine cells. The relationship between laser-induced fluorescence and transition linewidth methods will be presented as well as a summary of the advantages and disadvantages of the proposed technique (in comparison with traditional measurement approaches).
Image-guided spatial localization of heterogeneous compartments for magnetic resonance
An, Li; Shen, Jun
2015-01-01
Purpose: The image-guided localization method SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments and their spectra were solved by Tikhonov regularization to enforce smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of the components of that compartment. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization results in ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization errors and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977
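The regularization step described here amounts to a Tikhonov-type penalized least squares solve with a smoothness penalty inside a compartment; the generic numpy form below, with a first-difference penalty and a made-up sensitivity matrix, is an assumption about the general machinery rather than the exact SPLASH formulation.

import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam * ||L x||^2 via the normal equations."""
    lhs = A.conj().T @ A + lam * (L.T @ L)
    rhs = A.conj().T @ b
    return np.linalg.solve(lhs, rhs)

# toy setup: 8 sub-compartment signals observed through a 6-coil sensitivity matrix
rng = np.random.default_rng(0)
n_sub, n_coils = 8, 6
A = rng.standard_normal((n_coils, n_sub)) + 1j * rng.standard_normal((n_coils, n_sub))
x_true = np.linspace(1.0, 2.0, n_sub)               # smoothly varying signal within the compartment
b = A @ x_true + 0.05 * (rng.standard_normal(n_coils) + 1j * rng.standard_normal(n_coils))

L = np.diff(np.eye(n_sub), axis=0)                  # first-difference operator enforcing smoothness
x_hat = tikhonov_solve(A, b, L, lam=0.5)
print(np.real(x_hat))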
Microaneurysm detection using fully convolutional neural networks.
Chudzik, Piotr; Majumdar, Somshubra; Calivá, Francesco; Al-Diri, Bashir; Hunter, Andrew
2018-05-01
Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. A novel patch-based fully convolutional neural network with batch normalization layers and a Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper that shows how to successfully transfer knowledge between datasets in the microaneurysm detection domain. The proposed method was evaluated using three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods using the FROC metric. The proposed algorithm accomplished the highest sensitivities for low false positive rates, which is particularly important for screening purposes. The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications. Copyright © 2018 Elsevier B.V. All rights reserved.
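The Dice loss used to train the network has a simple closed form; a soft (differentiable) version is shown below in plain numpy for reference, while the network architecture itself is not reproduced.

import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss for a probability map `pred` and a binary mask `target` of the same shape."""
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# toy usage: a perfect prediction gives a loss near 0, a disjoint one a loss near 1
mask = np.zeros((64, 64))
mask[10:20, 10:20] = 1.0
print(soft_dice_loss(mask, mask))          # ~0.0
print(soft_dice_loss(1.0 - mask, mask))    # ~1.0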
Statistical Method to Overcome Overfitting Issue in Rational Function Models
NASA Astrophysics Data System (ADS)
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in the geometric correction of satellite images and in map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant against the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique indeed shows an improvement of 50-80% over TR.
Zhao, Jiaduo; Gong, Weiguo; Tang, Yuzhen; Li, Weihong
2016-01-20
In this paper, we propose an effective human and nonhuman pyroelectric infrared (PIR) signal recognition method to reduce PIR detector false alarms. First, using the mathematical model of the PIR detector, we analyze the physical characteristics of the human and nonhuman PIR signals; second, based on the analysis results, we propose an empirical mode decomposition (EMD)-based symbolic dynamic analysis method for the recognition of human and nonhuman PIR signals. In the proposed method, first, we extract the detailed features of a PIR signal into five symbol sequences using an EMD-based symbolization method, then, we generate five feature descriptors for each PIR signal through constructing five probabilistic finite state automata with the symbol sequences. Finally, we use a weighted voting classification strategy to classify the PIR signals with their feature descriptors. Comparative experiments show that the proposed method can effectively classify the human and nonhuman PIR signals and reduce PIR detector's false alarms.
NASA Astrophysics Data System (ADS)
Hai-Jung In; Oh-Kyong Kwon
2010-03-01
A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, which include line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land-water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HSR images with good performance.
Estimation of Monthly Near Surface Air Temperature Using Geographically Weighted Regression in China
NASA Astrophysics Data System (ADS)
Wang, M. M.; He, G. J.; Zhang, Z. M.; Zhang, Z. J.; Liu, X. G.
2018-04-01
Near surface air temperature (NSAT) is a primary descriptor of terrestrial environmental conditions. The availability of NSAT with high spatial resolution is necessary for several applications such as hydrology, meteorology and ecology. In this study, a regression-based NSAT mapping method is proposed. This method combines remote sensing variables with geographical variables and uses geographically weighted regression to estimate NSAT. Altitude was selected as the geographical variable, and the remote sensing variables include land surface temperature (LST) and the normalized difference vegetation index (NDVI). The performance of the proposed method was assessed by predicting monthly minimum, mean, and maximum NSAT from point station measurements in China, a domain with a large area, complex topography, and highly variable station density, and the NSAT maps were validated against meteorological observations. Validation results show that the proposed method achieved an accuracy of 1.58 °C. It is concluded that the proposed method for mapping NSAT is operational and has good precision.
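A minimal numerical sketch of the geographically weighted regression step is given below; the Gaussian distance kernel, the bandwidth, and the synthetic station data are assumptions for illustration, not the study's configuration.

```python
import numpy as np

def gwr_predict(coords, X, y, coords_new, X_new, bandwidth):
    """Geographically weighted regression with a Gaussian distance kernel.

    coords : (n, 2) station locations; X : (n, p) predictors (e.g., LST, NDVI,
    altitude); y : (n,) observed NSAT.  A local weighted least-squares model
    is fitted at every target location and used for prediction there.
    """
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    preds = np.empty(len(coords_new))
    for i, (c, x) in enumerate(zip(coords_new, X_new)):
        d = np.linalg.norm(coords - c, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)    # Gaussian kernel weights
        W = np.diag(w)
        beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
        preds[i] = np.r_[1.0, x] @ beta
    return preds

# toy usage with synthetic data (all values are placeholders)
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(50, 2))
X = rng.normal(size=(50, 3))                       # e.g., LST, NDVI, altitude
y = 15 + 0.8 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.5, size=50)
print(gwr_predict(coords, X, y, coords[:3], X[:3], bandwidth=30.0))
```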
Towards a robust HDR imaging system
NASA Astrophysics Data System (ADS)
Long, Xin; Zeng, Xiangrong; Huangpeng, Qizi; Zhou, Jinglun; Feng, Jing
2016-07-01
High dynamic range (HDR) images can show more details and luminance information in general display device than low dynamic image (LDR) images. We present a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.
Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling
NASA Astrophysics Data System (ADS)
Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji
We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
Histogram equalization with Bayesian estimation for noise robust speech recognition.
Suh, Youngjoo; Kim, Hoirin
2018-02-01
The histogram equalization approach is an efficient feature normalization technique for noise robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali
2005-09-01
To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.
Urban noise functional stratification for estimating average annual sound level.
Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos
2015-06-01
Road traffic noise causes many health problems and the deterioration of the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability with measurements taken over a week. In this work, continuous measurements over 1 year carried out at 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(And), L(Aden), and L(A24)). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the sampling points and the measurement time.
Goto, Takahiro; Aoyagi, Toshio
2018-01-01
Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. A neural oscillation is a rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show same-frequency and cross-frequency synchronization during various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results. PMID:29337999
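As a rough illustration of how such a coupling function can be identified, the sketch below fits a low-order Fourier expansion of the coupling term to the instantaneous frequency of one oscillator by least squares. The reduced model form, harmonic order, and synthetic phases are assumptions; the paper's estimator may differ in detail.

```python
import numpy as np

def estimate_coupling(phi1, phi2, dt, n_harmonics=3):
    """Least-squares estimate of a phase-oscillator coupling function.

    Assumes the reduced model d(phi1)/dt = omega + Gamma(phi2 - phi1), with
    Gamma expanded in a low-order Fourier series.  Returns (omega, coeffs),
    where coeffs holds the cosine/sine coefficients of Gamma.
    """
    dphi = np.diff(np.unwrap(phi1)) / dt          # instantaneous frequency
    psi = (phi2 - phi1)[:-1]                      # phase difference
    cols = [np.ones_like(psi)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * psi), np.sin(k * psi)]
    B = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(B, dphi, rcond=None)
    return theta[0], theta[1:]

# synthetic test with drifting (unlocked) phases so the basis covers a full cycle;
# Gamma(psi) = 0.4*sin(psi), natural frequencies 0.8 Hz and 1.0 Hz (illustrative)
dt, T = 0.01, 200.0
t = np.arange(0, T, dt)
phi2 = 2 * np.pi * 1.0 * t
phi1 = np.zeros_like(t)
for i in range(1, len(t)):
    phi1[i] = phi1[i - 1] + dt * (2 * np.pi * 0.8
                                  + 0.4 * np.sin(phi2[i - 1] - phi1[i - 1]))
omega, coeffs = estimate_coupling(phi1, phi2, dt)
print(omega / (2 * np.pi), coeffs[1])             # expect ~0.8 Hz and ~0.4
```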
Reinforcement learning state estimator.
Morimoto, Jun; Doya, Kenji
2007-03-01
In this study, we propose a novel use of reinforcement learning for estimating hidden variables and parameters of nonlinear dynamical systems. A critical issue in hidden-state estimation is that we cannot directly observe estimation errors. However, by defining errors of observable variables as a delayed penalty, we can apply a reinforcement learning framework to state estimation problems. Specifically, we derive a method to construct a nonlinear state estimator by finding an appropriate feedback input gain using the policy gradient method. We tested the proposed method on single pendulum dynamics and show that the joint angle variable could be successfully estimated by observing only the angular velocity, and vice versa. In addition, we show that we could acquire a state estimator for the pendulum swing-up task in which a swing-up controller is also acquired by reinforcement learning simultaneously. Furthermore, we demonstrate that it is possible to estimate the dynamics of the pendulum itself while the hidden variables are estimated in the pendulum swing-up task. Application of the proposed method to a two-linked biped model is also presented.
Time Series Decomposition into Oscillation Components and Phase Estimation.
Matsuda, Takeru; Komaki, Fumiyasu
2017-02-01
Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model like the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes' method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
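The following sketch shows the kind of stochastic-oscillator state-space model the abstract refers to, filtered with a standard Kalman filter; here the frequency, damping, and noise variances are fixed by hand, whereas the paper selects them via the empirical Bayes method and AIC.

```python
import numpy as np

def oscillator_kalman(y, f, dt, a=0.99, q=1e-2, r=1e-1):
    """Kalman filter for a single stochastic-oscillator state-space model:
    x_t = a * R(2*pi*f*dt) x_{t-1} + w_t,   y_t = [1, 0] x_t + v_t.
    Returns the filtered two-dimensional states and the phase estimate
    atan2(x2, x1) of the oscillation component."""
    th = 2 * np.pi * f * dt
    A = a * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    H = np.array([[1.0, 0.0]])
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    states = np.zeros((len(y), 2))
    for t, yt in enumerate(y):
        x, P = A @ x, A @ P @ A.T + Q                  # predict
        S = H @ P @ H.T + R
        K = (P @ H.T) / S                              # Kalman gain
        x = x + (K * (yt - H @ x)).ravel()             # update state
        P = P - K @ H @ P                              # update covariance
        states[t] = x
    phase = np.arctan2(states[:, 1], states[:, 0])
    return states, phase

# toy usage: noisy 10 Hz sinusoid sampled at 200 Hz (illustrative values)
fs = 200.0
t = np.arange(0, 2, 1 / fs)
y = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(7).normal(size=t.size)
states, phase = oscillator_kalman(y, f=10.0, dt=1 / fs)
```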
NASA Astrophysics Data System (ADS)
El Harti, Abderrazak; Lhissou, Rachid; Chokmani, Karem; Ouzemou, Jamal-eddine; Hassouna, Mohamed; Bachaoui, El Mostafa; El Ghmari, Abderrahmene
2016-08-01
Soil salinization is a major environmental issue in irrigated agricultural production. Conventional methods for salinization monitoring are time-consuming and costly, and are limited by the high spatiotemporal variability of this phenomenon. This work proposes a spatiotemporal monitoring method for soil salinization in the Tadla plain in central Morocco using spectral indices derived from Thematic Mapper (TM) and Operational Land Imager (OLI) data. Six Landsat TM/OLI satellite images acquired over a 13-year period (2000-2013), coupled with in-situ electrical conductivity (EC) measurements, were used to develop the proposed method. After radiometric and atmospheric correction of the TM/OLI images, a new soil salinity index (OLI-SI) is proposed for soil EC estimation. Validation shows that this index allowed satisfactory EC estimation in the Tadla irrigated perimeter, with a coefficient of determination R2 varying from 0.55 to 0.77 and a root mean square error (RMSE) ranging between 1.02 dS/m and 2.35 dS/m. The time series of salinity maps produced over the Tadla plain using the proposed method shows that salinity is decreasing in intensity and progressively increasing in spatial extent over the 2000-2013 period. This trend resulted in a decrease in agricultural activities in the southwestern part of the perimeter, located in the hydraulic downstream.
NASA Astrophysics Data System (ADS)
Ye, Y.
2017-09-01
This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. In the proposed method, we first develop a pixel-wise feature descriptor named Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. Then a fast similarity metric based on DOGH is built in the frequency domain using the Fast Fourier Transform (FFT) technique. Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric offers superior matching performance and computational efficiency compared with state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e., ENVI and ERDAS) in both registration accuracy and computational efficiency.
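The FFT-accelerated matching step can be illustrated with plain zero-mean cross-correlation on a single-channel descriptor map, as sketched below; the actual method uses the multi-channel DOGH descriptor and its own similarity metric, so this is only a schematic of the frequency-domain speed-up.

```python
import numpy as np

def fft_match(descriptor_img, descriptor_tpl):
    """Template matching in the frequency domain via cross-correlation.

    Both inputs are assumed to be single-channel descriptor maps; plain
    zero-mean cross-correlation is used here only to show the FFT trick.
    Returns the (row, col) offset of the best match in the image.
    """
    tpl = descriptor_tpl - descriptor_tpl.mean()
    img = descriptor_img - descriptor_img.mean()
    H, W = img.shape
    # zero-pad the template to the image size and correlate via FFT
    F_img = np.fft.rfft2(img)
    F_tpl = np.fft.rfft2(tpl, s=(H, W))
    corr = np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))
    return np.unravel_index(np.argmax(corr), corr.shape)

# toy check: embed a patch in a random image and recover its location
rng = np.random.default_rng(2)
img = rng.normal(size=(256, 256))
tpl = img[40:72, 100:132].copy()
print(fft_match(img, tpl))        # expected to be near (40, 100)
```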
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-01-01
We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as “our previous method”) using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as “our new method”). Our new method detects vehicles based on the thermal energy reflection of tires. We have conducted experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring, and show the effectiveness of our proposal. PMID:23774988
Automatic 3D kidney segmentation based on shape constrained GC-OAAM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua
2011-03-01
The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.
Marker-free motion correction in weight-bearing cone-beam CT of the knee joint
Berger, M.; Müller, K.; Aichert, A.; Unberath, M.; Thies, J.; Choi, J.-H.; Fahrig, R.; Maier, A.
2016-01-01
Purpose: To allow for a purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint. Methods: Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of individual bones is combined into global motion fields using thin-plate-spline extrapolation. These can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as similarity measures for the 2D/3D registration. Results: The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality at the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs where the authors observed slight streaking of the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction, 53.3 and 56.1 for the proposed method using GC and NGI, respectively, and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan. Conclusions: The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management. PMID:26936708
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Single image super-resolution based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia
2018-03-01
We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers, with kernel sizes of 5×5, 3×3 and 1×1. In the proposed network, we use residual learning and combine convolution kernels of different sizes at the same layer. The experimental results show that our proposed method performs better than existing methods in terms of reconstruction quality metrics and human visual assessment on benchmark images.
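A small PyTorch sketch of a five-layer network with mixed kernel sizes and a global residual connection is given below; the channel widths and the parallel-branch fusion are illustrative assumptions and do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SmallSRNet(nn.Module):
    """Five-layer SR network with mixed 5x5/3x3/1x1 kernels and a global
    residual connection, sketched from the abstract's description; the
    channel widths and branch fusion are assumptions for illustration."""

    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(1, 64, kernel_size=5, padding=2)
        # two parallel branches with different kernel sizes, fused by a 1x1 conv
        self.branch3 = nn.Conv2d(64, 32, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(64, 32, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(64, 64, kernel_size=1)
        self.tail = nn.Conv2d(64, 1, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, lr_up):
        # lr_up: bicubic-upsampled LR image; the network predicts the residual
        x = self.act(self.head(lr_up))
        x = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        x = self.act(self.fuse(self.act(x)))
        return lr_up + self.tail(x)              # residual learning

net = SmallSRNet()
print(net(torch.randn(1, 1, 48, 48)).shape)      # torch.Size([1, 1, 48, 48])
```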
NASA Astrophysics Data System (ADS)
Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian
2017-11-01
A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important current source of environmental pollution, are highly carcinogenic. PAH pollutants can be detected using fluorescence spectroscopy; however, the instrument introduces noise in the experiment, and weak fluorescence signals are affected by this noise, so we propose a denoising approach to improve the detection performance. First, a fluorescence spectrometer is used to measure the PAHs and obtain fluorescence spectra. Subsequently, the noise is reduced with the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.
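The denoising step can be sketched as a partial reconstruction from the ensemble decomposition: drop the first, noise-dominated IMFs and sum the rest. The `eemd_func` argument below is a placeholder for any EEMD routine (for example, one from an EMD library or a custom implementation); the number of discarded IMFs is an assumed choice.

```python
import numpy as np

def denoise_by_partial_reconstruction(signal, eemd_func, n_drop=1):
    """Denoise a fluorescence spectrum by EEMD partial reconstruction.

    `eemd_func` is assumed to return an array of IMFs (one per row) for the
    input signal; it is a placeholder for an external EEMD routine.  The
    first `n_drop` IMFs, which usually carry most of the high-frequency
    noise, are discarded, and the remaining IMFs are summed to form the
    denoised spectrum.
    """
    imfs = np.asarray(eemd_func(signal))
    return imfs[n_drop:].sum(axis=0)

# usage sketch (hypothetical names; my_eemd would come from an EEMD library):
# clean_spectrum = denoise_by_partial_reconstruction(raw_spectrum, my_eemd, n_drop=2)
```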
Iris recognition based on robust principal component analysis
NASA Astrophysics Data System (ADS)
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieves competitive performance in both recognition accuracy and computational efficiency.
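A generic principal component pursuit solver, sketched below, illustrates the low-rank plus sparse-error decomposition described above; the solver (inexact ALM) and its parameters are standard defaults, not necessarily those used in the paper.

```python
import numpy as np

def rpca_pcp(D, lam=None, mu=None, n_iter=200, tol=1e-6):
    """Robust PCA by principal component pursuit (inexact ALM), minimizing
    ||L||_* + lam * ||S||_1  subject to  D = L + S.
    D is, for example, a matrix of vectorized training iris images (one per
    column); L captures the low-rank part, S the sparse errors/occlusions."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(D).sum()
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # singular value thresholding for the low-rank component
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # soft thresholding for the sparse error component
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y = Y + mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S
```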
A denoising algorithm for CT image using low-rank sparse coding
NASA Astrophysics Data System (ADS)
Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng
2018-03-01
We propose a denoising method of CT image based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using the Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct CT image based on achieved sparse coefficients. We tested this denoising technology using phantom, brain and abdominal CT images. The experimental results showed that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the areas of computer vision and image processing. Most conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of these methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance, unlike other state-of-the-art methods.
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process to estimate camera poses in marker-based Augmented Reality (AR) systems. We developed AR system that creates correspondence virtual objects on handwritten text markers. This paper presents a new method for registration that is robust for low-content text markers, variation of camera poses, and variation of handwritten styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for a feature point extraction. The experiment shows that we need to extract only five feature points per image which can provide the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared performance of the proposed method to some existing registration methods and found that the proposed method can provide better accuracy and time efficiency.
Fuzzy forecasting based on fuzzy-trend logical relationship groups.
Chen, Shyi-Ming; Wang, Nai-Yi
2010-10-01
In this paper, we present a new method to predict the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) based on fuzzy-trend logical relationship groups (FTLRGs). The proposed method divides fuzzy logical relationships into FTLRGs based on the trend of adjacent fuzzy sets appearing in the antecedents of the fuzzy logical relationships. First, we apply an automatic clustering algorithm to cluster the historical data into intervals of different lengths. We then define fuzzy sets based on these intervals, fuzzify the historical data into fuzzy sets to derive fuzzy logical relationships, and finally divide the fuzzy logical relationships into FTLRGs for forecasting the TAIEX. Moreover, we also apply the proposed method to forecast enrollments and inventory demand, respectively. The experimental results show that the proposed method achieves higher average forecasting accuracy rates than the existing methods.
Failure prediction using machine learning and time series in optical network.
Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo
2017-08-07
In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
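A minimal sketch of the DES-SVM idea is shown below: each monitored indicator is forecast one step ahead with double exponential smoothing, and the forecast feature vector is classified with a support vector machine. The smoothing constants, feature set, and synthetic training data are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

def double_exp_smoothing(x, alpha=0.5, beta=0.3, horizon=1):
    """Double exponential smoothing forecast of a monitored equipment
    indicator (e.g., received optical power); alpha/beta are assumed values."""
    s, b = x[0], x[1] - x[0]
    for v in x[1:]:
        s_prev = s
        s = alpha * v + (1 - alpha) * (s + b)
        b = beta * (s - s_prev) + (1 - beta) * b
    return s + horizon * b

# Hypothetical training data: rows of monitored features, labels 0=normal, 1=failure risk.
rng = np.random.default_rng(3)
X_ok = rng.normal(0.0, 1.0, size=(100, 4))
X_bad = rng.normal(2.0, 1.0, size=(100, 4))
X = np.vstack([X_ok, X_bad])
y = np.r_[np.zeros(100), np.ones(100)]
clf = SVC(kernel="rbf").fit(X, y)

# Forecast each feature one step ahead with DES, then classify the forecast.
history = rng.normal(0.0, 1.0, size=(30, 4))
forecast = np.array([double_exp_smoothing(history[:, j]) for j in range(4)])
print("predicted failure risk:", int(clf.predict(forecast[None, :])[0]))
```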
Smoke regions extraction based on two steps segmentation and motion detection in early fire
NASA Astrophysics Data System (ADS)
Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan
2018-03-01
To address the problems of video-based smoke detection in early fire, this paper proposes a method to extract suspected smoke regions by combining two-step segmentation and motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation method. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on 6 test videos with smoke. The experimental results show the effectiveness of the proposed method over visual observation.
Light Weight MP3 Watermarking Method for Mobile Terminals
NASA Astrophysics Data System (ADS)
Takagi, Koichi; Sakazawa, Shigeyuki; Takishima, Yasuhiro
This paper proposes a novel MP3 watermarking method which is applicable to a mobile terminal with limited computational resources. Considering that in most cases the embedded information is copyright information or metadata, which should be extracted before playing back audio contents, the watermark detection process should be executed at high speed. However, when conventional methods are used with a mobile terminal, it takes a considerable amount of time to detect a digital watermark. This paper focuses on scalefactor manipulation to enable high speed watermark embedding/detection for MP3 audio and also proposes the manipulation method which minimizes audio quality degradation adaptively. Evaluation tests showed that the proposed method is capable of embedding 3 bits/frame information without degrading audio quality and detecting it at very high speed. Finally, this paper describes application examples for authentication with a digital signature.
Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines
NASA Astrophysics Data System (ADS)
Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž
2017-05-01
This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and the optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the aforementioned estimated frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. The developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior to those designed using the existing methods for all analyzed cases. Highlights: • Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders • Transition band frequencies were determined with an innovative method based on the discrete Fourier transform and the short-time Fourier transform • Spectral analyses showed deficiencies of existing methods in determining the FIR filter order • A new method of determining the FIR filter order for processing pressure traces was proposed • The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
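An equiripple FIR low-pass filter of the kind discussed here can be designed with the Parks-McClellan (remez) algorithm once the pass-band and stop-band edges have been read off the spectrum; the sketch below uses assumed band edges, filter order, and sampling rate rather than the study's optimized values.

```python
import numpy as np
from scipy.signal import remez, filtfilt

# Assumed band edges (Hz) taken from a DFT/STFT inspection of the pressure
# trace; these values are placeholders, not results from the paper.
fs = 90_000.0            # assumed in-cylinder pressure sampling rate
f_pass, f_stop = 3_000.0, 4_500.0
numtaps = 101            # the filter order would be optimized in the paper

# Equiripple (Parks-McClellan) low-pass FIR: pass band up to f_pass,
# stop band from f_stop to the Nyquist frequency.
taps = remez(numtaps, [0, f_pass, f_stop, fs / 2], [1, 0], fs=fs)

# Zero-phase filtering of a synthetic noisy pressure-like signal.
t = np.arange(0, 0.05, 1 / fs)
pressure = np.sin(2 * np.pi * 200 * t) \
    + 0.1 * np.random.default_rng(4).normal(size=t.size)
smoothed = filtfilt(taps, [1.0], pressure)
```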
Wang, Shijun; Yao, Jianhua; Liu, Jiamin; Petrick, Nicholas; Van Uitert, Robert L.; Periaswamy, Senthil; Summers, Ronald M.
2009-01-01
Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice—Once supine and once prone—to improve the sensitivity for polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomical salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in a polyp location between supine and prone scans by 67.6%, from 46.27±52.97 to 14.98 mm±11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for the colon centerline registration compared to the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant to centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline. PMID:20095272
Automated segmentation of serous pigment epithelium detachment in SD-OCT images
NASA Astrophysics Data System (ADS)
Sun, Zhuli; Shi, Fei; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian
2015-03-01
Pigment epithelium detachment (PED) is an important clinical manifestation of multiple chorio-retinal disease processes, which can cause the loss of central vision. A 3-D method is proposed to automatically segment serous PED in SD-OCT images. The proposed method consists of five steps: first, a curvature anisotropic diffusion filter is applied to remove speckle noise. Second, the graph search method is applied for abnormal retinal layer segmentation associated with retinal pigment epithelium (RPE) deformation. During this process, Bruch's membrane, which is not visible in the SD-OCT images, is estimated with the convex hull algorithm. Third, the foreground and background seeds are automatically obtained from the retinal layer segmentation result. Fourth, the serous PED is segmented based on the graph cut method. Finally, a post-processing step is applied to remove false positive regions based on mathematical morphology. The proposed method was tested on 20 SD-OCT volumes from 20 patients diagnosed with serous PED. The average true positive volume fraction (TPVF), false positive volume fraction (FPVF), dice similarity coefficient (DSC) and positive predictive value (PPV) are 97.19%, 0.03%, 96.34% and 95.59%, respectively. Linear regression analysis shows a strong correlation (r = 0.975) between the segmented PED volumes and the ground truth labeled by an ophthalmology expert. The proposed method can provide clinicians with accurate quantitative information, including the shape, size and position of the PED regions, which can assist diagnosis and treatment.
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
A novel method for overlapping community detection using Multi-objective optimization
NASA Astrophysics Data System (ADS)
Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa
2018-09-01
The problem of community detection, as one of the most important applications of network science, can be addressed effectively by multi-objective optimization. In this paper, we present a novel, efficient method based on this approach. In addition, the idea of using all Pareto fronts to detect overlapping communities is introduced. The proposed method has two main advantages compared to other multi-objective optimization based approaches. The first advantage is scalability, and the second is the ability to find overlapping communities. Unlike most existing works, the proposed method is able to find overlapping communities effectively. The new algorithm works by extracting appropriate communities from all the Pareto optimal solutions, instead of choosing a single optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed method performs better in comparison with other methods.
Initial Results in Using a Self-Coherence Method for Detecting Sustained Oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Dagle, Jeffery E.
2015-01-01
This paper develops a self-coherence method for detecting sustained oscillations using phasor measurement unit (PMU) data. Sustained oscillations decrease system performance and introduce potential reliability issues. Timely detection of the oscillations at an early stage provides the opportunity for taking remedial action. Using high-speed time-synchronized PMU data, this paper details a self-coherence method for detecting sustained oscillations, even when the oscillation amplitude is lower than the ambient noise. Simulation and field measurement data are used to evaluate the proposed method's performance. It is shown that the proposed method can detect sustained oscillations and estimate oscillation frequencies at a low signal-to-noise ratio. Comparison with a power spectral density method also shows that the proposed self-coherence method performs better. Index Terms: coherence, power spectral density, phasor measurement unit (PMU), oscillations, power system dynamics
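The core self-coherence computation can be sketched as the magnitude-squared coherence between a PMU channel and a delayed copy of itself, since a sustained oscillation remains coherent with its own delayed version while ambient noise does not. The delay, window length, and synthetic signal below are assumed values, not the paper's settings.

```python
import numpy as np
from scipy.signal import coherence

def self_coherence(pmu_signal, fs, delay_s=5.0, nperseg=1024):
    """Self-coherence spectrum between a PMU channel and a delayed copy of
    itself.  Peaks in the spectrum flag candidate sustained-oscillation
    frequencies even when the oscillation is buried in ambient noise."""
    d = int(round(delay_s * fs))
    x, y = pmu_signal[:-d], pmu_signal[d:]
    return coherence(x, y, fs=fs, nperseg=nperseg)

# synthetic example: a 0.8 Hz forced oscillation buried in ambient noise
fs, T = 30.0, 600.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(5)
sig = 0.05 * np.sin(2 * np.pi * 0.8 * t) + rng.normal(scale=0.2, size=t.size)
f, Cxy = self_coherence(sig, fs)
print(f[np.argmax(Cxy)])   # expected near 0.8 Hz
```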
Shariat, M H; Gazor, S; Redfearn, D
2015-08-01
Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is an extremely costly public health problem. Catheter-based ablation is a common minimally invasive procedure to treat AF. Contemporary mapping methods are highly dependent on the accuracy of anatomic localization of rotor sources within the atria. In this paper, using simulated atrial intracardiac electrograms (IEGMs) during AF, we propose a computationally efficient method for localizing the tip of the electrical rotor with an Archimedean/arithmetic spiral wavefront. The proposed method deploys the locations of electrodes of a catheter and their IEGMs activation times to estimate the unknown parameters of the spiral wavefront including its tip location. The proposed method is able to localize the spiral as soon as the wave hits three electrodes of the catheter. Our simulation results show that the method can efficiently localize the spiral wavefront that rotates either clockwise or counterclockwise.
Multi person detection and tracking based on hierarchical level-set method
NASA Astrophysics Data System (ADS)
Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid
2018-04-01
In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are encoded in a covariance matrix used as a region descriptor. The present method is fully automated, without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using the narrow band technique. Experiments performed on challenging video sequences show the effectiveness of the proposed method.
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate, which is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. The experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method can filter the noise of chaotic signals well and recover the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao
2011-05-01
According to the minimum zone condition, a method for evaluating the profile error of the Archimedes helicoid surface based on a genetic algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the surface equation are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, the Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method is capable of correctly evaluating the profile error of the Archimedes helicoid surface and satisfies the evaluation standard of the minimum zone method. It can be applied to the measured profile error data of complex surfaces obtained by three-coordinate measuring machines (CMMs).
Two new modified Gauss-Seidel methods for linear system with M-matrices
NASA Astrophysics Data System (ADS)
Zheng, Bing; Miao, Shu-Xin
2009-12-01
In 2002, Kotakemori et al. proposed the modified Gauss-Seidel (MGS) method for solving linear systems with a preconditioner [H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner, J. Comput. Appl. Math. 145 (2002) 373-378]. Since this preconditioner is constructed from only the largest element on each row of the upper triangular part of the coefficient matrix, the preconditioning effect is not observed on the nth row. In the present paper, to deal with this drawback, we propose two new preconditioners. The convergence and comparison theorems of the modified Gauss-Seidel methods with these two preconditioners for solving the linear system are established. The convergence rates of the newly proposed preconditioned methods are compared. In addition, numerical experiments are used to show the effectiveness of the new MGS methods.
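For context, the sketch below shows a plain Gauss-Seidel iteration and how a left preconditioner of the general (I + S) type is applied before iterating; the particular S used here is only illustrative and is not one of the two new preconditioners proposed in the paper.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, n_iter=200, tol=1e-10):
    """Plain Gauss-Seidel iteration for Ax = b (A is assumed to have the
    required convergence properties, e.g. an M-matrix)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

def preconditioned_gauss_seidel(A, b, P, **kw):
    """Modified Gauss-Seidel: apply a left preconditioner P and run
    Gauss-Seidel on the preconditioned system PAx = Pb."""
    return gauss_seidel(P @ A, P @ b, **kw)

# small M-matrix example with a generic (I + S)-type preconditioner sketch
A = np.array([[4.0, -1.0, -1.0],
              [-2.0, 5.0, -1.0],
              [-1.0, -2.0, 6.0]])
b = np.array([2.0, 2.0, 3.0])
S = np.triu(-A / np.diag(A)[:, None], k=1)   # scaled upper off-diagonal part
P = np.eye(3) + S                            # illustrative, not the paper's preconditioner
print(preconditioned_gauss_seidel(A, b, P))
```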
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for two-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a from BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with four classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with support vector machines, outperforms CSP-based feature extraction methods on the experimental dataset.
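For reference, the standard two-class CSP that the abstract builds on can be written in a few lines: average the normalized spatial covariances per class, solve a generalized eigenproblem, and keep the extreme eigenvectors as spatial filters. This is the baseline method, not the proposed ACPC extension; trial shapes and the number of filter pairs are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Standard two-class CSP.  trials_* have shape
    (n_trials, n_channels, n_samples).  Returns 2*n_pairs spatial filters."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))        # normalized spatial covariance
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # generalized eigenproblem Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    keep = np.r_[order[:n_pairs], order[-n_pairs:]]
    return eigvecs[:, keep].T                    # filters as rows

def csp_features(trial, W):
    """Log-variance features of a single trial projected on the CSP filters."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

# toy usage with synthetic trials (8 channels, 250 samples per trial)
rng = np.random.default_rng(8)
trials_a = rng.normal(size=(20, 8, 250))
trials_b = rng.normal(size=(20, 8, 250)) * np.linspace(0.5, 1.5, 8)[None, :, None]
W = csp_filters(trials_a, trials_b)
print(csp_features(trials_a[0], W))
```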
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus
2014-01-01
In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of the Taylor series method with mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
Full-field stress determination in photoelasticity with phase shifting technique
NASA Astrophysics Data System (ADS)
Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng
2018-04-01
Photoelasticity is an effective method for evaluating the stress and its spatial variations within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference scheme is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined over the full field through phase shifting. Then, through bicubic interpolation and the modified shear-difference method, the internal stress is calculated starting from a point on a free boundary along its normal direction. A method to reduce the integration error in the shear-difference scheme is proposed and compared to existing methods; the integration error is reduced when using theoretical photoelastic parameters to calculate the stress component at the same points. Results show that when the value of Δx/Δy approaches one, the error is minimal, and although the interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and a ring subjected to diametric loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.
NASA Astrophysics Data System (ADS)
Bolduc, A.; Gauthier, P.-A.; Berry, A.
2017-12-01
While perceptual evaluation and sound quality testing with jury are now recognized as essential parts of acoustical product development, they are rarely implemented with spatial sound field reproduction. Instead, monophonic, stereophonic or binaural presentations are used. This paper investigates the workability and interest of a method to use complete vibroacoustic engineering models for auralization based on 2.5D Wave Field Synthesis (WFS). This method is proposed in order that spatial characteristics such as directivity patterns and direction-of-arrival are part of the reproduced sound field while preserving the model complete formulation that coherently combines frequency and spatial responses. Modifications to the standard 2.5D WFS operators are proposed for extended primary sources, affecting the reference line definition and compensating for out-of-plane elementary primary sources. Reported simulations and experiments of reproductions of two physically-accurate vibroacoustic models of thin plates show that the proposed method allows for an effective reproduction in the horizontal plane: Spatial and frequency domains features are recreated. Application of the method to the sound rendering of a virtual transmission loss measurement setup shows the potential of the method for use in virtual acoustical prototyping for jury testing.
Mastication Evaluation With Unsupervised Learning: Using an Inertial Sensor-Based System.
Lucena, Caroline Vieira; Lacerda, Marcelo; Caldas, Rafael; De Lima Neto, Fernando Buarque; Rativa, Diego
2018-01-01
There is a direct relationship between the prevalence of musculoskeletal disorders of the temporomandibular joint and orofacial disorders. A well-elaborated analysis of jaw movements provides relevant information for healthcare professionals to reach a diagnosis. Different approaches have been explored to track jaw movements so that mastication analysis becomes less subjective; however, existing methods remain highly subjective, and the quality of the assessment depends largely on the experience of the health professional. In this paper, an accurate and non-invasive method based on a commercial low-cost inertial sensor (MPU6050) to measure jaw movements is proposed. The jaw-movement feature values are compared with those obtained from clinical analysis, showing no statistically significant difference between the two methods. Moreover, we propose unsupervised approaches to cluster the mastication patterns of healthy subjects and simulated patients with facial trauma. Two techniques were used to instantiate the method: Kohonen's Self-Organizing Maps and K-Means clustering. Both algorithms perform very well on jaw-movement data, showing encouraging results and the potential to support a full assessment of masticatory function. The proposed method can be applied in real time, providing relevant dynamic information for healthcare professionals.
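A minimal sketch of the unsupervised clustering step described above, using K-Means (a Self-Organizing Map would be the other option the abstract names). The feature values and group means below are hypothetical stand-ins for the inertial-sensor measures, not data from the study.

```python
# Sketch: cluster jaw-movement feature vectors into two mastication patterns.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical features, e.g. amplitude, chewing rate, lateral excursion.
healthy = rng.normal([1.0, 0.8, 0.3], 0.1, size=(20, 3))
trauma = rng.normal([0.6, 0.5, 0.6], 0.1, size=(20, 3))
features = np.vstack([healthy, trauma])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # two clusters, ideally separating the two mastication patterns
```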
Essential protein discovery based on a combination of modularity and conservatism.
Zhao, Bihai; Wang, Jianxin; Li, Xueyong; Wu, Fang-Xiang
2016-11-01
Essential proteins are indispensable for the survival of a living organism and play important roles in the emerging field of synthetic biology. Many computational methods have been proposed to identify essential proteins by using the topological features of interactome networks. However, most of these methods ignore the intrinsic biological meaning of proteins. Research shows that essentiality is tied not only to the protein or gene itself, but also to the molecular modules to which that protein belongs. The results of this study reveal the modularity of essential proteins. On the other hand, essential proteins are more evolutionarily conserved than nonessential proteins and frequently bind each other; that is to say, conservatism is another important feature of essential proteins. Multiple networks are constructed by integrating protein-protein interaction (PPI) networks, time-course gene expression data and protein domain information. Based on these networks, a new essential protein identification method is proposed that combines the modularity and conservatism of proteins. Experimental results show that the proposed method outperforms other essential protein identification methods in terms of the number of essential proteins among the top-ranked candidates. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from a single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed SoH estimation method. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be largely reduced. This method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
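A minimal sketch of the general IC-analysis pipeline described above: compute a Gaussian-smoothed IC curve, extract a feature of interest, and fit a linear capacity-vs-feature regression. The synthetic FOI positions, capacities, filter width, and peak-position feature are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: Gaussian-smoothed incremental capacity (IC) curve and a linear
# capacity-vs-FOI regression on hypothetical data.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def ic_curve(voltage, capacity, sigma=5):
    """Return (V, dQ/dV) with Gaussian smoothing applied to the IC curve."""
    dq_dv = np.gradient(capacity, voltage)           # raw incremental capacity
    return voltage, gaussian_filter1d(dq_dv, sigma)  # noise-suppressed IC curve

def feature_of_interest(voltage, ic):
    """Illustrative FOI: voltage position of the main IC peak."""
    return voltage[np.argmax(ic)]

# Hypothetical FOI positions and measured capacities from several aged cells.
foi = np.array([3.52, 3.55, 3.58, 3.61, 3.64])       # V (assumed)
cap = np.array([2.95, 2.88, 2.80, 2.71, 2.63])       # Ah (assumed)

# Linear regression capacity = a * FOI + b, then a capacity estimate for a new cell.
a, b = np.polyfit(foi, cap, 1)
print("estimated capacity at FOI = 3.57 V:", a * 3.57 + b)
```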
Electrohydrodynamic assisted droplet alignment for lens fabrication by droplet evaporation
NASA Astrophysics Data System (ADS)
Wang, Guangxu; Deng, Jia; Guo, Xing
2018-04-01
Lens fabrication by droplet evaporation has attracted much attention because the approach is simple and moldless. Droplet position accuracy is a critical parameter in this approach, so it is of great importance to use accurate methods to align the droplet position. In this paper, we propose an electrohydrodynamic (EHD) assisted droplet alignment method. An electrostatic force is induced at the interface between materials to overcome surface tension and gravity; the deviation of the droplet position from the center region is eliminated and alignment is successfully realized. We demonstrate the capability of the proposed method theoretically and experimentally. First, we built a simulation model coupling the three-phase flow formulations with the EHD equations to study the three-phase flow process in an electric field. Results show that it is the uneven electric field distribution that leads to the relative movement of the droplet. Then, we conducted experiments to verify the method; the experimental results are consistent with the numerical simulations. Moreover, we successfully fabricated a crater lens by applying the proposed method. A light-emitting diode module packaged with the fabricated crater lens shows a significant adjustment of the light intensity distribution compared with a spherical cap lens.
NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
A controller that uses PID parameters requires a good tuning method in order to improve the control system performance. PID tuning methods fall into two categories, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers have integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, both implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiments also show that the proposed tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method to the hydraulic positioning system.
Aggarwal, Priya; Gupta, Anubha
2017-12-01
A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with a proposed method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of the reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to existing methods, the DTSR method shows promising potential with an improvement of 10-12 dB in PSNR at acceleration factors up to 3.5 on resting-state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fetal ECG extraction via Type-2 adaptive neuro-fuzzy inference systems.
Ahmadieh, Hajar; Asl, Babak Mohammadzadeh
2017-04-01
We propose a noninvasive method for separating the fetal ECG (FECG) from the maternal ECG (MECG) by using Type-2 adaptive neuro-fuzzy inference systems. The method can extract FECG components from the abdominal signal by using one abdominal channel, which contains the maternal and fetal cardiac signals and other environmental noise, and one chest channel. The proposed algorithm detects the nonlinear dynamics of the mother's body, so the components of the MECG are estimated from the abdominal signal. By subtracting the estimated maternal cardiac signal from the abdominal signal, the fetal cardiac signal can be extracted. This algorithm was applied to synthetic ECG signals generated based on the models developed by McSharry et al. and Behar et al. and also to the DaISy real database. In environments with high uncertainty, our method performs better than the Type-1 fuzzy method. Specifically, in the evaluation of the algorithm with synthetic data based on the McSharry model, for input signals with an SNR of -5 dB, the SNR of the extracted FECG was improved by 38.38% in comparison with the Type-1 fuzzy method. The results also show that increasing the uncertainty or decreasing the input SNR increases the percentage improvement in the SNR of the extracted FECG. For instance, when the SNR of the input signal decreases to -30 dB, our proposed algorithm improves the SNR of the extracted FECG by 71.06% with respect to the Type-1 fuzzy method. The same results were obtained on synthetic data based on the Behar model. Our results on the real database reflect the success of the proposed method in separating the maternal and fetal heart signals even when their waves overlap in time. Moreover, the proposed algorithm was applied to simulated fetal ECG with ectopic beats and achieved good results in separating the FECG from the MECG. The results show the superiority of the proposed Type-2 neuro-fuzzy inference method over the Type-1 neuro-fuzzy inference and polynomial networks methods, which is due to its better capability to capture the nonlinearities of the model. Copyright © 2017 Elsevier B.V. All rights reserved.
Lee, Dong-Gi; Shin, Hyunjung
2017-05-18
Recently, research on human disease networks has succeeded in aiding the understanding of relationships between various diseases. In most disease networks, however, the relationship between diseases has been represented simply as an association, which makes it difficult to identify prior diseases and their influence on posterior diseases. In this paper, we propose a causal disease network that captures disease causality through text mining of the biomedical literature. To identify the causality between diseases, the proposed method includes two schemes: the first is the lexicon-based causality term strength, which provides the causal strength of a variety of causality terms based on lexicon analysis. The second is the frequency-based causality strength, which determines the direction and strength of causality based on document and clause frequencies in the literature. We applied the proposed method to 6,617,833 PubMed articles and chose 195 diseases to construct a causal disease network. From all possible pairs of disease nodes in the network, 1011 causal pairs covering 149 diseases were extracted. The resulting network was compared with that of a previous study. In terms of both coverage and quality, the proposed method showed superior results: it identified 2.7 times more causalities and showed higher correlation with associated diseases than the existing method. The novelty of this research is that the proposed method circumvents the time and cost limitations of testing all possible causalities in biological experiments, and it advances text mining techniques by defining the concept of causality term strength.
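A sketch of the frequency-based idea: the direction and strength of causality between two diseases can be scored from directional co-mention counts in clauses. The disease pairs, counts, and the exact scoring function below are hypothetical illustrations, not the paper's formulation.

```python
# Sketch of a frequency-based causality score between disease pairs, assuming
# pre-computed counts of directional co-mentions ("A causes B") in clauses.
from collections import defaultdict

# Hypothetical clause-level counts: (cause, effect) -> number of supporting clauses.
clause_counts = defaultdict(int, {
    ("obesity", "type 2 diabetes"): 120,
    ("type 2 diabetes", "obesity"): 15,
    ("hypertension", "stroke"): 90,
})

def causality_strength(a, b, counts):
    """Directional strength in [0, 1]: fraction of A<->B clauses pointing A->B."""
    forward, backward = counts[(a, b)], counts[(b, a)]
    total = forward + backward
    return forward / total if total else 0.0

print(causality_strength("obesity", "type 2 diabetes", clause_counts))  # ~0.89
```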
Population clustering based on copy number variations detected from next generation sequencing data.
Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping
2014-08-01
Copy number variations (CNVs) can be used as significant biomarkers, and next generation sequencing (NGS) provides high-resolution detection of these CNVs. However, how to extract features from CNVs and further apply them to genomic studies such as population clustering has become a big challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into a source matrix and a weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs in each sample. Therefore, using NMF of CNVs, one can differentiate samples from different ethnic groups, i.e. perform population clustering. To validate the approach, we applied it to the analysis of both simulation data and two real data sets from the 1000 Genomes Project. The results on simulation data demonstrate that the proposed method can recover the true common CNVs with high quality. The results on the first real data analysis show that the proposed method can cluster two family trios with different ancestries into two ethnic groups, and the results on the second real data analysis show that the proposed method can be applied to whole genomes with a large sample size consisting of multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
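A minimal sketch of the NMF step on a CNV feature matrix, assuming synthetic non-negative counts for two groups; the cluster label is taken as the index of each sample's dominant NMF component. The data generation and factorization settings are illustrative assumptions.

```python
# Sketch: decompose a samples x CNV-regions matrix with NMF and cluster samples
# by their dominant component in the weight matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical feature matrix: 20 samples x 50 CNV regions, two groups with
# different common-CNV profiles plus noise (all entries non-negative).
group_profiles = rng.integers(0, 4, size=(2, 50)).astype(float)
labels_true = np.repeat([0, 1], 10)
X = group_profiles[labels_true] + rng.random((20, 50))

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)       # weight matrix: per-sample loading on each group
H = model.components_            # source matrix: common CNV profile of each group

labels_pred = W.argmax(axis=1)   # assign each sample to its dominant component
print(labels_pred)
```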
Design of tree structured matched wavelet for HRV signals of menstrual cycle.
Rawal, Kirti; Saini, B S; Saini, Indu
2016-07-01
An algorithm is presented for designing a new class of wavelets matched to the Heart Rate Variability (HRV) signals of the menstrual cycle. The proposed wavelets are used to find HRV variations between phases of the menstrual cycle. The method finds the signal-matching characteristics by minimising the shape feature error using the Least Mean Square method. The proposed filter banks are used for the decomposition of the HRV signal. For reconstructing the original signal, a tree-structure method is used: the decomposed sub-bands are selected based on their energy, so that instead of using all sub-bands for reconstruction, only sub-bands with high energy content are used. Thus, a lower number of sub-bands is required for reconstruction of the original signal, which shows the effectiveness of the newly created filter coefficients. Results show that the proposed wavelets are able to differentiate HRV variations between phases of the menstrual cycle more accurately than standard wavelets.
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming
2016-12-01
An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigation information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, in which the difference between the computed star vector and the measured star vector serves as the measurement. With the aid of a star sensor and the two positions, the attitude and position errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitude and position errors converge to within 0.07″ and 0.1 m, respectively, when the given initial position error is 1 km and the attitude error is 10°. These results show that the proposed method can accomplish alignment, positioning and calibration simultaneously. Thus the proposed two-position initialization method has potential for application in lunar rover navigation.
Novel techniques for enhancement and segmentation of acne vulgaris lesions.
Malik, A S; Humayun, J; Kamel, N; Yap, F B-B
2014-08-01
More than 99% of acne patients suffer from acne vulgaris. When diagnosing the severity of acne vulgaris lesions, dermatologists have observed inter-rater and intra-rater variability in diagnosis results. This is because, during assessment, identifying and counting lesion types is a tedious job for dermatologists. To make the assessment objective and easier for dermatologists, an automated system based on image processing methods is proposed in this study. There are two main objectives: (i) to develop an algorithm for the enhancement of various acne vulgaris lesions; and (ii) to develop a method for the segmentation of enhanced acne vulgaris lesions. For the first objective, an algorithm is developed based on the theory of high dynamic range (HDR) images. The proposed algorithm uses a local rank transform to generate HDR images from a single acne image, followed by a log transformation. Then, segmentation is performed by clustering the pixels based on the Mahalanobis distance of each pixel from spectral models of acne vulgaris lesions. Two metrics are used to evaluate the enhancement of acne vulgaris lesions, i.e., the contrast improvement factor (CIF) and image contrast normalization (ICN). The proposed algorithm is compared with two other methods and shows better results than both based on CIF and ICN. In addition, sensitivity and specificity are calculated for the segmentation results; the proposed segmentation method shows higher sensitivity and specificity than the other methods. This article specifically discusses contrast enhancement and segmentation for an automated diagnosis system of acne vulgaris lesions. The results are promising and can be used for further classification of acne vulgaris lesions for final grading. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Marker-free motion correction in weight-bearing cone-beam CT of the knee joint.
Berger, M; Müller, K; Aichert, A; Unberath, M; Thies, J; Choi, J-H; Fahrig, R; Maier, A
2016-03-01
To allow for purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint. Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of the left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in the supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in a standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of individual bones is combined into global motion fields using thin-plate-spline extrapolation, which can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as the similarity measure for the 2D/3D registration. The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality in the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs, where the authors observed slight streaking with the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction, 53.3 and 56.1 for the proposed method using GC and NGI, respectively, and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan. The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to that of the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management.
Adib, Mani; Cretu, Edmond
2013-01-01
We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, GVS current distribution throughout the scalp generates an artifact on EEG signals. We need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current in the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts, compared to the others. Using the proposed method, a higher signal to artifact ratio of −1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786
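A minimal sketch of the band-wise regression idea, assuming synthetic signals: decompose the contaminated EEG channel and the injected GVS current with a wavelet transform, fit a per-band regression coefficient by least squares, and subtract the scaled reference from each band. The wavelet, level, and signals are illustrative choices, not the study's configuration.

```python
# Sketch: wavelet-domain regression of the EEG onto the GVS reference signal.
import numpy as np
import pywt

def remove_gvs_artifact(eeg, gvs_current, wavelet="db4", level=4):
    eeg_coeffs = pywt.wavedec(eeg, wavelet, level=level)
    gvs_coeffs = pywt.wavedec(gvs_current, wavelet, level=level)
    cleaned = []
    for c_eeg, c_gvs in zip(eeg_coeffs, gvs_coeffs):
        # Per-band least-squares coefficient of the EEG onto the GVS reference.
        beta = np.dot(c_gvs, c_eeg) / (np.dot(c_gvs, c_gvs) + 1e-12)
        cleaned.append(c_eeg - beta * c_gvs)   # subtract the estimated artifact
    return pywt.waverec(cleaned, wavelet)[: len(eeg)]

# Hypothetical signals: 4 s at 250 Hz, EEG = brain activity + scaled GVS current.
t = np.arange(0, 4, 1 / 250)
gvs = np.sin(2 * np.pi * 1.0 * t)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + 2.0 * gvs + 0.1 * np.random.randn(t.size)
print(remove_gvs_artifact(eeg, gvs).shape)
```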
NASA Astrophysics Data System (ADS)
Ghaderi, A. H.; Darooneh, A. H.
The behavior of nonlinear systems can be analyzed by artificial neural networks, and air temperature change is one example of such a nonlinear system. In this work, a new neural network method is proposed for forecasting the maximum air temperature in two cities. In this method, the regular graph concept is used to construct partially connected neural networks that have regular structures. The learning results of a fully connected ANN and of networks built with the proposed method are compared; in some cases, the proposed method gives better results than the conventional ANN. After specifying the best network, the effect of the number of input patterns on the prediction is studied, and the results show that increasing the number of input patterns has a direct effect on the prediction accuracy.
A multi-layer steganographic method based on audio time domain segmented and network steganography
NASA Astrophysics Data System (ADS)
Xue, Pengfei; Liu, Hanlin; Hu, Jingsong; Hu, Ronggui
2018-05-01
Both audio steganography and network steganography belong to modern steganography. Audio steganography has a large capacity, while network steganography is difficult to detect or track. In this paper, a multi-layer steganographic method based on the collaboration of the two (MLS-ATDSS&NS) is proposed. MLS-ATDSS&NS is realized in two covert layers (an audio steganography layer and a network steganography layer) in two steps: a new audio time domain segmented steganography (ATDSS) method is proposed in step 1, and the collaboration method of ATDSS and NS is proposed in step 2. The experimental results show that the advantage of MLS-ATDSS&NS over other methods is a better trade-off between capacity, anti-detectability and robustness, that is, higher steganographic capacity, better anti-detectability and stronger robustness.
Blurred image recognition by Legendre moment invariants
Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis
2010-01-01
Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for blurred images are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different levels of image noise. A comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
NASA Astrophysics Data System (ADS)
Hu, Jinyan; Li, Li; Yang, Yunfeng
2017-06-01
A hierarchical and successive approximation registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. There are two major novelties in the proposed method. First, hierarchical registration based on the wavelet transform is used: the approximation image of the wavelet transform is selected as the object to be registered. Second, a successive approximation registration method is used to accomplish the non-rigid medical image registration, i.e. the corresponding local regions of the image pair are first registered roughly based on thin-plate splines, and the current rough registration result is then selected as the object to be registered in the following registration step. Experiments show that the proposed method is effective for the registration of non-rigid medical images.
Multiple-image hiding using super resolution reconstruction in high-frequency domains
NASA Astrophysics Data System (ADS)
Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua
2017-12-01
In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to assure the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks, because of the memory-distributed property of elemental images, and (2) increased quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to attacks such as Gaussian noise and JPEG compression.
NASA Astrophysics Data System (ADS)
Cai, Jiaxin; Chen, Tingting; Li, Yan; Zhu, Nenghui; Qiu, Xuan
2018-03-01
In order to analyze the fibrosis stage and inflammatory activity grade of chronic hepatitis C, a novel classification method based on collaborative representation (CR) with a smoothly clipped absolute deviation (SCAD) penalty term, called the CR-SCAD classifier, is proposed for pattern recognition. An auto-grading system based on the CR-SCAD classifier is then introduced for the prediction of the fibrosis stage and inflammatory activity grade of chronic hepatitis C. The proposed method has been tested on 123 clinical cases of chronic hepatitis C based on serological indexes. Experimental results show that the proposed method outperforms state-of-the-art baselines for the classification of the fibrosis stage and inflammatory activity grade of chronic hepatitis C.
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract local robust features in infrared face images, LBP is chosen to obtain the composition of micro-patterns in sub-blocks. Based on statistical test theory, the Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns that are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, discrete cosine transform (DCT) or principal component analysis (PCA).
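A minimal sketch of Kruskal-Wallis selection over LBP-histogram bins, assuming the per-block LBP histograms have already been computed for each face image; the random histograms, subject labels, and the number of retained bins are hypothetical.

```python
# Sketch: rank LBP-histogram bins by Kruskal-Wallis p-value across subjects and
# keep the most discriminative bins as the reduced feature vector.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
# Hypothetical data: 30 images x 59 LBP-histogram bins, 3 subjects (10 images each).
hists = rng.random((30, 59))
subject = np.repeat([0, 1, 2], 10)

p_values = []
for b in range(hists.shape[1]):
    groups = [hists[subject == s, b] for s in np.unique(subject)]
    p_values.append(kruskal(*groups).pvalue)   # small p => bin separates subjects

selected = np.argsort(p_values)[:20]           # keep the 20 most discriminative bins
reduced = hists[:, selected]                   # reduced feature vectors for matching
print(reduced.shape)
```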
Fast cat-eye effect target recognition based on saliency extraction
NASA Astrophysics Data System (ADS)
Li, Li; Ren, Jianlin; Wang, Xingbin
2015-09-01
Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective attention property that helps to find salient targets in complex unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF), which combines traditional cat-eye target recognition with the selective characteristics of visual attention. Furthermore, parallel processing enables fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.
A multi-frequency iterative imaging method for discontinuous inverse medium problem
NASA Astrophysics Data System (ADS)
Zhang, Lei; Feng, Lixin
2018-06-01
The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and a fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented with respect to frequency, from low to high. We also discuss the initial guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.
Finger vein recognition using local line binary pattern.
Rosdi, Bakhtiar Affendi; Shing, Chai Wuh; Suandi, Shahrel Azmin
2011-01-01
In this paper, a personal verification method using the finger vein is presented. The finger vein can be considered more secure than other hand-based biometric traits such as the fingerprint and palm print because the features are inside the human body. In the proposed method, a new texture descriptor called the local line binary pattern (LLBP) is utilized as the feature extraction technique. The neighbourhood shape in LLBP is a straight line, unlike in the local binary pattern (LBP), which uses a square shape. Experimental results show that the proposed method using LLBP has better performance than previous methods using LBP and the local derivative pattern (LDP).
New method for characterization of retroreflective materials
NASA Astrophysics Data System (ADS)
Junior, O. S.; Silva, E. S.; Barros, K. N.; Vitro, J. G.
2018-03-01
The present article proposes a new method for analyzing the properties of retroreflective materials using a goniophotometer. The aim is to establish a higher-resolution test method with a wide range of viewing angles, taking into account a three-dimensional analysis of the retroreflection of the tested material. The validation was performed by collecting data from specimens of materials used in safety clothing and road signs. The results obtained by the proposed method are comparable to those obtained by the normative protocols, representing an advance for the metrology of these materials.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod
2010-04-01
For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. This paper proposes a method based on high-order instantaneous moments to estimate the phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
A semi-automated method for bone age assessment using cervical vertebral maturation.
Baptista, Roberto S; Quaglio, Camila L; Mourad, Laila M E H; Hummel, Anderson D; Caetano, Cesar Augusto C; Ortolani, Cristina Lúcia F; Pisa, Ivan T
2012-07-01
To propose a semi-automated pattern classification method to predict an individual's stage of growth based on the morphologic characteristics described in the modified cervical vertebral maturation (CVM) method of Baccetti et al. A total of 188 lateral cephalograms were collected, digitized, evaluated manually, and grouped into cervical stages by two expert examiners. Landmarks were located on each image and measured. Three pattern classifiers based on the Naïve Bayes algorithm were built and assessed using a software program. The classifier with the greatest accuracy according to the weighted kappa test was considered best. The best classifier showed a weighted kappa coefficient of 0.861 ± 0.020. If an adjacent estimated pre-stage or post-stage value was taken to be acceptable, the classifier would show a weighted kappa coefficient of 0.992 ± 0.019. The results show that the proposed semi-automated pattern classification method can help orthodontists identify the stage of CVM. However, additional studies are needed before this semi-automated classification method for CVM assessment can be implemented in clinical practice.
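A minimal sketch of the classification and evaluation pattern described above: a Naïve Bayes classifier trained on landmark measurements and scored with a weighted kappa. The stages and measurements below are synthetic placeholders; the cross-validation setup and feature count are assumptions, not the study's protocol.

```python
# Sketch: Naive Bayes staging from cephalometric measurements, evaluated with a
# quadratic-weighted kappa on synthetic data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
stages = rng.integers(1, 7, size=188)                 # CVM stages CS1..CS6
# Hypothetical vertebral measurements loosely correlated with the stage.
X = stages[:, None] + rng.normal(scale=1.5, size=(188, 6))

pred = cross_val_predict(GaussianNB(), X, stages, cv=5)
print("weighted kappa:", cohen_kappa_score(stages, pred, weights="quadratic"))
```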
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved with the image; therefore, the extracted binary patterns of the finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
Rotation invariant deep binary hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent an image's characteristics for fast image retrieval. We propose supervised, rotation-invariant, compact discriminative binary descriptors obtained by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and its rotated version. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
Optimal Placement of Dynamic Var Sources by Using Empirical Controllability Covariance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Huang, Weihong; Sun, Kai
In this paper, the empirical controllability covariance (ECC), which is calculated around the considered operating condition of a power system, is applied to quantify the degree of controllability of system voltages under specific dynamic var source locations. An optimal dynamic var source placement method addressing fault-induced delayed voltage recovery (FIDVR) issues is further formulated as an optimization problem that maximizes the determinant of the ECC. The optimization problem is effectively solved by the NOMAD solver, which implements the mesh adaptive direct search algorithm. The proposed method is tested on an NPCC 140-bus system and the results show that the proposed method with fault-specified ECC can solve the FIDVR issue caused by the most severe contingency with fewer dynamic var sources than the voltage sensitivity index (VSI)-based method. The proposed method with fault-unspecified ECC does not depend on the settings of the contingency and can address more FIDVR issues than the VSI method when placing the same number of SVCs under different fault durations. It is also shown that the proposed method can help mitigate voltage collapse.
Damage Detection Based on Static Strain Responses Using FBG in a Wind Turbine Blade.
Tian, Shaohua; Yang, Zhibo; Chen, Xuefeng; Xie, Yong
2015-08-14
Damage detection in a wind turbine blade enables better operation of the turbine and provides an early alert to destructive events in the blade in order to avoid catastrophic losses. A new non-baseline damage detection method based on Fiber Bragg gratings (FBGs) in a wind turbine blade is developed in this paper. First, the Chi-square distribution is shown to be an effective damage-sensitive feature, which is adopted as the individual information source for the local decision. In order to obtain a global and optimal decision for damage detection, the feature information fusion (FIF) method is proposed to fuse and optimize the information from the individual information sources, and the damage is detected accurately through the global decision. A 13.2 m wind turbine blade with a distributed strain sensor system is then used to demonstrate the feasibility of the proposed method, and the strain energy method (SEM) is used to illustrate its advantage. Finally, the results show that the proposed method delivers encouraging damage detection results for the wind turbine blade.
De novo peptide sequencing using CID and HCD spectra pairs.
Yan, Yan; Kusalik, Anthony J; Wu, Fang-Xiang
2016-10-01
In tandem mass spectrometry (MS/MS), several different fragmentation techniques are possible, including collision-induced dissociation (CID), higher-energy collisional dissociation (HCD), electron-capture dissociation (ECD), and electron transfer dissociation (ETD). When using pairs of spectra for de novo peptide sequencing, the most popular methods are designed for CID (or HCD) and ECD (or ETD) spectra because of the complementarity between them. Less attention has been paid to the use of CID and HCD spectra pairs. In this study, a new de novo peptide sequencing method is proposed for these spectra pairs. This method includes a CID and HCD spectra merging criterion and a parent mass correction step, along with improvements to our previously proposed algorithm for sequencing merged spectra. Three pairs of spectral datasets were used to investigate and compare the performance of the proposed method with other existing methods designed for single-spectrum (HCD or CID) sequencing. Experimental results showed that full-length peptide sequencing accuracy was increased significantly by using spectra pairs in the proposed method, with the highest accuracy reaching 81.31%. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng
2018-01-01
Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting nonlinear dynamic changes in time series, and it can effectively extract nonlinear dynamic fault features from vibration signals of rolling bearings. To overcome the drawback of the coarse-graining process in MPE, an improved MPE method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of parameters on GCMPE and its comparison with MPE are also studied by analyzing simulation data. GCMPE was applied to fault feature extraction from vibration signals of rolling bearings, and then, based on GCMPE, the Laplacian score for feature selection and a particle swarm optimization based support vector machine, a new fault diagnosis method for rolling bearings was put forward in this paper. Finally, the proposed method was applied to analyze experimental data of rolling bearings. The analysis results show that the proposed method can effectively realize the fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.
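A simplified sketch of composite multiscale permutation entropy, i.e. permutation entropy averaged over all starting offsets of the coarse-graining at each scale. The paper's generalized variant (GCMPE) refines the coarse-graining step further, so this only illustrates the general idea; the embedding dimension, scales, and random test signal are assumptions.

```python
# Sketch: permutation entropy and a composite multiscale version of it.
import math
import numpy as np
from collections import Counter

def permutation_entropy(x, m=4):
    patterns = Counter(tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1))
    total = sum(patterns.values())
    probs = np.array([c / total for c in patterns.values()])
    return -np.sum(probs * np.log(probs)) / math.log(math.factorial(m))

def composite_mpe(x, scale, m=4):
    # Average PE over all `scale` possible starting points of the coarse-graining.
    values = []
    for k in range(scale):
        n = (len(x) - k) // scale
        coarse = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        values.append(permutation_entropy(coarse, m))
    return float(np.mean(values))

sig = np.random.randn(4096)                       # stand-in for a vibration signal
print([round(composite_mpe(sig, s), 3) for s in (1, 2, 4, 8)])
```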
SGFSC: speeding the gene functional similarity calculation based on hash tables.
Tian, Zhen; Wang, Chunyu; Guo, Maozu; Liu, Xiaoyan; Teng, Zhixia
2016-11-04
In recent years, many measures of gene functional similarity have been proposed and widely used in many kinds of essential research. These methods are mainly divided into two categories: pairwise approaches and group-wise approaches. However, a common problem with these methods is their time consumption, especially when measuring the gene functional similarities of a large number of gene pairs. The computational efficiency problem is even more prominent for pairwise approaches because they depend on combining semantic similarities. Therefore, the efficient measurement of gene functional similarity remains a challenging problem. To speed up current gene functional similarity calculation methods, a novel two-step computing strategy is proposed: (1) establish a hash table for each method to store essential information obtained from the Gene Ontology (GO) graph, and (2) measure gene functional similarity based on the corresponding hash table. With the help of the hash table, there is no need to traverse the GO graph repeatedly for each method. The analysis of time complexity shows that the computational efficiency of these methods is significantly improved. We also implement a novel Speeding Gene Functional Similarity Calculation tool, namely SGFSC, which is bundled with seven typical measures using our proposed strategy. Further experiments show the great advantage of SGFSC in measuring gene functional similarity on the whole-genome scale. The proposed strategy is successful in speeding up current gene functional similarity calculation methods. SGFSC is an efficient tool that is freely available at http://nclab.hit.edu.cn/SGFSC . The source code of SGFSC can be downloaded from http://pan.baidu.com/s/1dFFmvpZ .
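A minimal sketch of the two-step strategy: pre-compute per-term information from the GO graph once into a hash table (a Python dict), then answer similarity queries directly from it without traversing the graph again. The GO terms, ancestor sets, information-content values, and the Resnik-style/best-match-average combination below are hypothetical placeholders, not SGFSC's seven bundled measures.

```python
# Sketch: hash-table lookup instead of repeated GO-graph traversal.
ancestors = {                       # term -> set of its ancestors (incl. itself)
    "GO:A": {"GO:A", "GO:root"},
    "GO:B": {"GO:B", "GO:A", "GO:root"},
    "GO:C": {"GO:C", "GO:A", "GO:root"},
}
info_content = {"GO:root": 0.0, "GO:A": 1.2, "GO:B": 2.7, "GO:C": 2.4}

def term_similarity(t1, t2):
    """Resnik-style similarity: IC of the most informative common ancestor."""
    common = ancestors[t1] & ancestors[t2]
    return max(info_content[t] for t in common)

def gene_similarity(terms1, terms2):
    """Best-match average over the two genes' annotation sets."""
    best1 = [max(term_similarity(a, b) for b in terms2) for a in terms1]
    best2 = [max(term_similarity(a, b) for a in terms1) for b in terms2]
    return (sum(best1) + sum(best2)) / (len(best1) + len(best2))

print(gene_similarity({"GO:B"}, {"GO:C"}))   # no GO traversal at query time
```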
NASA Astrophysics Data System (ADS)
Goh, C. P.; Ismail, H.; Yen, K. S.; Ratnam, M. M.
2017-01-01
The incremental digital image correlation (DIC) method has been applied in the past to determine strain in materials undergoing large deformation, such as rubber. This method is, however, prone to cumulative errors since the total displacement is determined by combining the displacements of numerous stages of the deformation. In this work, a method for mapping large strains in rubber using DIC in a single step, without the need for a series of deformation images, is proposed. The reference subsets are deformed using deformation factors obtained from the experimentally fitted mean stress-axial stretch ratio curve and the theoretical Poisson function. The deformed reference subsets are then correlated with the deformed image after loading. The recently developed scanner-based digital image correlation (SB-DIC) method was applied to dumbbell rubber specimens to obtain the in-plane displacement fields up to 350% axial strain. Comparison of the mean axial strains determined from the single-step SB-DIC method with those from the incremental SB-DIC method showed an average difference of 4.7%. Two rectangular rubber specimens containing circular and square holes were deformed and analysed using the proposed method, and the resultant strain maps were compared with the results of finite element modeling (FEM). The comparison shows that the proposed single-step SB-DIC method can accurately map the strain distribution in large-deformation materials such as rubber in a much shorter time than the incremental DIC method.
Contrast-dependent saturation adjustment for outdoor image enhancement.
Wang, Shuhang; Cho, Woon; Jang, Jinbeum; Abidi, Mongi A; Paik, Joonki
2017-01-01
Outdoor images captured in bad-weather conditions usually have poor intensity contrast and color saturation, since the light arriving at the camera is severely scattered or attenuated. The task of improving image quality in poor conditions remains a challenge. Existing methods of image quality improvement are usually effective for a small group of images but often fail to produce satisfactory results for a broader variety of images. In this paper, we propose an image enhancement method that is applicable to outdoor images, using content-adaptive contrast improvement as well as contrast-dependent saturation adjustment. The main contribution of this work is twofold: (1) we propose content-adaptive histogram equalization based on the human visual system to improve the intensity contrast; and (2) we introduce a simple yet effective prior for adjusting the color saturation depending on the intensity contrast. The proposed method is tested on different kinds of images and compared with eight state-of-the-art methods: four enhancement methods and four haze removal methods. Experimental results show that the proposed method more effectively improves the visibility and preserves the naturalness of the images than the compared methods.
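A rough sketch of the two ingredients named above: histogram-based contrast improvement on the intensity channel followed by a saturation gain that grows with the achieved contrast. The paper's content-adaptive equalization and saturation prior are more elaborate; the plain equalization, the gain formula, and the random test image here are illustrative assumptions only.

```python
# Sketch: equalize the value channel, then boost saturation in proportion to
# the contrast gained by the equalization.
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def enhance(rgb):                                   # rgb: float image in [0, 1]
    hsv = rgb_to_hsv(rgb)
    v = hsv[..., 2]
    # Global histogram equalization of the value channel (a content-adaptive
    # variant would reshape the histogram before this step).
    hist, bins = np.histogram(v, bins=256, range=(0, 1))
    cdf = np.cumsum(hist) / hist.sum()
    v_eq = np.interp(v, bins[:-1], cdf)
    # Contrast-dependent saturation gain: more added contrast -> more saturation.
    gain = 1.0 + 0.5 * (v_eq.std() - v.std()) / (v.std() + 1e-6)
    hsv[..., 2] = v_eq
    hsv[..., 1] = np.clip(hsv[..., 1] * max(gain, 1.0), 0, 1)
    return hsv_to_rgb(hsv)

img = np.random.rand(64, 64, 3) * 0.4 + 0.3         # hypothetical low-contrast image
print(enhance(img).shape)
```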
Robust path planning for flexible needle insertion using Markov decision processes.
Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong
2018-05-11
A flexible needle has the potential to navigate accurately to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. It then resolves the common needle penetration failure caused by patterns of targets, while the last element addresses the uncertainty in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer a flexible needle within soft phantom tissue and achieves high adaptability in computer simulation.
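A toy sketch of MDP-based planning under motion uncertainty: value iteration on a coarse grid with obstacle penalties and a stochastic "slip" that stands in for unpredictable tissue-needle interaction. The clinical system plans over needle arcs in deformable tissue; the grid world, rewards, and slip probability below are only illustrative assumptions.

```python
# Sketch: value iteration for a grid-world proxy of needle path planning.
import numpy as np

H, W = 10, 10
goal, obstacles = (9, 9), {(4, 4), (4, 5), (5, 4), (6, 7)}
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
gamma, slip = 0.95, 0.2          # slip = probability the needle deviates sideways

def step(s, a):
    r, c = min(max(s[0] + a[0], 0), H - 1), min(max(s[1] + a[1], 0), W - 1)
    return (r, c)

V = np.zeros((H, W))
for _ in range(200):                               # value iteration sweeps
    newV = np.zeros_like(V)
    for r in range(H):
        for c in range(W):
            if (r, c) == goal:
                continue                           # goal is terminal
            q = []
            for a in actions:
                # Intended move with prob 1-slip, perpendicular slips otherwise.
                perp = [(a[1], a[0]), (-a[1], -a[0])]
                outcomes = [(1 - slip, step((r, c), a))] + \
                           [(slip / 2, step((r, c), p)) for p in perp]
                q.append(sum(p * ((-100 if s in obstacles else -1) + gamma * V[s])
                             for p, s in outcomes))
            newV[r, c] = max(q)
    V = newV
print(V[0, 0])                                     # value of the entry point
```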
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares (PLS) method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for the prediction of flue gas components. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden layer nodes of the RBFNN are the extension of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the prediction model of the flue gas components. A near-infrared spectral dataset of the flue gas of natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
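A minimal sketch of the extended-input idea: compute RBF hidden-layer outputs for each spectrum, append them to the original input matrix, and run PLS regression on the extended matrix. The synthetic spectra, the K-means choice of RBF centers, the width rule, and the number of PLS components are illustrative assumptions rather than the paper's settings.

```python
# Sketch: PLS regression on an RBFNN-extended input matrix.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.random((60, 120))                    # hypothetical NIR spectra
y = X[:, 10] ** 2 + 0.5 * X[:, 40] + 0.05 * rng.standard_normal(60)  # nonlinear target

centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_
dists = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
width = dists.mean()
rbf = np.exp(-dists ** 2 / (2 * width ** 2))  # hidden-layer outputs of the RBFNN

X_ext = np.hstack([X, rbf])                   # extended input matrix
model = PLSRegression(n_components=6).fit(X_ext, y)
print("R^2 on training data:", model.score(X_ext, y))
```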
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. © 2012, The International Biometric Society.
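A minimal sketch of the two-step recipe for a one-parameter ODE x'(t) = -theta x(t): smooth the noisy states with a spline, then plug the smoothed states into the trapezoidal discretization and solve a linear least-squares problem for theta. The ODE, noise level, and spline smoothing parameter are illustrative assumptions, not the HIV model of the article.

```python
# Sketch: spline smoothing + trapezoidal-rule estimating equation for theta.
import numpy as np
from scipy.interpolate import UnivariateSpline

theta_true = 0.8
t = np.linspace(0, 5, 60)
x_noisy = np.exp(-theta_true * t) + 0.02 * np.random.default_rng(0).standard_normal(t.size)

x_hat = UnivariateSpline(t, x_noisy, s=0.01)(t)      # step 1: smoothed states

# Step 2: trapezoidal rule  x_{i+1} - x_i = -theta * (h/2) * (x_i + x_{i+1})
h = t[1] - t[0]
lhs = x_hat[1:] - x_hat[:-1]
rhs = -(h / 2) * (x_hat[1:] + x_hat[:-1])
theta_est = np.linalg.lstsq(rhs[:, None], lhs, rcond=None)[0][0]
print("estimated theta:", theta_est)
```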
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment that exist in conventional approaches. A curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with a conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators. PMID:25342000
Nascimento, Paloma Andrade Martins; Barsanelli, Paulo Lopes; Rebellato, Ana Paula; Pallone, Juliana Azevedo Lima; Colnago, Luiz Alberto; Pereira, Fabíola Manhas Verbi
2017-03-01
This study shows the use of time-domain (TD) NMR transverse relaxation (T2) data and chemometrics for the nondestructive determination of fat content in powdered food samples such as commercial dried milk products. Most proposed NMR spectroscopy methods for measuring fat content correlate free induction decay or echo intensities with the sample's mass. The need for the sample's mass limits the analytical frequency of the NMR determination, because weighing the samples is an additional step in the procedure. Therefore, the method proposed here is based on a multivariate model of the T2 decay, measured with the Carr-Purcell-Meiboom-Gill pulse sequence, and reference values of fat content. The TD-NMR method shows a high correlation (r = 0.95) with the lipid content determined by the standard extraction method of Bligh and Dyer. For comparison, fat content determination was also performed using a multivariate model with near-IR (NIR) spectroscopy, which is also a nondestructive method. The advantages of the proposed TD-NMR method are that it (1) minimizes toxic residue generation, (2) performs measurements with high analytical frequency (a few seconds per analysis), and (3) does not require sample preparation (such as the pelleting needed for NIR spectroscopy analyses) or weighing the samples.
Segmentized Clear Channel Assessment for IEEE 802.15.4 Networks.
Son, Kyou Jung; Hong, Sung Hyeuck; Moon, Seong-Pil; Chang, Tae Gyu; Cho, Hanjin
2016-06-03
This paper proposes segmentized clear channel assessment (CCA), which increases the performance of IEEE 802.15.4 networks by improving carrier sense multiple access with collision avoidance (CSMA/CA). Improving CSMA/CA is important because the low-power consumption feature and throughput performance of IEEE 802.15.4 are greatly affected by CSMA/CA behavior. To improve the performance of CSMA/CA, this paper focuses on increasing the chance to transmit a packet by assessing the channel status precisely. The previous CCA method employed by CSMA/CA assesses the channel by measuring its energy level. However, this method shows limited channel assessment behavior, which stems from a simple threshold-dependent channel-busy evaluation. The proposed method solves this limited channel decision problem by dividing the CCA into two groups, which compare their energy levels to obtain a precise channel status. To evaluate the performance of the segmentized CCA method, a Markov chain model has been developed; the analytic results are validated by comparing them with simulation results. Additionally, the simulation results show that the proposed method improves throughput by up to 8.76% and decreases the average number of CCAs per packet transmission by up to 3.9% compared with the IEEE 802.15.4 CCA method.
An improved conjugate gradient scheme to the solution of least squares SVM.
Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya
2005-03-01
The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparison with other existing algorithms.
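For orientation, a small numpy sketch of the baseline LS-SVM formulation as one linear system (the standard dual system with an RBF kernel); the letter's reduced system and conjugate-gradient scheme are not reproduced, and the kernel width and regularization value are illustrative.

```python
# Sketch: LS-SVM regression as the solution of one linear system.
# This shows the baseline (full) system; the letter's contribution is a
# reduced system solved more efficiently, which is not reproduced here.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                      # first row/column enforce sum(alpha) = 0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # kernel matrix plus ridge term
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]              # alpha, bias b

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b

X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * np.random.randn(50)
alpha, b = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, b, np.array([[0.25]])))  # close to sin(pi/2) = 1
```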
An information dimension of weighted complex networks
NASA Astrophysics Data System (ADS)
Wen, Tao; Jiang, Wen
2018-07-01
Fractality and self-similarity are important properties of complex networks, and the information dimension is a useful measure for revealing them. In this paper, an information dimension is proposed for weighted complex networks. Based on the box-covering algorithm for weighted complex networks (BCANw), the proposed method can deal with the weighted complex networks that appear frequently in the real world, and it captures the influence of the number of nodes in each box on the information dimension. To show the wide scope of the information dimension, some applications are illustrated, indicating that the proposed method is effective and feasible.
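A brief sketch of how an information dimension can be estimated once a box covering is available, assuming a hypothetical `boxes_by_size` structure (box size mapped to node counts per box) standing in for BCANw output; the slope of I(l) against ln(1/l) gives the dimension.

```python
# Sketch: estimating an information dimension from box-covering results.
import numpy as np

def information_dimension(boxes_by_size):
    ls, infos = [], []
    for l, counts in boxes_by_size.items():
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()                       # fraction of nodes in each box
        infos.append(-(p * np.log(p)).sum())  # Shannon information I(l)
        ls.append(l)
    x = np.log(1.0 / np.asarray(ls, dtype=float))
    slope, _ = np.polyfit(x, infos, 1)        # d_I = slope of I(l) vs ln(1/l)
    return slope

# Toy example: the number of boxes halves each time the box size doubles.
example = {1: [1] * 64, 2: [2] * 32, 4: [4] * 16, 8: [8] * 8}
print(information_dimension(example))         # about 1 for this 1D-like covering
```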
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yutian; Yang, Fei; Tolbert, Leon M.
2016-06-14
With the increase in cloud computing and digital information storage, the energy requirement of data centers keeps increasing. A high-voltage point of load (HV POL) with an input-series output-parallel structure is proposed to convert 400 to 1 VDC within a single stage to increase the power conversion efficiency. The symmetrical controlled half-bridge current doubler is selected as the converter topology of the HV POL. A load-dependent soft-switching method is proposed with an auxiliary circuit that includes an inductor, a diode, and MOSFETs, so that the hard-switching issue of typical symmetrical controlled half-bridge converters is resolved. The operation principles of the proposed soft-switching half-bridge current doubler have been analyzed in detail. Then, the necessity of adjusting the timing with the loading in the proposed method is analyzed based on losses, and a controller is designed to realize the load-dependent operation. A lossless RCD current sensing method is used to sense the output inductor current in the proposed load-dependent operation. Finally, the experimental efficiency of a hardware prototype is provided to show that the proposed method can increase the converter's efficiency in both heavy- and light-load conditions.
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
Image steganography based on 2k correction and coherent bit length
NASA Astrophysics Data System (ADS)
Sun, Shuliang; Guo, Yongning
2014-10-01
In this paper, a novel algorithm is proposed. First, the edges of the cover image are detected with the Canny operator and the secret data are embedded in edge pixels. A sorting method is used to randomize the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2k correction is applied to achieve better imperceptibility in the stego image. The experiments show that the proposed method is better than LSB-3 and Jae-Gil Yu's method in PSNR and capacity.
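A small sketch of the 2k-correction step on a single pixel, assuming k secret bits are written into the k least significant bits and that adding or subtracting 2^k is allowed whenever it reduces the distortion; the Canny edge selection and sorting steps are not shown.

```python
# Sketch: embedding k secret bits in a pixel's LSBs with 2^k correction.
# Forcing the k LSBs to the secret value can shift the pixel by up to
# 2^k - 1; adding or subtracting 2^k leaves those bits unchanged but may
# move the pixel back toward its original intensity.
def embed_with_2k_correction(pixel, secret_bits, k):
    value = int(secret_bits, 2)                     # the k bits as an integer
    stego = (pixel & ~((1 << k) - 1)) | value       # replace the k LSBs
    for candidate in (stego - (1 << k), stego + (1 << k)):
        if 0 <= candidate <= 255 and abs(candidate - pixel) < abs(stego - pixel):
            stego = candidate
    return stego

print(embed_with_2k_correction(200, "111", 3))      # 199, closer than the naive 207
```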
Murakoshi, Kazushi; Mizuno, Junya
2004-11-01
In order to follow unexpected environmental changes rapidly, we propose a parameter control method for reinforcement learning that changes each of the learning parameters in an appropriate direction. We determine each appropriate direction on the basis of relationships between behaviors and neuromodulators, considering emergency as a keyword. Computer experiments show that agents using the proposed method can rapidly respond to unexpected environmental changes, independent of both the reinforcement learning algorithm (Q-learning or the actor-critic (AC) architecture) and the learning problem (discontinuous or continuous state-action problems).
Feng, Shuo; Ji, Jim
2014-04-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method based on Fourier-domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient-based method, without reducing the accuracy of the desired excitation patterns. PMID:24834420
NASA Technical Reports Server (NTRS)
Anselmo, Victor J.; Gammell, Paul M.; Clark, Jerry
1987-01-01
Two methods are proposed for grading beef quality based on inspection by electronic equipment: one uses a television camera to generate an image of a cut of beef as the customer sees it; the other uses ultrasonics to inspect the live animal or unsliced carcass. Both methods show promise for automated meat inspection.
Robu, Maria R; Edwards, Philip; Ramalhinho, João; Thompson, Stephen; Davidson, Brian; Hawkes, David; Stoyanov, Danail; Clarkson, Matthew J
2017-07-01
Minimally invasive surgery offers advantages over open surgery due to a shorter recovery time, less pain and trauma for the patient. However, inherent challenges such as lack of tactile feedback and difficulty in controlling bleeding lower the percentage of suitable cases. Augmented reality can show a better visualisation of sub-surface structures and tumour locations by fusing pre-operative CT data with real-time laparoscopic video. Such augmented reality visualisation requires a fast and robust video to CT registration that minimises interruption to the surgical procedure. We propose to use view planning for efficient rigid registration. Given the trocar position, a set of camera positions are sampled and scored based on the corresponding liver surface properties. We implement a simulation framework to validate the proof of concept using a segmented CT model from a human patient. Furthermore, we apply the proposed method on clinical data acquired during a human liver resection. The first experiment motivates the viewpoint scoring strategy and investigates reliable liver regions for accurate registrations in an intuitive visualisation. The second experiment shows wider basins of convergence for higher scoring viewpoints. The third experiment shows that a comparable registration performance can be achieved by at least two merged high scoring views and four low scoring views. Hence, the focus could change from the acquisition of a large liver surface to a small number of distinctive patches, thereby giving a more explicit protocol for surface reconstruction. We discuss the application of the proposed method on clinical data and show initial results. The proposed simulation framework shows promising results to motivate more research into a comprehensive view planning method for efficient registration in laparoscopic liver surgery.
Water stress assessment of cork oak leaves and maritime pine needles based on LIF spectra
NASA Astrophysics Data System (ADS)
Lavrov, A.; Utkin, A. B.; Marques da Silva, J.; Vilar, Rui; Santos, N. M.; Alves, B.
2012-02-01
The aim of the present work was to develop a method for the remote assessment of the impact of fire and drought stress on Mediterranean forest species such as the cork oak (Quercus suber) and maritime pine (Pinus pinaster). The proposed method is based on laser-induced fluorescence (LIF): chlorophyll fluorescence is remotely excited by frequency-doubled Nd:YAG laser radiation pulses and collected and analyzed using a telescope and a gated high-sensitivity spectrometer. The plant health criterion used is based on the I685/I740 ratio, calculated from the fluorescence spectra. The method was benchmarked by comparing its results with those obtained by a conventional continuous-excitation fluorometric method and by gravimetric water-loss measurements. The results obtained with the two fluorometric methods are strongly correlated with each other and with the weight-loss measurements, demonstrating that the proposed method is suitable for fire and drought impact assessment on these two species.
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potentials to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on design by changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or cost smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
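A minimal sketch of the variance-minimizing allocation idea for a difference-in-means statistic (Neyman-type allocation), with the arm standard deviations estimated from the data observed so far; the paper's Bayesian machinery for critical values and power is not reproduced.

```python
# Sketch: variance-minimizing randomization rate for a 2-arm comparison.
# For a difference-in-means statistic, Var = s1^2/n1 + s2^2/n2 is minimized
# (for fixed n1 + n2) when n1/(n1 + n2) = s1/(s1 + s2); the arm standard
# deviations here are plugged in from the observations accrued so far.
import numpy as np

def randomization_rate(arm1_obs, arm2_obs):
    s1, s2 = np.std(arm1_obs, ddof=1), np.std(arm2_obs, ddof=1)
    return s1 / (s1 + s2)      # probability of assigning the next patient to arm 1

rng = np.random.default_rng(1)
arm1 = rng.normal(0.0, 2.0, 30)    # the noisier arm receives more patients
arm2 = rng.normal(0.5, 1.0, 30)
print(f"P(assign to arm 1) = {randomization_rate(arm1, arm2):.2f}")
```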
Simple and practical approach for computing the ray Hessian matrix in geometrical optics.
Lin, Psang Dain
2018-02-01
A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.
Tan, Weng Chun; Mat Isa, Nor Ashidi
2016-01-01
In human sperm motility analysis, sperm segmentation plays an important role to determine the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm.
A temperature compensation methodology for piezoelectric based sensor devices
NASA Astrophysics Data System (ADS)
Wang, Dong F.; Lou, Xueqiao; Bao, Aijian; Yang, Xu; Zhao, Ji
2017-08-01
A temperature compensation methodology that combines a negative temperature coefficient thermistor with the temperature characteristics of a piezoelectric material is proposed to improve the measurement accuracy of piezoelectric sensing based devices. The piezoelectric disk is characterized by using a disk-shaped structure and is also used to verify the effectiveness of the proposed compensation method. The measured output voltage shows a nearly linear relationship with respect to the applied pressure by introducing the proposed temperature compensation method in a temperature range of 25-65 °C. As a result, the maximum measurement accuracy is observed to be improved by 40%, and the higher the temperature, the more effective the method. The effective temperature range of the proposed method is theoretically analyzed by introducing the constant coefficient of the thermistor (B), the resistance of initial temperature (R0), and the paralleled resistance (Rx). The proposed methodology can not only eliminate the influence of piezoelectric temperature dependent characteristics on the sensing accuracy but also decrease the power consumption of piezoelectric sensing based devices by the simplified sensing structure.
CW-SSIM kernel based random forest for image classification
NASA Astrophysics Data System (ADS)
Fan, Guangzhe; Wang, Zhou; Wang, Jiheng
2010-07-01
The complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compared our approach with the direct random forest method without a kernel and with the popular kernel-learning method, the support vector machine. Our test results based on both simulated and real-world data suggest that the proposed approach is superior to traditional methods without requiring a feature selection procedure.
NASA Astrophysics Data System (ADS)
Niu, Xiaofeng; Ye, Hongwei; Xia, Ting; Asma, Evren; Winkler, Mark; Gagnon, Daniel; Wang, Wenli
2015-07-01
Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method by using the delayed-window coincidence events from the scanning phantom or patient. The proposed method could dramatically reduce the ‘ring’ artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which could generate crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method could lead to better uniformity in reconstructed images by removing ring artifacts, and more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method also has the potential benefit to simplify the normalization calibration procedure, since the calibration can be performed using the on-the-fly acquired delayed-window dataset.
Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang
2017-05-01
Brain functional network analysis has shown great potential in understanding brain functions and also in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its l1-norm penalty simply penalizes each edge of a brain network equally, without considering the original connectivity strength which is one of the most important inherent linkwise characters. Besides, based on the similarity of the linkwise connectivity, brain network shows prominent group structure (i.e., a set of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a "connectivity strength-weighted sparse group constraint." In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on the resting-state functional MRI, from 50 MCI patients and 49 healthy controls, show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows more biologically meaningful brain functional connectivities obtained by our proposed method. Hum Brain Mapp 38:2370-2383, 2017. © 2017 Wiley Periodicals, Inc.
Sliding-mode control combined with improved adaptive feedforward for wafer scanner
NASA Astrophysics Data System (ADS)
Li, Xiaojie; Wang, Yiguang
2018-03-01
In this paper, a sliding-mode control method combined with improved adaptive feedforward is proposed for a wafer scanner to improve the tracking performance of the closed-loop system. In particular, in addition to the inverse model, the nonlinear force ripple effect, which may degrade the tracking accuracy of the permanent magnet linear motor (PMLM), is considered in the proposed method. The dominant position periodicity of the force ripple is determined by Fast Fourier Transform (FFT) analysis of experimental data, and the improved feedforward control is achieved by online recursive least-squares (RLS) estimation of the inverse model and the force ripple. The improved adaptive feedforward is given in the general form of an nth-order model with the force ripple effect. The proposed method is motivated by the motion controller design for the long-stroke PMLM and short-stroke voice coil motor of a wafer scanner. The stability of the closed-loop control system and the convergence of the motion tracking are guaranteed theoretically by the proposed sliding-mode feedback and adaptive feedforward methods. Comparative experiments on a precision linear motion platform verify the correctness and effectiveness of the proposed method. The experimental results show that, compared with the traditional method, the proposed one achieves better rapidity and robustness, especially for high-speed motion trajectories, and improves both tracking accuracy and settling time.
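A sketch of the recursive least-squares update that such an adaptive feedforward could use, with an assumed regressor of acceleration, velocity and one sinusoidal ripple pair at an assumed period P_ripple; the gains, model order and data are illustrative rather than the paper's.

```python
# Sketch: recursive least-squares (RLS) estimation of an inverse model plus
# a position-periodic force-ripple term, as used for adaptive feedforward.
import numpy as np

class RLS:
    def __init__(self, n, lam=0.995):
        self.theta = np.zeros(n)      # parameter estimate
        self.P = np.eye(n) * 1e4      # large initial covariance
        self.lam = lam                # forgetting factor

    def update(self, phi, y):
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta += k * (y - phi @ self.theta)             # correct estimate
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

rls = RLS(4)
P_ripple = 0.032   # assumed ripple period (m), illustrative only
true = np.array([2.0, 0.5, 0.3, -0.1])   # mass, damping, ripple amplitudes
rng = np.random.default_rng(0)
for _ in range(2000):
    x, v, a = rng.uniform(0, 0.5), rng.normal(), rng.normal()
    phi = np.array([a, v, np.sin(2 * np.pi * x / P_ripple),
                    np.cos(2 * np.pi * x / P_ripple)])
    u = true @ phi + 0.01 * rng.normal()   # measured force/command with noise
    rls.update(phi, u)
print(np.round(rls.theta, 3))              # approaches the true parameters
```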
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules are defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally effective time cost, and that it segments pole-like objects particularly well.
Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui
2015-01-19
Due to the instrumental and imaging optics limitations, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) imagery aims at inferring high quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary, we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and sensing matrix. Thus, a sparsity and incoherence restricted dictionary learning method is proposed to achieve higher efficiency sparse representation. Second, a variational regularization model combining a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms reconstructed results by other well-known methods in terms of both objective measurements and visual evaluation.
Semi-Supervised Multi-View Learning for Gene Network Reconstruction
Ceci, Michelangelo; Pio, Gianvito; Kuzmanovski, Vladimir; Džeroski, Sašo
2015-01-01
The task of gene regulatory network reconstruction from high-throughput data is receiving increasing attention in recent years. As a consequence, many inference methods for solving this task have been proposed in the literature. It has been recently observed, however, that no single inference method performs optimally across all datasets. It has also been shown that the integration of predictions from multiple inference methods is more robust and shows high performance across diverse datasets. Inspired by this research, in this paper, we propose a machine learning solution which learns to combine predictions from multiple inference methods. While this approach adds additional complexity to the inference process, we expect it would also carry substantial benefits. These would come from the automatic adaptation to patterns on the outputs of individual inference methods, so that it is possible to identify regulatory interactions more reliably when these patterns occur. This article demonstrates the benefits (in terms of accuracy of the reconstructed networks) of the proposed method, which exploits an iterative, semi-supervised ensemble-based algorithm. The algorithm learns to combine the interactions predicted by many different inference methods in the multi-view learning setting. The empirical evaluation of the proposed algorithm on a prokaryotic model organism (E. coli) and on a eukaryotic model organism (S. cerevisiae) clearly shows improved performance over the state of the art methods. The results indicate that gene regulatory network reconstruction for the real datasets is more difficult for S. cerevisiae than for E. coli. The software, all the datasets used in the experiments and all the results are available for download at the following link: http://figshare.com/articles/Semi_supervised_Multi_View_Learning_for_Gene_Network_Reconstruction/1604827. PMID:26641091
Improvement of spatial resolution in a Timepix based CdTe photon counting detector using ToT method
NASA Astrophysics Data System (ADS)
Park, Kyeongjin; Lee, Daehee; Lim, Kyung Taek; Kim, Giyoon; Chang, Hojong; Yi, Yun; Cho, Gyuseong
2018-05-01
Photon counting detectors (PCDs) have been recognized as potential candidates in X-ray radiography and computed tomography due to their many advantages over conventional energy-integrating detectors. In particular, a PCD-based X-ray system shows an improved contrast-to-noise ratio, reduced radiation exposure dose, and more importantly, exhibits a capability for material decomposition with energy binning. For some applications, a very high resolution is required, which translates into smaller pixel size. Unfortunately, small pixels may suffer from energy spectral distortions (distortion in energy resolution) due to charge sharing effects (CSEs). In this work, we propose a method for correcting CSEs by measuring the point of interaction of an incident X-ray photon with the time-over-threshold (ToT) method. Moreover, we also show that it is possible to obtain an X-ray image with a reduced pixel size by using the concept of virtual pixels at a given pixel size. To verify the proposed method, modulation transfer function (MTF) and signal-to-noise ratio (SNR) measurements were carried out with the Timepix chip combined with the CdTe pixel sensor. The X-ray test condition was set at 80 kVp with 5 μA, and a tungsten edge phantom and a lead line phantom were used for the measurements. Enhanced spatial resolution was achieved by applying the proposed method when compared to that of the conventional photon counting method. From the experimental results, the spatial frequency at an MTF of 0.3 increased from 6.3 lp/mm (conventional counting method) to 8.3 lp/mm (proposed method). On the other hand, the SNR decreased from 33.08 to 26.85 dB due to the four virtual pixels.
Dual-scale Galerkin methods for Darcy flow
NASA Astrophysics Data System (ADS)
Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex
2018-02-01
The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.
Energy-Based Wavelet De-Noising of Hydrologic Time Series
Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu
2014-01-01
De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the deficiencies of existing methods. In this paper, an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operated. The results also indicate the influences of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD, but such a series would show purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533
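A rough sketch of the energy-comparison idea, assuming PyWavelets is available: detail-level energies of the series are compared with a Monte Carlo envelope built from white noise of the same variance, and levels that fall inside the envelope are dropped before reconstruction. The wavelet, decomposition level and percentile are illustrative choices, not the paper's settings.

```python
# Sketch: energy-based wavelet de-noising with a Monte Carlo noise background.
import numpy as np
import pywt

def energy_profile(series, wavelet="db4", level=5):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    return coeffs, np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail energies

def denoise(series, wavelet="db4", level=5, n_mc=200, q=95, seed=0):
    rng = np.random.default_rng(seed)
    coeffs, energy = energy_profile(series, wavelet, level)
    # Background: detail energies of white noise with the series' variance.
    mc = np.array([energy_profile(rng.normal(0, series.std(), series.size),
                                  wavelet, level)[1] for _ in range(n_mc)])
    background = np.percentile(mc, q, axis=0)
    for i, (e, b) in enumerate(zip(energy, background), start=1):
        if e <= b:                       # indistinguishable from noise -> drop level
            coeffs[i] = np.zeros_like(coeffs[i])
    return pywt.waverec(coeffs, wavelet)[: series.size]

t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 8 * t) + 0.4 * np.random.randn(t.size)
print(denoise(x).shape)
```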
Shen, Jie; Wan, Mi; Shi, Jiafeng
2018-01-01
The surface roughness of roads is an essential road characteristic. Due to the employed carrying platforms (which are often cars), existing measuring methods can only be used for motorable roads. Until now, there has been no effective method for measuring the surface roughness of un-motorable roads, such as pedestrian and bicycle lanes. This hinders many applications related to pedestrians, cyclists and wheelchair users. In recognizing these research gaps, this paper proposes a method for measuring the surface roughness of pedestrian and bicycle lanes based on Global Positioning System (GPS) and accelerometer sensors on bicycle-mounted smartphones. We focus on the International Roughness Index (IRI), as it is the most widely used index for measuring road surface roughness. Specifically, we analyzed a computing model of road surface roughness, derived its parameters with GPS and accelerometers on bicycle-mounted smartphones, and proposed an algorithm to recognize potholes/humps on roads. As a proof of concept, we implemented the proposed method in a mobile application. Three experiments were designed to evaluate the proposed method. The results of the experiments show that the IRI values measured by the proposed method were strongly and positively correlated with those measured by professional instruments. Meanwhile, the proposed algorithm was able to recognize the potholes/humps that the bicycle passed. The proposed method is useful for measuring the surface roughness of roads that are not accessible for professional instruments, such as pedestrian and cycle lanes. This work enables us to further study the feasibility of crowdsourcing road surface roughness with bicycle-mounted smartphones. PMID:29562731
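A simplified stand-in (not the paper's IRI model) showing how per-segment roughness and pothole/hump flags could be derived from smartphone accelerometer and GPS samples; the segment length, spike threshold and the assumption that the phone senses gravity on its vertical axis are all illustrative.

```python
# Sketch: a per-segment roughness proxy and pothole/hump flagging from
# accelerometer and GPS samples (RMS vertical acceleration, gravity removed,
# normalized by speed, plus a simple spike threshold).
import numpy as np

G = 9.81

def roughness_segments(t, a_z, speed, seg_len=5.0, spike=4.0):
    """t [s], a_z vertical acceleration [m/s^2], speed [m/s] from GPS."""
    dt = np.diff(t, prepend=t[0])
    dist = np.cumsum(speed * dt)            # travelled distance per sample
    dyn = a_z - G                           # remove gravity (phone assumed upright)
    results, events = [], []
    for s0 in np.arange(0, dist[-1], seg_len):
        m = (dist >= s0) & (dist < s0 + seg_len)
        if not m.any():
            continue
        v = max(speed[m].mean(), 0.5)       # avoid dividing by near-zero speed
        results.append((s0, np.sqrt(np.mean(dyn[m] ** 2)) / v))
        if np.abs(dyn[m]).max() > spike:    # likely pothole or hump
            events.append(s0)
    return results, events

t = np.linspace(0, 60, 6000)
speed = np.full_like(t, 4.0)
a_z = G + 0.3 * np.random.randn(t.size)
a_z[3000] += 8.0                            # simulated pothole hit
segments, potholes = roughness_segments(t, a_z, speed)
print(len(segments), potholes)
```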
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Ohtomo, Kuni
2016-03-01
The purpose of this study is to evaluate the feasibility of a novel feature generation approach, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters of DNNs such as the stacked denoising autoencoder (SdA). The proposed method allows the use of SdA-based features without the burden of hyperparameter setting. The proposed method was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract the optimal features for candidate classification, and only required setting the range of the hyperparameters for the SdA. The optimal feature set was selected from a large quantity of SdA-based features produced by multiple SdAs, each of which was trained using a different hyperparameter set. The feature selection was performed through the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation was performed with 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features by just setting the range of some hyperparameters for the SdA. The CADe process using both the previous voxel features and the SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results show that the proposed method is effective for detecting cerebral aneurysms on MRA.
Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong
2013-12-01
Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies for common variants are single marker based, which test one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equations (GEEs) based kernel association test, a variance component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small sample size bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls for type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single marker based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study. © 2013 WILEY PERIODICALS, INC.
Learning linear transformations between counting-based and prediction-based word embeddings
Hayashi, Kohei; Kawarabayashi, Ken-ichi
2017-01-01
Despite the growing interest in prediction-based word embedding learning methods, it remains unclear as to how the vector spaces learnt by the prediction-based methods differ from that of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguities. PMID:28926629
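A minimal sketch of learning such a linear map by ordinary least squares on a shared vocabulary, using synthetic matrices in place of real counting-based and prediction-based embeddings; the paper's actual learning procedure may differ.

```python
# Sketch: learning a linear map W that sends counting-based embeddings to
# prediction-based embeddings for a shared vocabulary, via least squares.
# Rows of C and P are embeddings of the same words (synthetic here).
import numpy as np

rng = np.random.default_rng(0)
n_words, d_count, d_pred = 5000, 300, 200
C = rng.normal(size=(n_words, d_count))                      # counting-based
W_true = rng.normal(size=(d_count, d_pred))
P = C @ W_true + 0.01 * rng.normal(size=(n_words, d_pred))   # prediction-based

W, *_ = np.linalg.lstsq(C, P, rcond=None)                    # min ||C W - P||_F
unseen = rng.normal(size=(1, d_count))                       # a "novel" word's count vector
predicted_embedding = unseen @ W                             # inferred prediction-based vector
print(np.linalg.norm(W - W_true) / np.linalg.norm(W_true))   # small relative error
```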
A novel 3D deformation measurement method under optical microscope for micro-scale bulge-test
NASA Astrophysics Data System (ADS)
Wu, Dan; Xie, Huimin
2017-11-01
A micro-scale 3D deformation measurement method combined with an optical microscope is proposed in this paper. The method is based on gratings and a phase-shifting algorithm. By recording the grating images before and after deformation from two symmetrical angles and calculating the phases of the grating patterns, the 3D deformation field of the specimen can be extracted from those phases. The proposed method was applied to a micro-scale bulge test: a micro-scale thermal/mechanical coupling bulge-test apparatus matched with a super-depth microscope was developed. With gratings fabricated onto the film, the deformed morphology of the bulged film was measured reliably. The experimental results show that the proposed method and the bulge-test apparatus can be used to characterize the thermal/mechanical properties of films at the micro-scale.
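A small sketch of the standard four-step phase-shifting recovery that such grating-based measurements rely on, using a synthetic one-dimensional fringe signal; the carrier frequency and deformation term are illustrative, not the paper's data.

```python
# Sketch: recovering the phase of a grating pattern with the standard
# four-step phase-shifting algorithm (shifts of 0, pi/2, pi, 3*pi/2).
# Deformation then follows from the unwrapped phase difference between
# the states before and after loading.
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    # I_k = A + B*cos(phi + k*pi/2)  ->  phi = atan2(I3 - I1, I0 - I2)
    return np.arctan2(I3 - I1, I0 - I2)

x = np.linspace(0, 1, 512)
phi_true = 2 * np.pi * 20 * x + 0.5 * np.sin(2 * np.pi * x)  # carrier + deformation
frames = [1.0 + 0.8 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)
# The recovered (wrapped) phase matches the true phase modulo 2*pi.
print(np.allclose(np.angle(np.exp(1j * (phi - phi_true))), 0, atol=1e-6))
```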
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-02-26
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices and the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method can decrease the computational complexity, suppress the effect of additive noise and incur little information loss. Simulation results show that, in LGA conditions, the proposed method achieves performance improvements over other methods in both white and colored noise. PMID:28245634
NASA Astrophysics Data System (ADS)
Masuda, Kazuaki; Aiyoshi, Eitaro
We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines both whether each seller sells his/her article and which article(s) each buyer buys, so that the total utility of buyers and sellers is maximized. Using duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
A Study on the Saving Method of Plate Jigs in Hull Block Butt Welding
NASA Astrophysics Data System (ADS)
Ko, Dae-Eun
2017-11-01
A large number of plate jigs are used for alignment of the welding line and control of welding deformations in the hull block assembly stage. Besides the material cost, the large number of man-hours required for handling the plate jigs is one of the obstacles to productivity growth in shipyards. In this study, an analysis method is proposed to simulate the welding deformations of a block butt joint with plate jigs in place. Using the proposed analysis method, an example simulation was performed for an actual panel block joint to investigate how plate jig usage can be reduced. The results show that it is possible to achieve the two objectives of hull block accuracy and reduced plate jig usage at the same time by deploying the plate jigs at the right places. The proposed analysis method can be used to establish guidelines for the proper use of plate jigs in the block assembly stage.
Shang, Songhao
2012-01-01
Crop water requirement is essential for agricultural water management, and it is usually available for crop growing stages. However, crop water requirement values at monthly or weekly scales are more useful for water management. A method is proposed to downscale the crop coefficient and water requirement from growing-stage to substage scales, based on the interpolation of accumulated crop and reference evapotranspiration calculated from their values in the growing stages. The proposed method was compared with two straightforward methods, that is, direct interpolation of crop evapotranspiration and of the crop coefficient by assuming that stage-average values occur in the middle of the stage. These methods were tested with a simulated daily crop evapotranspiration series. Results indicate that the proposed method is more reliable, with the downscaled crop evapotranspiration series very close to the simulated one. PMID:22619572
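A short sketch of the downscaling idea, assuming illustrative stage lengths, stage-mean ET0 rates and stage crop coefficients: accumulated ET0 and ETc are interpolated linearly between stage boundaries, and weekly Kc follows from their increments.

```python
# Sketch: downscaling stage crop coefficients to weekly values via
# interpolation of accumulated reference (ET0) and crop (ETc)
# evapotranspiration, then Kc = dETc_acc / dET0_acc per week.
import numpy as np

stage_days = np.array([25, 30, 40, 25])        # lengths of 4 growing stages (d)
stage_et0  = np.array([3.0, 4.5, 5.5, 4.0])    # mean ET0 per stage (mm/d)
stage_kc   = np.array([0.4, 0.8, 1.15, 0.7])   # stage-average crop coefficients

bounds  = np.concatenate(([0], np.cumsum(stage_days)))               # stage boundaries (d)
acc_et0 = np.concatenate(([0], np.cumsum(stage_et0 * stage_days)))   # accumulated ET0 (mm)
acc_etc = np.concatenate(([0], np.cumsum(stage_kc * stage_et0 * stage_days)))

weeks = np.arange(0, bounds[-1] + 1, 7)                    # weekly substage boundaries
et0_w = np.diff(np.interp(weeks, bounds, acc_et0))         # weekly ET0 (mm)
etc_w = np.diff(np.interp(weeks, bounds, acc_etc))         # weekly ETc (mm)
kc_w = etc_w / et0_w                                       # downscaled weekly Kc
print(np.round(kc_w, 2))
```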
[Haptic tracking control for minimally invasive robotic surgery].
Xu, Zhaohong; Song, Chengli; Wu, Wenwu
2012-06-01
Haptic feedback plays a significant role in minimally invasive robotic surgery (MIRS). A major deficiency of current MIRS, including the commercially available da Vinci surgical system, is the lack of haptic perception for the surgeon. In this paper, a dynamics model of a haptic robot is established based on the Newton-Euler method. Because the exact dynamics solution takes some time to compute, a digital PID algorithm dependent on the robot dynamics is used to ensure real-time bilateral control, which improves the tracking precision and real-time control efficiency. To validate the proposed method, an experimental system was developed in which two Novint Falcon haptic devices act as a master-slave system. Simulations and experiments showed that the proposed method can feed instrument forces back to the operator and that the bilateral control strategy is an effective approach for master-slave MIRS. The proposed method could be applied to tele-robotic systems.
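A minimal sketch of a discrete PID tracking loop of the kind mentioned above, driving a toy first-order slave model toward a master setpoint; the gains, sample time and plant are illustrative, and the robot-dynamics-dependent tuning described in the paper is not reproduced.

```python
# Sketch: a discrete (digital) PID law for a bilateral position-tracking loop.
class DigitalPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Track a master position of 1.0 with a simple first-order slave model.
pid, pos, dt = DigitalPID(kp=8.0, ki=20.0, kd=0.1, dt=0.001), 0.0, 0.001
for _ in range(5000):
    u = pid.step(setpoint=1.0, measurement=pos)
    pos += dt * (-2.0 * pos + u)       # toy slave dynamics: x' = -2x + u
print(round(pos, 3))                   # settles near the commanded position 1.0
```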
NASA Astrophysics Data System (ADS)
Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.
2018-03-01
A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former aims to remove high-order frequency modulation (FM) so that the latter is able to infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised-demodulation method with singular-value decomposition, the parametric time-frequency analysis method with filtering, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.
Karasakal, A; Ulu, S T
2014-05-01
A novel, sensitive and selective spectrofluorimetric method was developed for the determination of tamsulosin in spiked human urine and pharmaceutical preparations. The proposed method is based on the reaction of tamsulosin with 1-dimethylaminonaphthalene-5-sulfonyl chloride in carbonate buffer pH 10.5 to yield a highly fluorescent derivative. The described method was validated and the analytical parameters of linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision, recovery and robustness were evaluated. The proposed method showed a linear dependence of the fluorescence intensity on drug concentration over the range 1.22 × 10⁻⁷ to 7.35 × 10⁻⁶ M. LOD and LOQ were calculated as 1.07 × 10⁻⁷ and 3.23 × 10⁻⁷ M, respectively. The proposed method was successfully applied for the determination of tamsulosin in pharmaceutical preparations and the obtained results were in good agreement with those obtained using the reference method. Copyright © 2013 John Wiley & Sons, Ltd.
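A brief sketch of the calibration workflow implied above, assuming synthetic intensity data and the common LOD = 3.3·s/slope and LOQ = 10·s/slope estimates based on the residual standard deviation of the fit; the paper's exact validation protocol may differ.

```python
# Sketch: a linear fluorescence calibration with LOD/LOQ estimated from the
# residual standard deviation of the regression. Concentrations and
# intensities below are synthetic and only illustrate the workflow.
import numpy as np

conc = np.array([1.2e-7, 5e-7, 1e-6, 2e-6, 4e-6, 7.3e-6])   # mol/L
intensity = 8.5e7 * conc + 2.0 + np.array([0.3, -0.2, 0.4, -0.5, 0.2, -0.1])

slope, intercept = np.polyfit(conc, intensity, 1)
residuals = intensity - (slope * conc + intercept)
s = residuals.std(ddof=2)                 # residual SD (2 fitted parameters)
r = np.corrcoef(conc, intensity)[0, 1]

lod = 3.3 * s / slope
loq = 10 * s / slope
print(f"r = {r:.4f}, LOD = {lod:.2e} M, LOQ = {loq:.2e} M")
```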
NASA Astrophysics Data System (ADS)
Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin
2018-01-01
We present a numerical method to solve the phase dispersion curves of general anisotropic plates. The approach involves an exact solution to the problem in the form of Legendre polynomial expansions of multiple integrals, which we substitute into the state-vector formalism. To improve the efficiency of the proposed method, the analytical methodology is demonstrated in detail, and the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates are analyzed. The basic feature of the proposed method is the expansion of the field quantities in Legendre polynomials. The Legendre polynomial method avoids solving the transcendental dispersion equation, which can only be solved numerically. The state-vector formalism combined with the Legendre polynomial expansion distinguishes adjacent dispersion modes clearly, even when the modes are very close. We then illustrate the theoretical dispersion curves obtained by this method for isotropic and anisotropic plates. Finally, we compare the proposed method with the global matrix method (GMM), which shows excellent agreement.
Shrestha, Rojeet; Miura, Yusuke; Hirano, Ken-Ichi; Chen, Zhen; Okabe, Hiroaki; Chiba, Hitoshi; Hui, Shu-Ping
2018-01-01
Fatty acid (FA) profiling of milk has important applications in human health and nutrition. Conventional methods for the saponification and derivatization of FA are time-consuming and laborious. We aimed to develop a simple, rapid, and economical method for the determination of FA in milk. We applied microwave-assisted saponification (MAS) of milk fats and microwave-assisted derivatization (MAD) of FA to its hydrazides, integrated with HPLC-based analysis. The optimal conditions for MAS and MAD were determined. Microwave irradiation significantly reduced the sample preparation time from 80 min in the conventional method to less than 3 min. We used three internal standards for the measurement of short-, medium- and long-chain FA. The proposed method showed satisfactory analytical sensitivity, recovery and reproducibility, and there was a significant correlation in the milk FA concentrations between the proposed and conventional methods. Being quick, economical, and convenient, the proposed method for milk FA measurement can be a substitute for the conventional method.
A novel method for unsteady flow field segmentation based on stochastic similarity of direction
NASA Astrophysics Data System (ADS)
Omata, Noriyasu; Shirayama, Susumu
2018-04-01
Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
Spatial resolution properties of motion-compensated tomographic image reconstruction methods.
Chun, Se Young; Fessler, Jeffrey A
2012-07-01
Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.
Line segment confidence region-based string matching method for map conflation
NASA Astrophysics Data System (ADS)
Huh, Yong; Yang, Sungchul; Ga, Chillo; Yu, Kiyun; Shi, Wenzhong
2013-04-01
In this paper, a method to detect corresponding point pairs between polygon object pairs is proposed, using a string matching method based on a confidence region model of a line segment. The optimal point edit sequence to convert the contour of a target object into that of a reference object was found by the string matching method, which minimizes the total error cost, and the corresponding point pairs were derived from the edit sequence. Because a significant portion of the apparent positional discrepancies between corresponding objects is caused by spatial uncertainty, confidence region models of line segments are used in the matching process, and the proposed method therefore obtained a high F-measure for finding matching pairs. We applied this method to built-up-area polygon objects in a cadastral map and a topographical map. Despite their different mapping and representation rules and spatial uncertainties, the proposed method with a confidence level of 0.95 produced a matching result with an F-measure of 0.894.
Finger Vein Recognition Based on a Personalized Best Bit Map
Yang, Gongping; Xi, Xiaoming; Yin, Yilong
2012-01-01
Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern based method and then inclined to use the best bits only for matching. We first present the concept of PBBM and the generating algorithm. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PBBM achieves not only better performance, but also high robustness and reliability. In addition, PBBM can be used as a general framework for binary pattern based recognition. PMID:22438735
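A minimal sketch of the "best bit" idea described above, under the assumption that binary template codes have already been extracted (e.g., by a local binary pattern stage): bits that agree across all of a user's enrollment templates form the personalized mask, and matching uses a Hamming distance restricted to those bits. The function names and array shapes are illustrative, not the authors' implementation.

```python
# Personalized best bit map sketch: keep only bit positions that are stable
# across a user's enrollment templates, and match over those bits.
import numpy as np

def build_pbbm(enroll_codes):
    """enroll_codes: (n_samples, n_bits) binary array for one user."""
    codes = np.asarray(enroll_codes, dtype=np.uint8)
    stable = np.all(codes == codes[0], axis=0)     # bits identical in every sample
    template = codes[0]                            # value of the stable bits
    return template, stable

def pbbm_distance(probe_code, template, stable_mask):
    """Normalized Hamming distance computed over the personalized best bits."""
    diff = (np.asarray(probe_code, dtype=np.uint8) != template) & stable_mask
    return diff.sum() / max(stable_mask.sum(), 1)

# enroll = np.random.randint(0, 2, size=(4, 1024))
# tmpl, mask = build_pbbm(enroll)
# probe = np.random.randint(0, 2, size=1024)
# score = pbbm_distance(probe, tmpl, mask)
```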
Semantic Information Extraction of Lanes Based on Onboard Camera Videos
NASA Astrophysics Data System (ADS)
Tang, L.; Deng, T.; Ren, C.
2018-04-01
In the field of autonomous driving, the semantic information of lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects lane edges from the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing point principle to calculate the geometric position of the lanes and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
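The following sketch illustrates the early stages of such a pipeline with OpenCV; the paper's gradient-direction edge detector and decision-tree semantic classifier are replaced by generic stand-ins (Canny and a least-squares vanishing-point estimate), so this is only an approximation of the described method.

```python
# Lane line detection (edge detection + probabilistic Hough) and a simple
# least-squares vanishing point estimate from the detected line segments.
import cv2
import numpy as np

def detect_lane_lines(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)

def vanishing_point(lines):
    """Least-squares intersection of the detected lines."""
    A, b = [], []
    for x1, y1, x2, y2 in lines:
        dx, dy = x2 - x1, y2 - y1
        norm = np.hypot(dx, dy)
        if norm < 1e-6:
            continue
        n = np.array([-dy, dx]) / norm          # unit normal of the line
        A.append(n)
        b.append(n @ np.array([x1, y1]))        # n . p = n . p1 for points p on the line
    if len(A) < 2:
        return None
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol                                  # (x, y) of the estimated vanishing point
```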
Chiba, Shuntaro; Ikeda, Kazuyoshi; Ishida, Takashi; Gromiha, M Michael; Taguchi, Y-H; Iwadate, Mitsuo; Umeyama, Hideaki; Hsin, Kun-Yi; Kitano, Hiroaki; Yamamoto, Kazuki; Sugaya, Nobuyoshi; Kato, Koya; Okuno, Tatsuya; Chikenji, George; Mochizuki, Masahiro; Yasuo, Nobuaki; Yoshino, Ryunosuke; Yanagisawa, Keisuke; Ban, Tomohiro; Teramoto, Reiji; Ramakrishnan, Chandrasekaran; Thangakani, A Mary; Velmurugan, D; Prathipati, Philip; Ito, Junichi; Tsuchiya, Yuko; Mizuguchi, Kenji; Honma, Teruki; Hirokawa, Takatsugu; Akiyama, Yutaka; Sekijima, Masakazu
2015-11-26
A search of a broader range of chemical space is important for drug discovery. Different methods of computer-aided drug discovery (CADD) are known to propose compounds in different chemical spaces as hit molecules for the same target protein. This study aimed at using multiple CADD methods through open innovation to achieve a level of hit molecule diversity that is not achievable with any particular single method. We held a compound proposal contest, in which multiple research groups participated and predicted inhibitors of tyrosine-protein kinase Yes. This showed whether collective knowledge based on individual approaches helped to obtain hit compounds from a broad range of chemical space and whether the contest-based approach was effective.
Gas Classification Using Deep Convolutional Neural Networks.
Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin
2018-01-08
In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.
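Since only the outline of the architecture is given (six convolutional blocks of six layers each, a pooling layer, and a fully-connected layer), the following PyTorch sketch is speculative: the layer composition inside each block, the channel widths, and the 1-D sensor-array input format are assumptions, not the published network.

```python
# A speculative GasNet-like model: 6 blocks x 6 layers = 36 layers, plus a
# pooling layer and a fully-connected layer (38 layers in total).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two (Conv1d + BatchNorm + ReLU) triples -> six layers per block (assumed split).
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
    )

class GasNetSketch(nn.Module):
    def __init__(self, n_sensors=16, n_classes=10):
        super().__init__()
        widths = [n_sensors, 32, 64, 64, 128, 128, 256]
        self.blocks = nn.Sequential(
            *[conv_block(widths[i], widths[i + 1]) for i in range(6)]
        )
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(widths[-1], n_classes)

    def forward(self, x):                 # x: (batch, n_sensors, time_steps)
        h = self.pool(self.blocks(x)).squeeze(-1)
        return self.fc(h)

# logits = GasNetSketch()(torch.randn(8, 16, 200))
```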
Gas Classification Using Deep Convolutional Neural Networks
Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin
2018-01-01
In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods. PMID:29316723
Finger vein recognition based on a personalized best bit map.
Yang, Gongping; Xi, Xiaoming; Yin, Yilong
2012-01-01
Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern based method and then inclined to use the best bits only for matching. We first present the concept of PBBM and the generating algorithm. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PBBM achieves not only better performance, but also high robustness and reliability. In addition, PBBM can be used as a general framework for binary pattern based recognition.
Pixel-based absolute surface metrology by three flat test with shifted and rotated maps
NASA Astrophysics Data System (ADS)
Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang
2018-03-01
The traditional three-flat test provides the absolute profile only along one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation with the three-flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with the multi-rotation method proposed before, it needs only a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It allows pixel-level spatial resolution to be achieved without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests are implemented and show the high-accuracy recovery capability of the proposed method.
Chiba, Shuntaro; Ikeda, Kazuyoshi; Ishida, Takashi; Gromiha, M. Michael; Taguchi, Y-h.; Iwadate, Mitsuo; Umeyama, Hideaki; Hsin, Kun-Yi; Kitano, Hiroaki; Yamamoto, Kazuki; Sugaya, Nobuyoshi; Kato, Koya; Okuno, Tatsuya; Chikenji, George; Mochizuki, Masahiro; Yasuo, Nobuaki; Yoshino, Ryunosuke; Yanagisawa, Keisuke; Ban, Tomohiro; Teramoto, Reiji; Ramakrishnan, Chandrasekaran; Thangakani, A. Mary; Velmurugan, D.; Prathipati, Philip; Ito, Junichi; Tsuchiya, Yuko; Mizuguchi, Kenji; Honma, Teruki; Hirokawa, Takatsugu; Akiyama, Yutaka; Sekijima, Masakazu
2015-01-01
A search of a broader range of chemical space is important for drug discovery. Different methods of computer-aided drug discovery (CADD) are known to propose compounds in different chemical spaces as hit molecules for the same target protein. This study aimed at using multiple CADD methods through open innovation to achieve a level of hit molecule diversity that is not achievable with any particular single method. We held a compound proposal contest, in which multiple research groups participated and predicted inhibitors of tyrosine-protein kinase Yes. This showed whether collective knowledge based on individual approaches helped to obtain hit compounds from a broad range of chemical space and whether the contest-based approach was effective. PMID:26607293
Research on camera on orbit radial calibration based on black body and infrared calibration stars
NASA Astrophysics Data System (ADS)
Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng
2018-05-01
Affected by the launching process and the space environment, the response capability of a space camera is inevitably attenuated, so a spaceborne radiant calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurement. Because stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and verified by an on-orbit test. The experimental results show that the irradiance extrapolation error is about 3% and the calibration accuracy is about 10%, which satisfies the requirements of on-orbit calibration.
Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network
Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan
2011-01-01
Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
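A simplified sketch of how a Delaunay triangulation can expose coverage quality is shown below. The fat/healthy/thin labels are illustrated with percentile thresholds on the area of the triangles incident to each sensor, and the largest-triangle proxy for the largest empty area is likewise an assumption rather than the paper's exact criteria.

```python
# Delaunay-based coverage inspection: classify sensors by the mean area of
# their incident triangles and report the largest triangle as a coverage gap proxy.
import numpy as np
from scipy.spatial import Delaunay

def triangle_area(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

def classify_sensors(positions, lo=None, hi=None):
    positions = np.asarray(positions, dtype=float)
    tri = Delaunay(positions)
    areas = np.array([triangle_area(positions[s]) for s in tri.simplices])
    # Mean area of the triangles incident to each sensor.
    mean_area = np.array([
        areas[np.any(tri.simplices == i, axis=1)].mean()
        for i in range(len(positions))
    ])
    lo = np.percentile(mean_area, 25) if lo is None else lo
    hi = np.percentile(mean_area, 75) if hi is None else hi
    labels = np.where(mean_area < lo, "fat",
              np.where(mean_area > hi, "thin", "healthy"))
    largest_empty = areas.max()           # crude proxy for the largest empty area
    return labels, largest_empty

# labels, gap = classify_sensors(np.random.rand(50, 2))
```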
An efficient unstructured WENO method for supersonic reactive flows
NASA Astrophysics Data System (ADS)
Zhao, Wen-Geng; Zheng, Hong-Wei; Liu, Feng-Jun; Shi, Xiao-Tian; Gao, Jun; Hu, Ning; Lv, Meng; Chen, Si-Cong; Zhao, Hong-Da
2018-03-01
An efficient high-order numerical method for supersonic reactive flows is proposed in this article. The reactive source term and the convection term are solved separately by a splitting scheme. In the reaction step, an adaptive time-step method is presented, which greatly improves efficiency. In the convection step, a third-order accurate weighted essentially non-oscillatory (WENO) method is adopted to reconstruct the solution on unstructured grids. Numerical results show that our new method captures the correct propagation speed of the detonation wave even on coarse grids, while high-order accuracy is achieved in smooth regions. In addition, the proposed adaptive splitting method greatly reduces the computational cost compared with the traditional splitting method.
Spectra library assisted de novo peptide sequencing for HCD and ETD spectra pairs.
Yan, Yan; Zhang, Kaizhong
2016-12-23
De novo peptide sequencing via tandem mass spectrometry (MS/MS) has developed rapidly in recent years. With the use of spectra pairs from the same peptide under different fragmentation modes, the performance of de novo sequencing is greatly improved. Currently, with large amounts of spectra sequenced every day, spectra libraries containing tens of thousands of annotated experimental MS/MS spectra have become available. These libraries provide information on spectral properties and thus have the potential to be used with de novo sequencing to improve its performance. In this study, an improved de novo sequencing method assisted by spectra libraries is proposed. It uses spectra libraries as training datasets and introduces significance scores for the features used in our previous de novo sequencing method for HCD and ETD spectra pairs. Two pairs of HCD and ETD spectral datasets were used to test the performance of the proposed method and our previous method. The results show that the proposed method achieves better sequencing accuracy, with higher-ranked correct sequences and less computational time. In summary, this paper proposes an advanced de novo sequencing method for HCD and ETD spectra pairs that uses information from spectra libraries and significantly improves on our previous method.
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition is met. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
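The commute-time computation that the approach relies on is standard: with L+ the pseudoinverse of the graph Laplacian, the commute time between nodes i and j is vol(G)·(L+_ii + L+_jj − 2·L+_ij). The sketch below computes this matrix and shows one illustrative, assumed rule for querying a cross-partition constraint (the paper's actual query criterion is not restated in the abstract).

```python
# Commute-time distances from the graph Laplacian pseudoinverse, plus an
# illustrative rule for picking an informative cross-partition pair.
import numpy as np

def commute_time_matrix(W):
    """W: symmetric adjacency (similarity) matrix of the data graph."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # combinatorial graph Laplacian
    L_pinv = np.linalg.pinv(L)
    diag = np.diag(L_pinv)
    volume = d.sum()
    return volume * (diag[:, None] + diag[None, :] - 2.0 * L_pinv)

def most_informative_pair(W, labels):
    """Pick the cross-partition pair with the smallest commute-time distance,
    i.e., the most ambiguous pair (an assumed, illustrative query rule)."""
    labels = np.asarray(labels)
    C = commute_time_matrix(W)
    best, pair = np.inf, None
    idx = np.arange(len(labels))
    for i in idx[labels == 0]:
        for j in idx[labels == 1]:
            if C[i, j] < best:
                best, pair = C[i, j], (i, j)
    return pair
```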
Learning locality preserving graph from data.
Zhang, Yan-Ming; Huang, Kaizhu; Hou, Xinwen; Liu, Cheng-Lin
2014-11-01
Machine learning based on graph representation, or manifold learning, has attracted great interest in recent years. As the discrete approximation of the data manifold, the graph plays a crucial role in these kinds of learning approaches. In this paper, we propose a novel learning method for graph construction, which is distinct from previous methods in that it solves an optimization problem with the aim of directly preserving the local information of the original data set. We show that the proposed objective has close connections with the popular Laplacian Eigenmap problem, and is hence well justified. The optimization turns out to be a quadratic programming problem with n(n-1)/2 variables (n is the number of data points). Exploiting the sparsity of the graph, we further propose a more efficient cutting plane algorithm to solve the problem, making the method more scalable in practice. In the context of clustering and semi-supervised learning, we demonstrate the advantages of our proposed method through experiments.
Shareef, Hussain; Mohamed, Azah
2017-01-01
The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method. PMID:29220396
Islam, Md Mainul; Shareef, Hussain; Mohamed, Azah
2017-01-01
The electric vehicle (EV) is considered a premium solution to global warming and various types of pollution. Nonetheless, a key concern is the recharging of EV batteries. Therefore, this study proposes a novel approach that considers the costs of transportation loss, buildup, and substation energy loss and that incorporates harmonic power loss into optimal rapid charging station (RCS) planning. A novel optimization technique, called binary lightning search algorithm (BLSA), is proposed to solve the optimization problem. BLSA is also applied to a conventional RCS planning method. A comprehensive analysis is conducted to assess the performance of the two RCS planning methods by using the IEEE 34-bus test system as the power grid. The comparative studies show that the proposed BLSA is better than other optimization techniques. The daily total cost in RCS planning of the proposed method, including harmonic power loss, decreases by 10% compared with that of the conventional method.
Smartphone-Based Accurate Analysis of Retinal Vasculature towards Point-of-Care Diagnostics
Xu, Xiayu; Ding, Wenxiang; Wang, Xuemin; Cao, Ruofan; Zhang, Maiye; Lv, Peilin; Xu, Feng
2016-01-01
Retinal vasculature analysis is important for the early diagnostics of various eye and systemic diseases, making it a potentially useful biomarker, especially for resource-limited regions and countries. Here we developed a smartphone-based retinal image analysis system for point-of-care diagnostics that is able to load a fundus image, segment retinal vessels, analyze individual vessel width, and store or uplink results. The proposed system was not only evaluated on widely used public databases and compared with the state-of-the-art methods, but also validated on clinical images directly acquired with a smartphone. An Android app is also developed to facilitate on-site application of the proposed methods. Both visual assessment and quantitative assessment showed that the proposed methods achieved comparable results to the state-of-the-art methods that require high-standard workstations. The proposed system holds great potential for the early diagnostics of various diseases, such as diabetic retinopathy, for resource-limited regions and countries. PMID:27698369
EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN
AlSharabi, Khalil; Ibrahim, Sutrisno; Alsuwailem, Abdullah
2017-01-01
Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought, together with a restricted repertoire of activities and interests. In this work, a new computer-aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on the discrete wavelet transform (DWT), entropy (En), and an artificial neural network (ANN). DWT is used to decompose EEG signals into approximation and detail coefficients to obtain EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. The ANN classifies the corresponding EEG signal as normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested on a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia. PMID:28484720
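A compact sketch of the DWT + Shannon entropy + ANN pipeline follows. The wavelet family, decomposition depth, and network size are assumptions, since the abstract does not restate the exact settings used in the study.

```python
# DWT subband decomposition, Shannon entropy features, and an MLP classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def shannon_entropy(coeffs, eps=1e-12):
    p = np.abs(coeffs) ** 2
    p = p / (p.sum() + eps)
    return float(-(p * np.log2(p + eps)).sum())

def eeg_features(signal, wavelet="db4", level=4):
    """One entropy value per DWT subband (approximation + details)."""
    subbands = pywt.wavedec(signal, wavelet, level=level)
    return [shannon_entropy(c) for c in subbands]

def train_autism_classifier(signals, labels):
    X = np.array([eeg_features(s) for s in signals])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)
    return clf.fit(X, labels)
```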
Efficiently detecting outlying behavior in video-game players.
Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo; Kim, Chang Hun
2015-01-01
In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players' characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments.
Research of infrared laser based pavement imaging and crack detection
NASA Astrophysics Data System (ADS)
Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang
2013-08-01
Road crack detection is seriously affected by many factors in actual applications, such as shadows, road signs, oil stains, and high-frequency noise. Due to these factors, current crack detection methods cannot reliably distinguish cracks in complex scenes. To solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is adopted to obtain pavement images, with a high-power laser line projector used to suppress various shadows. Second, a crack extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature are used to extract the basic crack information, and circular projection based on a linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments has been performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.
NASA Astrophysics Data System (ADS)
Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu
2015-03-01
In this paper, we proposed a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically for fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and morphological transform were applied to reduce the noise. Second, the boundary information was used to generate the Freeman chain codes. Third, the concave points were found based on the relationship between the difference of the chain code and the curvature. Finally, cells segmentation and counting were completed based on the characteristics of the number of the concave points, the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopic cell images, and the average true positive rate (TPR) is 98.13% and the average false positive rate (FPR) is 4.47%, respectively. The preliminary results showed the feasibility and efficiency of the proposed method.
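The chain-code portion of the method can be sketched as follows. The concave-point rule shown (a right turn in the chain-code difference on a counter-clockwise contour) and the omission of the area/shape-based counting step are simplifications of what the paper describes; boundary pixels are assumed to be consecutive 8-connected points.

```python
# Freeman chain code of a closed boundary and concave-point detection from
# the chain-code differences (a simple curvature proxy).
import numpy as np

# 8-connected Freeman directions, indexed 0..7 counter-clockwise from east.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain_code(boundary):
    """boundary: ordered list of (x, y) pixels along a closed, 8-connected contour."""
    code = []
    for (x1, y1), (x2, y2) in zip(boundary, boundary[1:] + boundary[:1]):
        step = (int(np.sign(x2 - x1)), int(np.sign(y2 - y1)))
        code.append(DIRECTIONS[step])
    return code

def concave_points(boundary, code):
    """Mark points where the chain-code difference indicates a concave turn."""
    concave = []
    for i in range(len(code)):
        turn = (code[i] - code[i - 1]) % 8
        if turn in (5, 6, 7):          # right turn -> concavity on a CCW contour
            concave.append(boundary[i])
    return concave
```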
Efficiently detecting outlying behavior in video-game players
Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo
2015-01-01
In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players’ characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments. PMID:26713250
Optimization of Gear Ratio in the Tidal Current Generation System based on Generated Energy
NASA Astrophysics Data System (ADS)
Naoi, Kazuhisa; Shiono, Mitsuhiro; Suzuki, Katsuyuki
It is possible to predict the generating power of tidal current generation because of the tidal current's periodicity. Tidal current generation is therefore more advantageous than other renewable energy sources when the generation system is connected to the power system and operated. In this paper, we propose a method to optimize the gear ratio and generator capacity, which are fundamental design items in a tidal current generation system composed of a Darrieus-type water turbine and a squirrel-cage induction generator coupled through a gearbox. The proposed method is applied to a tidal current generation system including the largest turbine that we have developed and studied. This paper presents the optimum gear ratio and generator capacity that maximize the generated energy, and verifies the effectiveness of the proposed method. The paper also proposes a method for selecting the maximum generating current velocity in order to reduce the generator capacity from the viewpoint of economics.
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
Suppressing multiples using an adaptive multichannel filter based on L1-norm
NASA Astrophysics Data System (ADS)
Shi, Ying; Jing, Hongliang; Zhang, Wenwu; Ning, Dezhi
2017-08-01
Adaptive subtraction is an important link for removing surface-related multiples in the wave equation-based method. In this paper, we propose an adaptive multichannel subtraction method based on the L1-norm. We achieve enhanced compensation for the mismatch between the input seismogram and the predicted multiples in terms of the amplitude, phase, frequency band, and travel time. Unlike the conventional L2-norm, the proposed method does not rely on the assumption that the primary and the multiples are orthogonal, and also takes advantage of the fact that the L1-norm is more robust when dealing with outliers. In addition, we propose a frequency band extension via modulation to reconstruct the high frequencies to compensate for the frequency misalignment. We present a parallel computing scheme to accelerate the subtraction algorithm on graphic processing units (GPUs), which significantly reduces the computational cost. The synthetic and field seismic data tests show that the proposed method effectively suppresses the multiples.
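A toy single-channel version of L1-norm adaptive subtraction is sketched below: a short matching filter is estimated by iteratively reweighted least squares as a stand-in for the paper's solver, and the multichannel extension, frequency-band modulation, and GPU parallelization are omitted. Trace lengths, filter length, and iteration counts are placeholder assumptions.

```python
# L1-norm matching filter via IRLS: estimate f minimizing ||d - M f||_1,
# where M is the convolution matrix of the predicted multiples, then subtract.
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(multiple, n_filter):
    col = np.r_[multiple, np.zeros(n_filter - 1)]
    row = np.r_[multiple[0], np.zeros(n_filter - 1)]
    return toeplitz(col, row)                 # (n_trace + n_filter - 1, n_filter)

def l1_adaptive_subtraction(data, predicted, n_filter=11, n_iter=20, eps=1e-6):
    M = conv_matrix(predicted, n_filter)[: len(data)]
    f = np.linalg.lstsq(M, data, rcond=None)[0]        # L2 starting filter
    for _ in range(n_iter):                            # IRLS iterations for the L1 norm
        r = data - M @ f
        w = 1.0 / np.maximum(np.abs(r), eps)           # residual reweighting
        A = M.T @ (w[:, None] * M)
        f = np.linalg.solve(A + 1e-8 * np.eye(n_filter), M.T @ (w * data))
    return data - M @ f                                # estimated primaries
```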
NASA Astrophysics Data System (ADS)
Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing
2017-11-01
Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish a mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent behavior of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that considers the carry-over effect of the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is carried out to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method. Three benchmark wind speed prediction methods are used to compare the performance. The results show that the proposed model has the best performance under different time horizons.
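An illustrative sketch of such a correction model is given below: one support vector regressor per wind-speed bin maps the current NWP value and the previous measured wind speed to the corrected speed, reflecting the previous-time-slot input described above. The bin edges, kernel, and feature choice are assumptions of the sketch, not the paper's settings.

```python
# Bin-wise SVR correction of NWP wind speed using the previous measurement.
import numpy as np
from sklearn.svm import SVR

def train_bin_models(nwp, hmd, bin_edges):
    """nwp, hmd: aligned 1-D arrays of predicted and measured wind speed."""
    X = np.column_stack([nwp[1:], hmd[:-1]])   # features: (NWP_t, measured_{t-1})
    y = hmd[1:]
    models = {}
    bins = np.digitize(nwp[1:], bin_edges)
    for b in np.unique(bins):
        m = bins == b
        if m.sum() > 20:                       # skip sparsely populated bins
            models[b] = SVR(kernel="rbf", C=10.0).fit(X[m], y[m])
    return models

def correct(nwp_t, hmd_prev, models, bin_edges):
    b = int(np.digitize([nwp_t], bin_edges)[0])
    if b not in models:
        return nwp_t                           # fall back to the raw NWP value
    return float(models[b].predict([[nwp_t, hmd_prev]])[0])
```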
NASA Astrophysics Data System (ADS)
Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.
2005-05-01
Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or they may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method, and the results are encouraging.
Multiagent scheduling method with earliness and tardiness objectives in flexible job shops.
Wu, Zuobao; Weng, Michael X
2005-04-01
Flexible job-shop scheduling problems are an important extension of the classical job-shop scheduling problems and present additional complexity, mainly due to the considerable amount of capacity overlap among modern machines. Classical scheduling methods are generally incapable of addressing such capacity overlapping. We propose a multiagent scheduling method with job earliness and tardiness objectives in a flexible job-shop environment. The earliness and tardiness objectives are consistent with the just-in-time production philosophy, which has attracted significant attention in both industry and the academic community. A new job-routing and sequencing mechanism is proposed. In this mechanism, two kinds of jobs are defined to distinguish jobs with one operation left from jobs with more than one operation left, and different criteria are proposed to route these two kinds of jobs. Job sequencing makes it possible to hold a job that would otherwise be completed too early, and two heuristic algorithms for job sequencing are developed to deal with these two kinds of jobs. Computational experiments show that the proposed multiagent scheduling method significantly outperforms the existing scheduling methods in the literature. In addition, the proposed method is quite fast: the simulation time to find a complete schedule with over 2000 jobs on ten machines is less than 1.5 min.
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process in which about 400 data points are used.
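The recursive least squares update with a forgetting factor, which is the building block behind the extended/robust RLS identification described above, can be sketched as follows; the Wiener-model regressor construction, the intermediate-signal estimation, and the robustification step are not reproduced here.

```python
# Generic recursive least squares (RLS) with a forgetting factor.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.98, delta=100.0):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = delta * np.eye(n_params)      # inverse correlation matrix
        self.lam = lam                         # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector, y: measured output at this step."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)     # gain vector
        e = y - phi @ self.theta               # prediction error
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```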
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
Aveiro method in reproducing kernel Hilbert spaces under complete dictionary
NASA Astrophysics Data System (ADS)
Mai, Weixiong; Qian, Tao
2017-12-01
The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the determination of uniqueness sets in the underlying RKHS. In fact, in general spaces, uniqueness sets are not easy to identify, let alone the convergence speed of the Aveiro Method. To avoid these difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact, we do more: the new Aveiro Method is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), involving completion of a given dictionary. The new method is called the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element in the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.
Traffic speed data imputation method based on tensor completion.
Ran, Bin; Tan, Huachun; Feng, Jianshuai; Liu, Ying; Wang, Wuhong
2015-01-01
Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
Traffic Speed Data Imputation Method Based on Tensor Completion
Ran, Bin; Feng, Jianshuai; Liu, Ying; Wang, Wuhong
2015-01-01
Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches. PMID:25866501
Robust PV Degradation Methodology and Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Dirk; Deline, Christopher A; Kurtz, Sarah
The degradation rate plays an important role in predicting and assessing the long-term energy generation of PV systems. Many methods have been proposed for extracting the degradation rate from operational data of PV systems, but most of the published approaches are susceptible to bias due to inverter clipping, module soiling, temporary outages, seasonality, and sensor degradation. In this manuscript, we propose a methodology for determining PV degradation leveraging available modeled clear-sky irradiance data rather than site sensor data, and a robust year-over-year (YOY) rate calculation. We show the method to provide reliable degradation rate estimates even in the case of sensor drift, data shifts, and soiling. Compared with alternate methods, we demonstrate that the proposed method delivers the lowest uncertainty in degradation rate estimates for a fleet of 486 PV systems.
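The year-over-year calculation can be condensed as in the sketch below: normalize measured energy by a modeled clear-sky expectation, compare each point with the point one year later, and take the median of the annualized changes as a robust rate estimate. The column handling and the exact one-year pairing are assumptions of this sketch; a full implementation of this style of analysis is available in the open-source RdTools package.

```python
# Robust year-over-year (YOY) degradation rate from a daily performance index.
import pandas as pd

def yoy_degradation_rate(perf_index: pd.Series) -> float:
    """perf_index: daily performance index (measured / clear-sky modeled),
    indexed by a DatetimeIndex covering at least two years."""
    shifted = perf_index.copy()
    shifted.index = shifted.index - pd.DateOffset(years=1)   # align with the prior year
    pairs = pd.concat([perf_index, shifted], axis=1,
                      keys=["now", "next_year"]).dropna()
    yoy = (pairs["next_year"] / pairs["now"] - 1.0) * 100.0  # %/year for each pair
    return float(yoy.median())                               # robust central estimate
```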
Robust PV Degradation Methodology and Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Dirk C.; Deline, Chris; Kurtz, Sarah R.
The degradation rate plays an important role in predicting and assessing the long-term energy generation of photovoltaics (PV) systems. Many methods have been proposed for extracting the degradation rate from operational data of PV systems, but most of the published approaches are susceptible to bias due to inverter clipping, module soiling, temporary outages, seasonality, and sensor degradation. In this paper, we propose a methodology for determining PV degradation leveraging available modeled clear-sky irradiance data rather than site sensor data, and a robust year-over-year rate calculation. We show the method to provide reliable degradation rate estimates even in the case of sensor drift, data shifts, and soiling. Compared with alternate methods, we demonstrate that the proposed method delivers the lowest uncertainty in degradation rate estimates for a fleet of 486 PV systems.
Barazandeh Tehrani, Maliheh; Namadchian, Melika; Fadaye Vatan, Sedigheh; Souri, Effat
2013-04-10
A derivative spectrophotometric method was proposed for the simultaneous determination of clindamycin and tretinoin in pharmaceutical dosage forms. The measurement was achieved using the first- and second-derivative signals of clindamycin at (1D) 251 nm and (2D) 239 nm and of tretinoin at (1D) 364 nm and (2D) 387 nm. The proposed method showed excellent linearity at both first and second derivative orders in the ranges of 60-1200 and 1.25-25 μg/ml for clindamycin phosphate and tretinoin, respectively. The within-day and between-day precision and accuracy were in acceptable ranges (CV<3.81%, error<3.20%). Good agreement between the found and added concentrations indicates successful application of the proposed method for the simultaneous determination of clindamycin and tretinoin in synthetic mixtures and a pharmaceutical dosage form.
Robust PV Degradation Methodology and Application
Jordan, Dirk C.; Deline, Chris; Kurtz, Sarah R.; ...
2017-12-21
The degradation rate plays an important role in predicting and assessing the long-term energy generation of photovoltaics (PV) systems. Many methods have been proposed for extracting the degradation rate from operational data of PV systems, but most of the published approaches are susceptible to bias due to inverter clipping, module soiling, temporary outages, seasonality, and sensor degradation. In this paper, we propose a methodology for determining PV degradation leveraging available modeled clear-sky irradiance data rather than site sensor data, and a robust year-over-year rate calculation. We show the method to provide reliable degradation rate estimates even in the case of sensor drift, data shifts, and soiling. Compared with alternate methods, we demonstrate that the proposed method delivers the lowest uncertainty in degradation rate estimates for a fleet of 486 PV systems.
Joint sparse representation for robust multimodal biometrics recognition.
Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama
2014-01-01
Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.