Sample records for "proposed method consists"

  1. Diagnosing a Strong-Fault Model by Conflict and Consistency

    PubMed Central

    Zhou, Gan; Feng, Wenquan

    2018-01-01

    Diagnosis methods for weak-fault models, in which only the normal behavior of each component is modeled, have evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. Diagnosing a strong-fault model is difficult due to its non-monotonicity. Current diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed outputs are consistent with the model's predictions, since such consistency indicates probably normal components. This paper addresses the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS then reasons over the CNF to find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches use the reasoning results to propose the best candidates efficiently until the diagnoses are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated on a real-world domain—the heat control unit of a spacecraft—where they significantly outperform best-first and conflict-directed A* search methods. PMID:29596302

  2. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, with linear mixing performed on all other iterations. We demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
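
    The periodic scheme described above is easy to sketch on a generic fixed-point problem. The snippet below is a minimal illustration, not the authors' electronic-structure implementation: the toy map `g`, the two-entry residual history, the mixing parameter `beta`, and the extrapolation `period` are all assumptions chosen for demonstration.

```python
import math

def g(x):
    # Toy contractive fixed-point map standing in for one SCF update step
    # (an assumption for illustration, not an electronic-structure solver).
    return [0.3 * math.cos(x[1]), 0.3 * math.sin(x[0])]

def periodic_pulay(g, x, beta=0.5, period=4, iters=100):
    """Linear mixing on every step; Pulay (DIIS) extrapolation with a
    two-entry history on every `period`-th step."""
    hist = []  # (x, residual) pairs, residual f = g(x) - x
    for k in range(iters):
        f = [gi - xi for gi, xi in zip(g(x), x)]
        hist = (hist + [(x, f)])[-2:]
        if (k + 1) % period == 0 and len(hist) == 2:
            (x0, f0), (x1, f1) = hist
            d = [b - a for a, b in zip(f0, f1)]
            dd = sum(v * v for v in d)
            if dd > 1e-30:  # guard against a degenerate history
                # Coefficient c minimizing ||(1 - c) * f0 + c * f1||^2
                c = -sum(a * b for a, b in zip(f0, d)) / dd
                x = [(1 - c) * (p + beta * q) + c * (r + beta * s)
                     for p, q, r, s in zip(x0, f0, x1, f1)]
                continue
        x = [xi + beta * fi for xi, fi in zip(x, f)]  # plain linear mixing

    return x

x_star = periodic_pulay(g, [1.0, 1.0])
residual = max(abs(gi - xi) for gi, xi in zip(g(x_star), x_star))
```

    With `period=1` this reduces to ordinary DIIS with a short history; with `period` large it degenerates to pure linear mixing, which is the trade-off the paper exploits.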

  3. A detail-preserved and luminance-consistent multi-exposure image fusion algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Guanquan; Zhou, Yue

    2018-04-01

    When irradiance across a scene varies greatly, we can hardly obtain an image of the scene without over- or underexposed areas because of the constraints of cameras. Multi-exposure image fusion (MEF) is an effective method to deal with this problem by fusing multiple exposures of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarser-scale luminance consistency is preserved by contribution adjustment using the luminance information between blocks, while a detail-preserving smoothing filter stitches blocks smoothly without losing details. Experimental results show that the proposed method performs well in preserving luminance consistency and details.

  4. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces improved short-time predictions relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
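
    The two requirements above can be checked concretely. The sketch below is an illustration, not the authors' algebraic construction: it builds the AR(2) coefficients implied by applying an order-two Adams–Bashforth step to the linear test problem x' = λx (the map `ab2_ar2_coeffs` and the sample values of `lam` and `dt` are assumptions), and tests the classical stability condition that both roots of the characteristic polynomial lie inside the unit circle.

```python
import cmath

def ab2_ar2_coeffs(lam, dt):
    """AR(2) coefficients x_n = a1*x_{n-1} + a2*x_{n-2} implied by the
    order-two Adams-Bashforth discretization of x' = lam * x; matching
    such a scheme is the kind of consistency constraint the paper uses."""
    return 1.0 + 1.5 * lam * dt, -0.5 * lam * dt

def ar2_is_stable(a1, a2):
    """Classical AR(2) stability: both roots of z^2 - a1*z - a2 = 0
    lie strictly inside the unit circle."""
    disc = cmath.sqrt(a1 * a1 + 4.0 * a2)
    r1 = (a1 + disc) / 2.0
    r2 = (a1 - disc) / 2.0
    return abs(r1) < 1.0 and abs(r2) < 1.0

# A small time step keeps the consistent AR(2) stable; a large one does not.
small_ok = ar2_is_stable(*ab2_ar2_coeffs(-1.0, 0.1))
large_ok = ar2_is_stable(*ab2_ar2_coeffs(-1.0, 3.0))
```

    The paper's contribution is, in part, characterizing the interval of `dt` for which stability and consistency hold simultaneously; this sketch only evaluates individual candidates.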

  5. Predicting Consistency of Meningioma by Magnetic Resonance Imaging

    PubMed Central

    Smith, Kyle A.; Leever, John D.; Chamoun, Roukoz B.

    2015-01-01

    Objective Meningioma consistency is important because it affects the difficulty of surgery. To predict preoperative consistency, several methods have been proposed; however, they lack objectivity and reproducibility. We propose a new method for prediction based on tumor to cerebellar peduncle T2-weighted imaging intensity (TCTI) ratios. Design The magnetic resonance (MR) images of 20 consecutive patients were evaluated preoperatively. An intraoperative consistency scale was applied to these lesions prospectively by the operating surgeon based on Cavitron Ultrasonic Surgical Aspirator (Valleylab, Boulder, Colorado, United States) intensity. Tumors were classified as A, very soft; B, soft/intermediate; or C, fibrous. Using the T2-weighted MR sequence, the TCTI ratio was calculated. Tumor consistency grades and TCTI ratios were then correlated. Results Of the 20 tumors evaluated prospectively, 7 were classified as very soft, 9 as soft/intermediate, and 4 as fibrous. TCTI ratios for fibrous tumors were all ≤ 1; very soft tumors were ≥ 1.8, except for one outlier of 1.66; and soft/intermediate tumors were > 1 to < 1.8. Conclusion We propose a method using quantifiable region-of-interest TCTIs as a uniform and reproducible way to predict tumor consistency. The intraoperative consistency was graded in an objective and clinically significant way and could lead to more efficient tumor resection. PMID:26225306
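
    The reported cut-offs translate directly into a grading rule. The function below is a sketch using the thresholds stated in the abstract; the function name and return labels are ours, not the authors'.

```python
def classify_meningioma(tcti_ratio):
    """Grade predicted tumor consistency from the tumor-to-cerebellar-peduncle
    T2 intensity (TCTI) ratio, using the cut-offs reported in the abstract:
    <= 1 fibrous, >= 1.8 very soft, in between soft/intermediate."""
    if tcti_ratio <= 1.0:
        return "C (fibrous)"
    if tcti_ratio >= 1.8:
        return "A (very soft)"
    return "B (soft/intermediate)"
```

    Note that the one reported outlier (a very soft tumor at 1.66) would be graded B under this rule, which the abstract itself acknowledges as an exception.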

  6. A Data Cleaning Method for Big Trace Data Using Movement Consistency

    PubMed Central

    Tang, Luliang; Zhang, Xia; Li, Qingquan

    2018-01-01

    With the popularization of GPS technologies, the massive spatiotemporal GPS traces collected by vehicles are becoming a new source of big data for urban geographic information extraction. The growing volume of these datasets, however, creates processing and management difficulties, while their low quality introduces uncertainties when investigating human activities. Based on the error distribution law and position accuracy of GPS data, we propose in this paper a data cleaning method for this kind of spatial big data using movement consistency. First, a trajectory is partitioned into a set of sub-trajectories at movement characteristic points, i.e., GPS points indicating that the motion status of the vehicle has changed from one state to another. Then, GPS data are cleaned based on the similarities of GPS points and the movement consistency model of the sub-trajectory. The movement consistency model is built using the random sample consensus (RANSAC) algorithm, exploiting the high spatial consistency of high-quality GPS data. The proposed method is evaluated in extensive experiments using GPS trajectories generated by a sample of vehicles over a 7-day period in Wuhan, China. The results show the effectiveness and efficiency of the proposed method. PMID:29522456
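
    The random sample consensus step can be illustrated on a toy 2-D sub-trajectory. The sketch below fits a straight line by RANSAC and returns the inlier indices; it is a stand-in for the paper's movement-consistency model (the iteration count, tolerance, and straight-line model are assumptions, and real sub-trajectories need not be linear).

```python
import random

def ransac_line_inliers(points, n_iter=200, tol=0.5, seed=0):
    """RANSAC line fit: repeatedly pick two points, count how many points
    lie within `tol` of the line through them, and keep the largest
    consensus set. Points outside it are treated as low-quality fixes."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        dx, dy = x2 - x1, y2 - y1
        norm = (dx * dx + dy * dy) ** 0.5
        if norm == 0:  # degenerate sample (coincident points)
            continue
        # Perpendicular distance of each point from the candidate line.
        inliers = [i for i, (x, y) in enumerate(points)
                   if abs(dy * (x - x1) - dx * (y - y1)) / norm <= tol]
        if len(inliers) > len(best):
            best = inliers
    return set(best)
```

    In a cleaning pipeline, the complement of the returned set would be the GPS points flagged for removal from the sub-trajectory.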

  7. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation is widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.

  8. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
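
    The traditional baseline that this paper improves on is easy to sketch: fit a mono-exponential decay to the measured dye concentrations, back-extrapolate to the injection time, and divide the dose by the extrapolated concentration. The snippet below is that baseline only (not the authors' optimal method), and the sample times, decay rate, and dose are synthetic assumptions.

```python
import math

def plasma_volume_backextrap(times_min, concs_mg_per_ml, dose_mg):
    """Mono-exponential back-extrapolation: least-squares fit of
    ln C(t) = ln C0 - k*t, then V = dose / C0."""
    n = len(times_min)
    logs = [math.log(c) for c in concs_mg_per_ml]
    tbar = sum(times_min) / n
    ybar = sum(logs) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times_min, logs))
             / sum((t - tbar) ** 2 for t in times_min))
    c0 = math.exp(ybar - slope * tbar)  # extrapolated concentration at t = 0
    return dose_mg / c0

# Synthetic check: with exactly mono-exponential data C(t) = C0*exp(-k*t),
# the method recovers V = dose / C0 (here 25 mg / 0.01 mg/mL = 2500 mL).
times = [2.0, 3.0, 4.0, 5.0]
concs = [0.01 * math.exp(-0.25 * t) for t in times]
v_ml = plasma_volume_backextrap(times, concs, 25.0)
```

    The paper's point is that real early-time kinetics deviate from this mono-exponential assumption, which is why the traditional fit underestimates plasma volume.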

  9. Gas Classification Using Deep Convolutional Neural Networks.

    PubMed

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) and Multilayer Perceptron (MLP) methods.

  10. Gas Classification Using Deep Convolutional Neural Networks

    PubMed Central

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) and Multilayer Perceptron (MLP) methods. PMID:29316723

  11. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from training over this distribution. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof of concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction, we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility, and the ability to perform well with small amounts of training data.
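
    The train-marginalize-integrate pipeline can be sketched with discretized bins. The snippet below is a coarse histogram version of the idea, assuming pre-binned features and doses; the paper works with continuous density estimates and the signed distance to the target boundary as the feature, and all names here are ours.

```python
from collections import Counter, defaultdict

def train_conditional(train_pairs):
    """Estimate p(dose_bin | feature_bin) from (feature, dose) training
    voxels via a joint histogram."""
    joint = defaultdict(Counter)
    for s, d in train_pairs:
        joint[s][d] += 1
    return {s: {d: c / sum(ctr.values()) for d, c in ctr.items()}
            for s, ctr in joint.items()}

def predict_dvh(cond, patient_features, n_dose_bins):
    """Marginalize the trained conditional over the new patient's feature
    distribution, then accumulate from the top to get the cumulative DVH
    (fraction of volume receiving at least each dose level)."""
    feat = Counter(patient_features)
    n = len(patient_features)
    dose_pdf = [0.0] * n_dose_bins
    for s, count in feat.items():
        for d, p in cond.get(s, {}).items():
            dose_pdf[d] += (count / n) * p
    dvh, acc = [], 0.0
    for p in reversed(dose_pdf):
        acc += p
        dvh.append(acc)
    return list(reversed(dvh))
```

    With a deterministic training relation (dose bin equal to feature bin), the predicted DVH reduces to the cumulative distribution of the new patient's features, which is a useful sanity check.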

  12. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  13. A Chaotic Ordered Hierarchies Consistency Analysis Performance Evaluation Model

    NASA Astrophysics Data System (ADS)

    Yeh, Wei-Chang

    2013-02-01

    Hierarchies Consistency Analysis (HCA) was proposed by Guh, together with a case study on a resort, to reinforce a weakness of the Analytic Hierarchy Process (AHP). Although its results helped the decision maker reach more reasonable and rational verdicts, HCA itself is flawed. In this paper, our objective is to point out the problems of HCA and then propose a revised method, called chaotic ordered HCA (COH for short), which avoids these problems. Since COH is based upon Guh's method, the decision maker establishes decisions in a way similar to that of the original method.

  14. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    PubMed

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.

  15. A review method for UML requirements analysis model employing system-side prototyping.

    PubMed

    Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    User interface prototyping is an effective method for users to validate the requirements defined by analysts at an early stage of software development. However, a user interface prototype offers the analysts weak support for verifying the consistency of specifications concerning internal aspects of a system, such as business logic. Such inconsistency causes significant rework costs, because it often makes it impossible for developers to realize the system from the specifications. Functional prototyping lets analysts verify this consistency, but it is costly and requires more detailed specifications. In this paper, we propose a review method by which analysts can efficiently verify the consistency among several different kinds of UML diagrams by employing system-side prototyping without a detailed model. The system-side prototype does not implement any business logic, but visualizes the results of integrating the UML diagrams as Web pages. The usefulness of our proposal was evaluated by applying it to the development of a Library Management System (LMS) for a laboratory, conducted by a group. As a result, our proposal was useful for discovering serious inconsistencies caused by misunderstandings among the members of the group.

  16. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018

  17. Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation

    PubMed Central

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2015-01-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117

  18. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  19. Decision feedback equalizer for holographic data storage.

    PubMed

    Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo

    2018-05-20

    Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
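
    A decision feedback equalizer is simple to illustrate in one dimension. The sketch below is a toy binary DFE, not the authors' modified 2-D equalizer with a reliability factor: the single postcursor tap, the ±1 alphabet, and the test channel are assumptions.

```python
def dfe_binary(received, feedback_taps):
    """Toy 1-D decision feedback equalizer for +/-1 symbols: subtract the
    inter-symbol interference contributed by past decisions, then slice.
    An early wrong decision can propagate, which is the error-propagation
    problem the paper's reliability factor is designed to mitigate."""
    decisions = []
    for r in received:
        # ISI estimate from the most recent decisions (newest first).
        isi = sum(t * d for t, d in zip(feedback_taps, reversed(decisions)))
        decisions.append(1 if r - isi >= 0 else -1)
    return decisions

# Demo with an assumed channel: r[n] = s[n] + 0.4 * s[n-1].
symbols = [1, -1, 1, 1, -1]
received = [s + 0.4 * p for s, p in zip(symbols, [0] + symbols[:-1])]
decisions = dfe_binary(received, [0.4])
```

    In the noiseless demo the DFE exactly cancels the single postcursor and recovers the transmitted symbols; HDS needs a 2-D generalization because the ISI comes from neighboring pixels in both directions.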

  20. An enhanced multi-view vertical line locus matching algorithm of object space ground primitives based on positioning consistency for aerial and space images

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia

    2018-05-01

    The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.

  21. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method, where the lengths between pairs of targets measured from multiple TLS positions are compared to determine the TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face and back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  22. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    PubMed Central

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.; Glenny, Robb W.; Minard, Kevin R.; Carson, James P.; Jiao, Xiangmin; Jacob, Richard E.; Cox, Timothy C.; Postlethwait, Edward M.; Corley, Richard A.

    2017-01-01

    We examine a previously published branch-based approach for modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that take account of error. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys, and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it may likely lead to an incorrect representation of the diameter geometry. The new variance model can be used instead. Measurement error has an important impact on the estimated morphometry models and needs to be addressed in the analysis. PMID:22528468

  23. Adaptive identification of vessel's added moments of inertia with program motion

    NASA Astrophysics Data System (ADS)

    Alyshev, A. S.; Melnikov, V. G.

    2018-05-01

    In this paper, we propose a new experimental method for determining the moments of inertia of a ship model. The paper gives a brief review of existing methods, a description of the proposed method and the experimental stand, the test procedures and calculation formulas, and experimental results. The proposed method is based on the energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reaction wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.

  4. Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.

    PubMed

    Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo

    2017-12-01

    The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgments of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, interval number judgments are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that interval number judgments are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. An exchange method is designed to generate all the permutations. A novel method for determining the interval weight vector is proposed that accounts for the randomness in comparing alternatives. A new algorithm for solving decision-making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.
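
    The abstract leaves the exchange method unspecified here; as a hedged sketch, the classical exchange-based enumeration below (Heap's algorithm, an assumption on our part rather than the paper's actual procedure) generates all permutations of the alternatives by single pairwise swaps:

```python
def permutations_by_exchange(items):
    """Enumerate all permutations of `items` by pairwise exchanges.

    This is Heap's algorithm, shown only as a generic exchange-based
    enumeration; the paper's own exchange method may differ.
    """
    a = list(items)
    n = len(a)
    c = [0] * n
    yield tuple(a)
    i = 0
    while i < n:
        if c[i] < i:
            # The swap partner depends on the parity of i
            if i % 2 == 0:
                a[0], a[i] = a[i], a[0]
            else:
                a[c[i]], a[i] = a[i], a[c[i]]
            yield tuple(a)
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1
```

    Each emitted permutation differs from the previous one by exactly one exchange, which keeps the enumeration cheap when a consistency check must be rerun per permutation.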

  5. Enhancement of low visibility aerial images using histogram truncation and an explicit Retinex representation for balancing contrast and color consistency

    NASA Astrophysics Data System (ADS)

    Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup

    2017-06-01

    This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. Traditional multi-scale Retinex commonly employs three scales, which limits its application scenarios. We extend this to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
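
    As a rough illustration of the pipeline this abstract describes, the sketch below runs a generic N-scale Retinex and then remaps the output by histogram truncation. The scale values, clip fraction, and interface are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250, 400), clip=0.01):
    """Generic N-scale Retinex with histogram-truncation remapping.

    img:    2-D float array (single channel), non-negative
    sigmas: Gaussian surround scales; more than three are allowed
            (illustrative values, not from the paper)
    clip:   fraction of pixels truncated at each histogram tail
    """
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    # Average the single-scale Retinex outputs over all scales
    msr = np.zeros_like(img)
    for s in sigmas:
        surround = gaussian_filter(img, sigma=s)
        msr += np.log(img) - np.log(surround)
    msr /= len(sigmas)
    # Histogram truncation: clip the extreme tails, then remap to [0, 255]
    lo, hi = np.quantile(msr, [clip, 1.0 - clip])
    msr = np.clip(msr, lo, hi)
    return (msr - lo) / (hi - lo + 1e-12) * 255.0
```

    The truncation step discards the few extreme log-ratio values that would otherwise dominate the remapping, which is what lets the output occupy the full display range.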

  6. Analysis of full disc Ca II K spectroheliograms. I. Photometric calibration and centre-to-limb variation compensation

    NASA Astrophysics Data System (ADS)

    Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.

    2018-01-01

    Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus, accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim to develop an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at wavelengths other than Ca II K.

  7. An Empirical Investigation of Entrepreneurship Intensity in Iranian State Universities

    ERIC Educational Resources Information Center

    Mazdeh, Mohammad Mahdavi; Razavi, Seyed-Mostafa; Hesamamiri, Roozbeh; Zahedi, Mohammad-Reza; Elahi, Behin

    2013-01-01

    The purpose of this study is to propose a framework to evaluate the entrepreneurship intensity (EI) of Iranian state universities. In order to determine EI, a hybrid multi-method framework consisting of Delphi, Analytic Network Process (ANP), and VIKOR is proposed. The Delphi method is used to localize and reduce the number of criteria extracted…

  8. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and convergence speed of optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, the proposed algorithm offers good robustness and fast convergence compared to some hybrid genetic algorithms.

  9. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose a method for extracting the informative DOM node from a Web page as preprocessing for Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser; its learning set consists of hundreds of Web pages together with annotations of their informative DOM nodes. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information in the learning set more efficiently than the existing method that uses the same learning set. In our experiments, we evaluate combinations of an informative-DOM-node extraction method (either the proposed method or an existing one) with existing noise-elimination methods: Heur, which removes advertisements and link lists by heuristics, and CE, which removes DOM nodes that also occur in other Web pages of the same Web site. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.

  10. Reflection measurement of waveguide-injected high-power microwave antennas.

    PubMed

    Yuan, Chengwei; Peng, Shengren; Shu, Ting; Zhang, Qiang; Zhao, Xuelong

    2015-12-01

    A method for reflection measurements of high-power microwave (HPM) antennas excited with overmoded waveguides is proposed and studied systematically. In theory, the principle of the method is presented and the data processing formulas are developed. In simulations, a horn antenna excited by a TE11 mode exciter is examined, and its reflection is calculated by CST Microwave Studio and by the method proposed in this article, respectively. In experiments, reflection measurements of two HPM antennas are conducted, and the measured results are consistent with the theoretical expectations.

  11. UNCLES: method for the identification of genes differentially consistently co-expressed in a specific subset of datasets.

    PubMed

    Abu-Jamous, Basel; Fa, Rui; Roberts, David J; Nandi, Asoke K

    2015-06-04

    Collective analysis of the increasingly emerging gene expression datasets is required. The recently proposed binarisation of consensus partition matrices (Bi-CoPaM) method can combine clustering results from multiple datasets to identify the subsets of genes which are consistently co-expressed in all of the provided datasets in a tuneable manner. However, results validation and parameter setting are issues that complicate the design of such methods. Moreover, although it is a common practice to test methods by application to synthetic datasets, the mathematical models used to synthesise such datasets are usually based on approximations which may not always be sufficiently representative of real datasets. Here, we propose an unsupervised method for the unification of clustering results from multiple datasets using external specifications (UNCLES). This method has the ability to identify the subsets of genes consistently co-expressed in a subset of datasets while being poorly co-expressed in another subset of datasets, and to identify the subsets of genes consistently co-expressed in all given datasets. We also propose the M-N scatter plots validation technique and adopt it to set the parameters of UNCLES, such as the number of clusters, automatically. Additionally, we propose an approach for the synthesis of gene expression datasets using real data profiles in a way which combines the ground-truth knowledge of synthetic data and the realistic expression values of real data, and therefore overcomes the problem of faithfulness of synthetic expression data modelling. By application to those datasets, we validate UNCLES while comparing it with other conventional clustering methods, and, of particular relevance, biclustering methods. We further validate UNCLES by application to a set of 14 real genome-wide yeast datasets, as it produces focused clusters that conform well to known biological facts. 
Furthermore, in-silico-based hypotheses regarding the function of a few previously unknown genes in those focused clusters are drawn. The UNCLES method, the M-N scatter plots technique, and the expression data synthesis approach will have wide application for the comprehensive analysis of genomic and other sources of multiple complex biological datasets. Moreover, the derived in-silico-based biological hypotheses represent subjects for future functional studies.

  12. An estimating equation approach to dimension reduction for longitudinal data

    PubMed Central

    Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li

    2016-01-01

    Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-n consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956

  13. Video quality assessment using motion-compensated temporal filtering and manifold feature similarity

    PubMed Central

    Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2017-01-01

    A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video by MCTF and the temporal pooling strategy, and simulates human visual perception by MFL. Experiments on publicly available video quality databases show that, in comparison with several state-of-the-art VQA methods, the proposed method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489

  14. Self Consistent Bathymetric Mapping From Robotic Vehicles in the Deep Ocean

    DTIC Science & Technology

    2005-06-01

    that have been aligned in a consistent manner. Experimental results from the fully automated processing of a multibeam survey over the TAG hydrothermal structure at the Mid-Atlantic ridge are presented to validate the proposed method.

  15. Suppression of fixed pattern noise for infrared image system

    NASA Astrophysics Data System (ADS)

    Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon

    2008-04-01

    In this paper, we propose the suppression of fixed pattern noise (FPN) and the compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image as a temperature-dependent pattern that is addressed by non-uniformity compensation (NUC). Soft defects appear as flickering black and white points caused by the time-varying non-uniformity of the IR detector. This problem is important because it degrades image quality and causes serious problems for object tracking. The signal processing architecture in a cooled staring IRFPA imaging system consists of three tables of reference gain and offset values: low, normal, and high temperature. The proposed method operates two offset tables for each table, covering six temperature ranges in total. The proposed soft-defect compensation consists of three stages: (1) separating an image into sub-images, (2) estimating the motion distribution of objects between sub-images, and (3) analyzing the statistical characteristics of each stationary fixed pixel. Experimental results show that the proposed method suppresses FPN caused by changes in the temperature distribution of the observed image in real time.
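
    The three-stage soft-defect compensation outlined above can be sketched as a temporal-variance test on stationary blocks. The block size, thresholds, and variance criterion below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def detect_soft_defects(frames, motion_thresh=2.0, var_thresh=9.0):
    """Flag flickering (soft-defect) pixels in a stack of IR frames.

    frames: (T, H, W) array. Stages, loosely following the abstract:
      1) split each frame into sub-images (here: 8x8 blocks),
      2) skip blocks whose mean changes over time (moving objects),
      3) in stationary blocks, flag pixels whose temporal variance is
         abnormally large relative to the block's median variance.
    All thresholds are illustrative, not from the paper.
    """
    T, H, W = frames.shape
    defects = np.zeros((H, W), dtype=bool)
    b = 8
    for y in range(0, H - b + 1, b):
        for x in range(0, W - b + 1, b):
            block = frames[:, y:y+b, x:x+b]
            # Stage 2: block considered stationary if its mean barely moves
            if np.ptp(block.mean(axis=(1, 2))) > motion_thresh:
                continue
            # Stage 3: per-pixel temporal variance vs. the block's median
            var = block.var(axis=0)
            defects[y:y+b, x:x+b] = var > var_thresh * np.median(var + 1e-12)
    return defects
```

    Flagged pixels would then be replaced, e.g. by spatial interpolation from their neighbours, before the tracker sees the frame.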

  16. Adaptive noise canceling of electrocardiogram artifacts in single channel electroencephalogram.

    PubMed

    Cho, Sung Pil; Song, Mi Hye; Park, Young Cheol; Choi, Ho Seon; Lee, Kyoung Joung

    2007-01-01

    A new method for estimating and eliminating electrocardiogram (ECG) artifacts from single-channel scalp electroencephalogram (EEG) recordings is proposed. The proposed method consists of emphasizing the QRS complex in the EEG using a least squares acceleration (LSA) filter, generating a pulse synchronized with the R-peak, and estimating and eliminating the ECG artifacts using an adaptive filter. The performance of the proposed method was evaluated using simulated and real EEG recordings. We found that the ECG artifacts were successfully estimated and eliminated, in comparison with conventional multi-channel techniques such as independent component analysis (ICA) and the ensemble average (EA) method. From this we conclude that the proposed method is useful for detecting and eliminating ECG artifacts from single-channel EEG and is simple to use in ambulatory/portable EEG monitoring systems.
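
    The artifact estimation and elimination step can be illustrated with a standard least-mean-squares (LMS) adaptive noise canceller. The tap count, step size, and reference construction here are assumptions for illustration; the paper's LSA filter and R-peak synchronization stages are not reproduced:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Adaptive noise cancellation with an LMS filter (generic sketch).

    primary:   contaminated signal (desired signal + artifact), 1-D array
    reference: artifact-correlated reference (e.g. an R-peak-synchronized
               pulse train), 1-D array of the same length
    Returns the cleaned signal (primary minus the adaptive artifact estimate).
    """
    w = np.zeros(n_taps)
    out = np.zeros_like(primary, dtype=float)
    for n in range(len(primary)):
        # Most recent n_taps reference samples, newest first (zero-padded)
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        y = w @ x                      # artifact estimate
        e = primary[n] - y             # error = cleaned sample
        w += 2 * mu * e * x            # LMS weight update
        out[n] = e
    return out
```

    Because the reference carries only the artifact, the filter converges toward the artifact's transfer path and the error output approaches the artifact-free EEG.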

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L 2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  18. A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    PubMed Central

    Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia

    2015-01-01

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting the most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP), a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target language using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as the singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task. 
Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738

  19. WE-AB-207A-02: John’s Equation Based Consistency Condition and Incomplete Projection Restoration Upon Circular Orbit CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J; Qi, H; Wu, S

    Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and propose a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side consists of projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) performing a Fourier transform on the projections; 3) restoring the incomplete frequency data using the consistency condition equation; 4) performing an inverse Fourier transform; 5) repeating steps 2)-4) until the termination criterion is met. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as evaluation metrics for the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method. 
Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high-frequency components.
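
    The five-step iterative loop above has the same structure as classical Papoulis-Gerchberg restoration: alternate a frequency-domain constraint with reinsertion of the measured data. The sketch below uses a simple band-limit as a stand-in for the John's-equation consistency condition, which is not reproduced here; the function and its parameters are illustrative assumptions:

```python
import numpy as np

def restore_missing(signal, known_mask, band, n_iter=200):
    """Papoulis-Gerchberg-style iterative restoration (generic sketch).

    Alternates between (a) enforcing a frequency-domain constraint,
    here a plain band-limit standing in for the abstract's consistency
    condition, and (b) reinserting the known (measured) samples.

    signal:     1-D array with arbitrary values at unknown positions
    known_mask: boolean array, True where samples were actually measured
    band:       number of low-frequency bins kept at each spectrum end
    """
    est = np.where(known_mask, signal, 0.0)
    for _ in range(n_iter):
        spec = np.fft.fft(est)                 # step 2: Fourier transform
        spec[band:-band] = 0.0                 # step 3: consistency constraint
        est = np.fft.ifft(spec).real           # step 4: inverse transform
        est[known_mask] = signal[known_mask]   # keep measured data, iterate
    return est
```

    The measured samples act as a fixed point, so each pass pulls the unknown samples toward the unique signal satisfying both the data and the constraint.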

  20. Branch-Based Model for the Diameters of the Pulmonary Airways: Accounting for Departures From Self-Consistency and Registration Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neradilek, Moni B.; Polissar, Nayak L.; Einstein, Daniel R.

    2012-04-24

    We examine a previously published branch-based approach to modeling airway diameters that is predicated on the assumption of self-consistency across all levels of the tree. We mathematically formulate this assumption, propose a method to test it and develop a more general model to be used when the assumption is violated. We discuss the effect of measurement error on the estimated models and propose methods that account for it. The methods are illustrated on data from MRI and CT images of silicone casts of two rats, two normal monkeys and one ozone-exposed monkey. Our results showed substantial departures from self-consistency in all five subjects. When departures from self-consistency exist, we do not recommend using the self-consistency model, even as an approximation, as we have shown that it is likely to lead to an incorrect representation of the diameter geometry. Measurement error has an important impact on the estimated morphometry models and needs to be accounted for in the analysis.

  1. Analytical model for effect of temperature variation on PSF consistency in wavefront coding infrared imaging system

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong

    2016-05-01

    The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in the decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. The proposed equivalent cubic phase mask (E-CPM) is a virtual, room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a method for calculating PSF consistency under temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than that achieved by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much smaller than that of the auxiliary lens group. A large form parameter of the CPM introduces large defocus-insensitive aberrations, which improves PSF consistency but degrades the room-temperature MTF.

  2. Competitive region orientation code for palmprint verification and identification

    NASA Astrophysics Data System (ADS)

    Tang, Wenliang

    2015-11-01

    Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation features of the palmprint. However, in real operation, the filter orientations are usually not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method, together with an effective weighted balance scheme that improves the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation extracted by the proposed method precisely and robustly describes the orientation features of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases show that the proposed method achieves promising performance in comparison with conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
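
    A minimal sketch of competitive orientation coding, assuming dark palm lines and simple line-averaging filters in place of the paper's actual filter bank (the orientation count, segment length, and winner-take-all rule are all illustrative assumptions):

```python
import numpy as np

def competitive_orientation_code(img, n_orient=6, length=7):
    """Per-pixel dominant line orientation by a winner-take-all rule.

    For each of n_orient directions, the image is averaged along a short
    line segment; since palm lines are dark, the direction with the
    minimum response wins (the "competitive" rule). Region-level codes
    could then be obtained, e.g., by majority vote inside each block.
    """
    H, W = img.shape
    responses = np.empty((n_orient, H, W))
    half = length // 2
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        acc = np.zeros((H, W))
        for t in range(-half, half + 1):
            dy = int(round(t * np.sin(theta)))
            dx = int(round(t * np.cos(theta)))
            # shift the image by (dy, dx) with edge clamping
            ys = np.clip(np.arange(H) + dy, 0, H - 1)
            xs = np.clip(np.arange(W) + dx, 0, W - 1)
            acc += img[np.ix_(ys, xs)]
        responses[k] = acc / length
    return np.argmin(responses, axis=0)
```

    Matching two palmprints then reduces to comparing their integer orientation codes, e.g. by a normalized Hamming distance.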

  3. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  4. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method is tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected image database and outperforms an existing method.

  5. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behavior of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of the total energy equation to achieve the spurious-pressure-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  6. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawai, Soshi, E-mail: kawai@cfd.mech.tohoku.ac.jp; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behavior of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of the total energy equation to achieve the spurious-pressure-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  7. Self-consistent collective coordinate for reaction path and inertial mass

    NASA Astrophysics Data System (ADS)

    Wen, Kai; Nakatsukasa, Takashi

    2016-11-01

    We propose a numerical method to determine the optimal collective reaction path for a nucleus-nucleus collision, based on the adiabatic self-consistent collective coordinate (ASCC) method. We use an iterative method, combining the imaginary-time evolution and the finite amplitude method, for the solution of the ASCC coupled equations. It is applied to the simplest case, α -α scattering. We determine the collective path, the potential, and the inertial mass. The results are compared with other methods, such as the constrained Hartree-Fock method, Inglis's cranking formula, and the adiabatic time-dependent Hartree-Fock (ATDHF) method.

  8. Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz

This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
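The consistency-of-maximum idea can be sketched as follows: in each spreading-period window of the matched-filter output, record where the peak falls, and declare acquisition once the peak location repeats across consecutive windows. This is a simplified illustration of the approach, not the authors' exact detector; the window length, window count `k`, and tolerance `tol` are illustrative parameters.

```python
import numpy as np

def acquire_by_max_consistency(mf_out, period, k=4, tol=1):
    """Declare acquisition when the argmax location within each
    period-long window of the matched-filter output agrees (within
    tol samples) over k consecutive windows."""
    n_win = len(mf_out) // period
    locs = [int(np.argmax(mf_out[i * period:(i + 1) * period]))
            for i in range(n_win)]
    for i in range(n_win - k + 1):
        window = locs[i:i + k]
        if max(window) - min(window) <= tol:
            return i, window[0]      # detection window and timing estimate
    return None                      # no consistent peak: no packet

# toy example: a strong peak at offset 37 of every 100-sample period
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.1, 800)
sig[37::100] += 5.0
print(acquire_by_max_consistency(sig, 100))  # -> (0, 37)
```

Unlike a fixed threshold, this decision rule needs no estimate of the noise floor: only the *location* of the maximum is used, which is what makes it insensitive to the received SNR.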

  9. An Implementation of Privacy Protection for a Surveillance Camera Using ROI Coding of JPEG2000 with Face Detection

    NASA Astrophysics Data System (ADS)

    Muneyasu, Mitsuji; Odani, Shuhei; Kitaura, Yoshihiro; Namba, Hitoshi

When a surveillance camera is used, privacy protection may need to be considered. This paper proposes a new privacy protection method that automatically degrades the face region in surveillance images. The proposed method consists of ROI coding of JPEG2000 and a face detection method based on template matching. The experimental results show that the face region can be detected and hidden correctly.

  10. A methodological survey identified eight proposed frameworks for the adaptation of health related guidelines.

    PubMed

    Darzi, Andrea; Abou-Jaoude, Elias A; Agarwal, Arnav; Lakis, Chantal; Wiercioch, Wojtek; Santesso, Nancy; Brax, Hneine; El-Jardali, Fadi; Schünemann, Holger J; Akl, Elie A

    2017-06-01

Our objective was to identify and describe published frameworks for adaptation of clinical, public health, and health services guidelines. We included reports describing methods of adaptation of guidelines in sufficient detail to allow their reproduction. We searched the Medline and EMBASE databases. We also searched personal files, as well as manuals and handbooks of organizations and professional societies that proposed methods of adaptation and adoption of guidelines. We followed standard systematic review methodology. Our search captured 12,021 citations, out of which we identified eight proposed methods of guideline adaptation: ADAPTE, Adapted ADAPTE, Alberta Ambassador Program adaptation phase, GRADE-ADOLOPMENT, MAGIC, RAPADAPTE, Royal College of Nursing (RCN), and Systematic Guideline Review (SGR). The ADAPTE framework consists of a 24-step process to adapt guidelines to a local context, taking into consideration the needs, priorities, legislation, policies, and resources. The Alexandria Center for Evidence-Based Clinical Practice Guidelines updated one of ADAPTE's tools, modified three tools, and added three new ones; in addition, they proposed optionally using three other tools. The Alberta Ambassador Program adaptation phase consists of 11 steps and focused on adapting good-quality guidelines for nonspecific low back pain to the local context. GRADE-ADOLOPMENT is an eight-step process based on the GRADE Working Group's Evidence to Decision frameworks, applied in 22 guidelines in the context of a national guideline development program. The MAGIC research program developed a five-step adaptation process, informed by ADAPTE and the GRADE approach, in the context of adapting thrombosis guidelines. The RAPADAPTE framework consists of 12 steps based on ADAPTE and using synthesized evidence databases; it was retrospectively derived from the experience of producing a high-quality guideline for the treatment of breast cancer with limited resources in Costa Rica. The RCN outlines a five-step strategy for adapting guidelines to the local context. The SGR method consists of nine steps and takes into consideration both methodological gaps and context-specific normative issues in source guidelines. Through searching personal files, we also identified two abandoned methods. In summary, we identified and described eight proposed frameworks for the adaptation of health-related guidelines. There is a need to evaluate these frameworks to assess the rigor, efficiency, and transparency of their proposed processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction

    PubMed Central

    Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV term selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the learned dictionary adaptively provides sparse representations of image features and effectively recovers image details. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235

  12. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    PubMed

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV term selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the learned dictionary adaptively provides sparse representations of image features and effectively recovers image details. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  13. Method for Predicting Thermal Buckling in Rails

    DOT National Transportation Integrated Search

    2018-01-01

    A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail...

  14. Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise

    PubMed Central

    Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang

    2015-01-01

The total variation (TV) regularization method is effective for image deblurring while preserving edges, but TV-based solutions usually exhibit staircase effects. To alleviate these effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV regularization term with overlapping group sparsity (OGS). Moreover, we impose a box constraint on the proposed model to obtain more accurate solutions. The model is solved within the framework of the alternating direction method of multipliers (ADMM), with an inner loop nested inside the majorization-minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method significantly improves restoration quality, in terms of both peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860

  15. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
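The first step, edge point chaining, can be sketched as grouping 8-connected edge pixels into contiguous chains. The sketch below works on a plain set of pixel coordinates and omits the Canny stage and efficiency tricks; it only illustrates the chaining idea.

```python
def chain_edge_points(edge):
    """Group 8-connected edge pixels into contiguous chains.
    `edge` is a set of (row, col) pixels, a simplified stand-in for
    the map of Canny edge points."""
    remaining, chains = set(edge), []
    while remaining:
        seed = remaining.pop()
        chain, stack = [seed], [seed]
        while stack:                      # flood-fill the connected component
            r, c = stack.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if nb in remaining:
                        remaining.remove(nb)
                        chain.append(nb)
                        stack.append(nb)
        chains.append(chain)
    return chains

# two separate strokes -> two chains (one of 3 pixels, one of 2)
edges = {(0, 0), (0, 1), (0, 2), (5, 5), (6, 6)}
chains = chain_edge_points(edges)
print(sorted(len(c) for c in chains))  # -> [2, 3]
```

The subsequent steps of the pipeline (line segment detection, border-line selection) would then operate on each chain independently.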

  16. Prediction of Human Phenotype Ontology terms by means of hierarchical ensemble methods.

    PubMed

    Notaro, Marco; Schubach, Max; Robinson, Peter N; Valentini, Giorgio

    2017-10-12

The prediction of human gene-abnormal phenotype associations is a fundamental step toward the discovery of novel genes associated with human disorders, especially when no genes are known to be associated with a specific disease. In this context the Human Phenotype Ontology (HPO) provides a standard categorization of the abnormalities associated with human diseases. While the problem of predicting gene-disease associations has been widely investigated, the related problem of gene-phenotypic feature (i.e., HPO term) associations has been largely overlooked, even though for most human genes no HPO term associations are known and despite the increasing application of the HPO to relevant medical problems. Moreover, most of the methods proposed in the literature are not able to capture the hierarchical relationships between HPO terms, thus resulting in inconsistent and relatively inaccurate predictions. We present two hierarchical ensemble methods that we formally prove to provide biologically consistent predictions according to the hierarchical structure of the HPO. The modular structure of the proposed methods, which consists of a "flat" learning first step and a hierarchical combination of the predictions in the second step, allows the predictions of virtually any flat learning method to be enhanced. The experimental results show that the hierarchical ensemble methods are able to predict novel associations between genes and abnormal phenotypes with results that are competitive with state-of-the-art algorithms and with a significant reduction of the computational complexity. Hierarchical ensembles are efficient computational methods that guarantee biologically meaningful predictions that obey the true path rule, and can be used as a tool to improve and make consistent the HPO term predictions starting from virtually any flat learning method. An implementation of the proposed methods is available as an R package from the CRAN repository.
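The second, hierarchical step can be sketched in a few lines: given flat scores on an ontology, cap each term's score by its ancestors' so that predictions obey the true path rule (a predicted term never outranks its parents). This is a minimal illustration of top-down hierarchical correction, not the paper's ensemble; the toy ontology and scores are invented for the example.

```python
def hierarchical_consistency(scores, parent):
    """Top-down correction: cap each term's score by its parent's
    (corrected) score, so an annotated term never outranks its
    ancestors -- the true path rule."""
    fixed = {}
    def resolve(term):
        if term not in fixed:
            s = scores[term]
            if parent.get(term) is not None:
                s = min(s, resolve(parent[term]))
            fixed[term] = s
        return fixed[term]
    for term in scores:
        resolve(term)
    return fixed

# toy three-term chain: root -> a -> b
scores = {"root": 0.9, "a": 0.4, "b": 0.8}
parent = {"root": None, "a": "root", "b": "a"}
res = hierarchical_consistency(scores, parent)
print(res)  # "b" is capped by its parent: {'root': 0.9, 'a': 0.4, 'b': 0.4}
```

Because the correction is a pure post-processing pass, any flat learner's scores can be fed in, which is the modularity the abstract emphasizes.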

  17. Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.

    PubMed

    Kim, Eunwoo; Park, HyunWook

    2017-02-01

    The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
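The pairwise decomposition can be sketched as a one-vs-one vote: define one classifier per class pair and let the class collecting the most pairwise wins label the sample. The threshold rules below are invented toy stand-ins for the paper's adaptive sub-classifiers.

```python
from collections import Counter

def pairwise_vote(x, pair_clfs):
    """Combine one-vs-one classifiers by majority vote. Each entry of
    pair_clfs maps a class pair to a function returning the winning
    class for sample x."""
    votes = Counter(clf(x) for clf in pair_clfs.values())
    return votes.most_common(1)[0][0]

# toy 3-class problem with threshold rules on a scalar feature
clfs = {
    ("A", "B"): lambda x: "A" if x < 5 else "B",
    ("A", "C"): lambda x: "A" if x < 2 else "C",
    ("B", "C"): lambda x: "B" if x < 8 else "C",
}
print(pairwise_vote(1, clfs))   # -> A (wins both of its pairings)
print(pairwise_vote(6, clfs))   # -> B
```

In the paper each pairwise decision is itself an ensemble of sub-classifiers over class-pair-specific voxel features; here a single rule per pair keeps the voting mechanism visible.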

  18. Adaptive characterization of recrystallization kinetics in IF steel by electron backscatter diffraction.

    PubMed

    Kim, Dong-Kyu; Park, Won-Woong; Lee, Ho Won; Kang, Seong-Hoon; Im, Yong-Taek

    2013-12-01

    In this study, a rigorous methodology for quantifying recrystallization kinetics by electron backscatter diffraction is proposed in order to reduce errors associated with the operator's skill. An adaptive criterion to determine adjustable grain orientation spread depending on the recrystallization stage is proposed to better identify the recrystallized grains in the partially recrystallized microstructure. The proposed method was applied in characterizing the microstructure evolution during annealing of interstitial-free steel cold rolled to low and high true strain levels of 0.7 and 1.6, respectively. The recrystallization kinetics determined by the proposed method was found to be consistent with the standard method of Vickers microhardness. The application of the proposed method to the overall recrystallization stages showed that it can be used for the rigorous characterization of progressive microstructure evolution, especially for the severely deformed material. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  19. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

Accurate and efficient scatter correction is essential for acquiring high-quality x-ray cone-beam CT (CBCT) images for various applications. This study demonstrates the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in deconvolution-based scatter correction for CBCT. Since data consistency in the mid-plane of a CBCT scan is primarily challenged by scatter, we used data consistency to quantify the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast of scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the SSIM by up to 96.7%, 90.5%, and 87.8%, in the XCAT, ACS head phantom, and pelvis phantom studies, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for auxiliary hardware or additional experimentation.
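The kernel-tuning loop can be illustrated with a bare-bones particle swarm optimizer. Here a toy quadratic stands in for the data-consistency cost, and the swarm size, inertia, and acceleration constants are generic textbook values, not those used in the study.

```python
import random

def pso(cost, dim, n=20, iters=60, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its own
    best position (pbest) and is pulled toward the swarm's best (gbest).
    Stands in for tuning scatter-kernel parameters against a
    data-consistency cost."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# toy "inconsistency" cost minimized at kernel parameters (2, 3)
best = pso(lambda p: (p[0] - 2) ** 2 + (p[1] - 3) ** 2, dim=2)
print([round(v, 2) for v in best])  # close to [2, 3]
```

In the paper, evaluating the cost means applying the candidate kernel, rebinning, and measuring the parallel-beam DCC violation; the swarm mechanics are unchanged.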

  20. Self-consistent description of a system of interacting phonons

    NASA Astrophysics Data System (ADS)

    Poluektov, Yu. M.

    2015-11-01

A method for the self-consistent description of phonon systems is proposed. This method generalizes the Debye model to account for phonon-phonon interaction. The notion of "self-consistent" phonons is introduced; their speed depends on the temperature and is determined by solving a non-linear equation. Within the framework of the proposed approach, the Debye energy is also a function of the temperature. The thermodynamics of the "self-consistent" phonon gas is then developed. It is shown that at low temperatures the cubic-law temperature dependence of the specific heat acquires an additional term proportional to the seventh power of the temperature, which seems to explain why the cubic law for the specific heat is observed only at relatively low temperatures. At high temperatures, the theory predicts a deviation from the Dulong-Petit law that is linear in temperature, which is observed experimentally. A modification of the melting criterion that accounts for the phonon-phonon interaction is also considered.
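The self-consistency idea, a phonon speed determined by a nonlinear equation that the speed itself enters, can be mimicked by fixed-point iteration on a toy equation. The functional form below is invented purely for illustration and is not the paper's equation.

```python
def self_consistent_speed(T, base=1.0, g=0.05, tol=1e-10):
    """Solve a toy self-consistency relation c = base - g*T/c for the
    temperature-dependent speed by fixed-point iteration. `g` plays the
    role of an interaction strength; at g = 0 the Debye-like constant
    speed `base` is recovered."""
    c = base
    for _ in range(200):
        new = base - g * T / c
        if abs(new - c) < tol:
            break
        c = new
    return c

print(self_consistent_speed(0.0))        # -> 1.0 (no thermal softening at T = 0)
print(self_consistent_speed(2.0) < 1.0)  # interaction lowers the speed -> True
```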

  1. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

Many computer models contain unknown parameters which need to be estimated using physical observations. Calibration methods based on Gaussian process models, however, may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  2. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

Measuring toxicity is one of the main steps in drug development; hence, there is a high demand for computational models that predict the toxicity effects of potential drugs. In this study, we used a dataset comprising four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Because of the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets, and the ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps: the first (sampling) step iteratively modifies the prior distribution of the minority and majority classes, and the second step applies a data cleaning method to remove the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying unknown samples according to all toxic effects in the imbalanced datasets.
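One of the baseline schemes the ITS method builds on, random over-sampling, is easy to sketch: replicate minority-class samples until every class matches the majority class size. This illustrates the rebalancing idea only; ITS itself adds the iterative prior modification and the cleaning step, which are not shown here.

```python
import random
from collections import Counter

def rebalance(samples, labels, seed=0):
    """Random over-sampling: replicate minority-class samples (with
    replacement) until every class matches the majority class size."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out = []
    for y, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        out += [(s, y) for s in group + extra]
    return out

# 2 toxic vs 8 nontoxic samples -> 8 of each after over-sampling
data = rebalance(list(range(10)), ["toxic"] * 2 + ["nontoxic"] * 8)
counts = Counter(y for _, y in data)
print(counts)  # 8 samples of each class
```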

  3. Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.

    PubMed

    Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen

    2015-05-01

    Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
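The multiple-imputation idea can be conveyed with a toy sketch: each value that fell below the detection limit is imputed by a random draw, the analysis is repeated over many imputed datasets, and the results are averaged. The uniform imputation model and the mean as the target quantity are illustrative simplifications, not the paper's estimator.

```python
import random
import statistics

def multiple_impute_mean(observed, n_below, limit, m=50, seed=0):
    """Estimate a predictor's mean when n_below values fell under the
    detection limit: impute each censored value by a uniform draw on
    (0, limit), repeat m times, and average the per-dataset estimates."""
    rng = random.Random(seed)
    ests = []
    for _ in range(m):
        imputed = [rng.uniform(0, limit) for _ in range(n_below)]
        ests.append(statistics.mean(observed + imputed))
    return statistics.mean(ests)

# three observed values plus three censored below the limit of 1.0
est = multiple_impute_mean([2.0, 3.0, 4.0], n_below=3, limit=1.0)
print(1.0 < est < 2.0)  # pulled below the naive observed mean of 3.0 -> True
```

Substituting a single fixed value (e.g., limit/2) for every censored observation is the conditional-mean-style shortcut the paper shows can bias generalized linear model estimates; repeating the draw propagates the censoring uncertainty instead.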

  4. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
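Step (1) can be illustrated with a toy structure-tensor computation: a pixel is a space-time interest point when the smallest eigenvalue of the summed spatio-temporal gradient outer products over its neighborhood is large. This is a rough sketch of the idea, not the paper's implementation; the 3x3 window and threshold are illustrative.

```python
import numpy as np

def st_interest_points(frames, thresh=1.0):
    """Mark pixels where the smallest eigenvalue of the local (3x3)
    spatio-temporal structure tensor is above a threshold."""
    f0, f1, f2 = (np.asarray(f, dtype=float) for f in frames)
    gy, gx = np.gradient(f1)            # spatial gradients of the middle frame
    gt = (f2 - f0) / 2.0                # central temporal difference
    rows, cols = f1.shape
    points = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            M = np.zeros((3, 3))        # sum of gradient outer products
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    g = np.array([gx[r + dr, c + dc],
                                  gy[r + dr, c + dc],
                                  gt[r + dr, c + dc]])
                    M += np.outer(g, g)
            if np.linalg.eigvalsh(M)[0] > thresh:
                points.append((r, c))
    return points

flat = st_interest_points([np.zeros((5, 5))] * 3)
print(flat)  # -> [] (no texture and no motion: no interest points)

frames = [np.zeros((7, 7)) for _ in range(3)]
frames[0][2, 2] = frames[1][3, 3] = frames[2][4, 4] = 10.0
moving = st_interest_points(frames)
print(len(moving) > 0)  # a moving bright blob yields interest points -> True
```

Steps (2) and (3) of the pipeline would then cluster such points into crowd regions and regress the point statistics against ground-truth counts.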

  5. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    NASA Astrophysics Data System (ADS)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of the CAD method is the correspondence between the high- and low-resolution covariances: we estimate the local covariance coefficients from an interlaced image using Wiener filtering theory and then use these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, is not very fast compared to the others. To alleviate this issue, we propose an adaptive selection approach that uses a fast deinterlacing algorithm rather than the CAD algorithm alone: a hybrid approach that switches between the conventional schemes (LA and MELA) and CAD to reduce the overall computational load. A reliable switching condition was derived from a wide set of initial training processes. The results of computer simulations show that the proposed methods outperform a number of methods presented in the literature.
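The two conventional schemes that CASA switches to can be sketched compactly. LA averages the pixels directly above and below a missing line; a MELA-style rule instead averages along the direction (left-diagonal, vertical, right-diagonal) with the smallest luminance difference, so interpolation follows edges. Both functions below are simplified sketches with illustrative data, not the paper's exact formulations.

```python
def line_average(top_line, bottom_line):
    """LA: each missing pixel is the mean of the pixels directly above
    and below it."""
    return [(a + b) / 2 for a, b in zip(top_line, bottom_line)]

def mela_pixel(top_line, bottom_line, c):
    """Simplified MELA-style rule: average along the direction with the
    smallest luminance difference, so interpolation follows edges
    instead of blurring across them."""
    candidates = [(top_line[c - 1], bottom_line[c + 1]),   # left-leaning edge
                  (top_line[c],     bottom_line[c]),       # vertical
                  (top_line[c + 1], bottom_line[c - 1])]   # right-leaning edge
    a, b = min(candidates, key=lambda p: abs(p[0] - p[1]))
    return (a + b) / 2

print(line_average([10, 20, 30], [30, 20, 10]))    # -> [20.0, 20.0, 20.0]
print(mela_pixel([10, 60, 200], [90, 40, 10], 1))  # -> 10.0 (follows the edge)
```

The CAD stage, which fits Wiener-optimal interpolation coefficients from local covariances, is far costlier per pixel, which is exactly why CASA reserves it for complex regions.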

  6. Direct optical band gap measurement in polycrystalline semiconductors: A critical look at the Tauc method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgonos, Alex; Mason, Thomas O.; Poeppelmeier, Kenneth R., E-mail: krp@northwestern.edu

    2016-08-15

The direct optical band gap of semiconductors is traditionally measured by extrapolating the linear region of the square of the absorption curve to the x-axis, and a variation of this method, developed by Tauc, has also been widely used. The application of the Tauc method to crystalline materials is rooted in misconception, and traditional linear extrapolation methods are inappropriate for degenerate semiconductors, where the occupation of conduction-band energy states cannot be ignored. A new method is proposed for extracting a direct optical band gap from the absorption spectra of degenerately-doped bulk semiconductors. This method was applied to pseudo-absorption spectra of Sn-doped In{sub 2}O{sub 3} (ITO), converted from diffuse-reflectance measurements on bulk specimens. The results of this analysis were corroborated by room-temperature photoluminescence excitation measurements, which yielded values of the optical band gap and Burstein-Moss shift that are consistent with previous studies on In{sub 2}O{sub 3} single crystals and thin films. Highlights: • The Tauc method of band gap measurement is re-evaluated for crystalline materials. • A graphical method is proposed for extracting optical band gaps from absorption spectra. • The proposed method incorporates an energy broadening term for energy transitions. • Values for ITO were self-consistent between two different measurement methods.
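The traditional extrapolation being critiqued is easy to state in code: plot (αhν)² against photon energy hν, fit the linear region, and take the x-intercept as the direct gap. The sketch below reproduces that baseline on synthetic data; it is the conventional method the paper argues against for degenerate semiconductors, not the paper's proposed replacement.

```python
import numpy as np

def direct_gap_tauc(hv, alpha, fit_window):
    """Traditional direct-gap estimate: fit the linear region of
    (alpha*h*nu)^2 versus photon energy and extrapolate to the energy
    axis; the x-intercept is the band-gap estimate."""
    y = (alpha * hv) ** 2
    lo, hi = fit_window
    mask = (hv >= lo) & (hv <= hi)
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope           # x-intercept of the linear fit

# synthetic direct-gap absorption edge with Eg = 3.6 eV
hv = np.linspace(3.0, 4.5, 200)        # photon energies (eV)
alpha = np.sqrt(np.clip(hv - 3.6, 0.0, None)) / hv
print(round(direct_gap_tauc(hv, alpha, (3.7, 4.4)), 2))  # -> 3.6
```

For a degenerately-doped sample, band filling shifts the absorption onset (the Burstein-Moss effect) and the low-energy tail no longer reflects the intrinsic edge, which is why a fixed linear extrapolation misreads the gap.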

  7. New equivalent-electrical circuit model and a practical measurement method for human body impedance.

    PubMed

    Chinen, Koyu; Kinjo, Ichiko; Zamami, Aki; Irei, Kotoyo; Nagayama, Kanako

    2015-01-01

Human body impedance analysis is an effective tool for extracting electrical information from tissues in the human body. This paper presents a new impedance measurement method using an armpit electrode and a new equivalent circuit model for the human body. The lowest impedance was measured using an LCR meter and six electrodes, including the armpit electrodes. The electrical equivalent circuit model for the cell consists of a resistance R and a capacitance C: R represents the electrical resistance of the liquid inside and outside the cell, and C represents the high-frequency conductance of the cell membrane. We propose an equivalent circuit model consisting of five parallel high-frequency-passing CR circuits. The proposed equivalent circuit reproduces the alpha distribution in the impedance measured at lower frequencies, due to ion currents outside the cell, and the beta distribution at high frequencies, due to the cell membrane and the liquid inside the cell. The values calculated using the proposed equivalent circuit model were consistent with the measured values of human body impedance.
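The proposed topology, several parallel RC sections connected in series, follows directly from circuit theory and can be evaluated at any frequency. The component values below are illustrative placeholders, not the fitted human-body values from the paper.

```python
import cmath
import math

def body_impedance(freq, stages):
    """Complex impedance of a chain of parallel RC sections in series.
    `stages` is a list of (R_ohm, C_farad) pairs."""
    w = 2.0 * math.pi * freq
    z = 0j
    for R, C in stages:
        zc = 1.0 / (1j * w * C)        # capacitor impedance
        z += (R * zc) / (R + zc)       # parallel R || C, summed in series
    return z

# five identical illustrative sections: 100 ohm in parallel with 10 nF
stages = [(100.0, 1e-8)] * 5
lo = abs(body_impedance(1e3, stages))   # ~500 ohm at 1 kHz
hi = abs(body_impedance(1e7, stages))   # capacitors short out at 10 MHz
print(lo > hi)  # impedance magnitude falls with frequency -> True
```

This frequency roll-off is the qualitative behavior behind the alpha and beta distributions the model is meant to reproduce: at low frequencies current is blocked by the membrane capacitances, while at high frequencies they conduct.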

  8. Three-step interferometric method with blind phase shifts by use of interframe correlation between interferograms

    NASA Astrophysics Data System (ADS)

    Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.

    2018-06-01

A new three-step interferometric method with blind phase shifts for retrieving phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs; the areal surface roughness and waviness PMs are extracted using a linear low-pass filter. Computer simulations, together with experiments retrieving a gauge block surface area and its areal surface roughness and waviness, confirmed the reliability of the proposed three-step method.

  9. New sulphiding method for steel and cast iron parts

    NASA Astrophysics Data System (ADS)

    Tarelnyk, V.; Martsynkovskyy, V.; Gaponova, O.; Konoplianchenko, Ie; Dovzyk, M.; Tarelnyk, N.; Gorovoy, S.

    2017-08-01

A new method is proposed for sulphiding the surfaces of steel and cast iron parts by electroerosion alloying (EEA) with a special electrode. During manufacture of the electrode, at least one recess is formed on its surface by any known means (punching, threading, pulling, etc.) and filled with sulfur as a consistent material; EEA is then performed with the resulting electrode without waiting for the consistent material to dry.

  10. Multiple disturbances classifier for electric signals using adaptive structuring neural networks

    NASA Astrophysics Data System (ADS)

    Lu, Yen-Ling; Chuang, Cheng-Long; Fahn, Chin-Shyurng; Jiang, Joe-Air

    2008-07-01

This work proposes a novel classifier to recognize multiple disturbances in electric signals of power systems. The proposed classifier consists of a series of pipeline-based processing components, including an amplitude estimator, a transient disturbance detector, a transient impulsive detector, a wavelet transform and a brand-new neural network for recognizing multiple disturbances in a power quality (PQ) event. Most previously proposed methods treat a PQ event as a single disturbance at a time; in practice, however, a PQ event often consists of several types of disturbances at the same time, so the performance of those methods may be limited in real power systems. This work considers a PQ event as a combination of several disturbances, including steady-state and transient disturbances, which is closer to the real status of a power system. Six types of commonly encountered power quality disturbances are considered for training and testing the proposed classifier, which has been tested on electric signals that contain a single disturbance or several disturbances at a time. Experimental results indicate that the proposed PQ disturbance classification algorithm achieves a high accuracy of more than 97% in various complex test cases.

  11. Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.

    PubMed

    Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori

    2016-07-10

A Shack-Hartmann wavefront sensor (SHWFS), which consists of a microlens array and an image sensor, has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range that depends on the diameter of each microlens, and the dynamic range cannot easily be expanded without decreasing the spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched with the help of their approximate displacements, measured with low spatial resolution and large dynamic range; a wavefront can thus be measured correctly even if a spot moves beyond its detection area. The adaptive spot search is realized with a special microlens array that generates both spots and discriminable patterns. The proposed method expands the dynamic range of an SHWFS with a single shot and a short processing time. Its performance is compared with that of a conventional SHWFS in optical experiments, and the dynamic range of the proposed method is quantitatively evaluated by numerical simulations.
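The basic SHWFS measurement principle behind the spot search can be sketched simply: each microlens forms a spot, and the spot's displacement from its reference position, divided by the microlens focal length, gives the local wavefront slope over that subaperture. The numbers below are illustrative, not from the paper.

```python
def spot_slopes(ref_spots, meas_spots, focal_len):
    """Local wavefront slopes from Shack-Hartmann spot displacements:
    slope = (measured spot position - reference position) / focal length,
    per subaperture, in each axis."""
    slopes = []
    for (xr, yr), (xm, ym) in zip(ref_spots, meas_spots):
        slopes.append(((xm - xr) / focal_len, (ym - yr) / focal_len))
    return slopes

# two subapertures; displacements of 1 and 2 units with focal length 5
print(spot_slopes([(0, 0), (10, 0)], [(1, 0), (10, 2)], focal_len=5))
# -> [(0.2, 0.0), (0.0, 0.4)]
```

The dynamic-range limit of a conventional SHWFS is visible in this picture: once a spot's displacement carries it outside its own subaperture's detection area, the pairing of spot to microlens is ambiguous, which is exactly what the adaptive spot search with discriminable patterns resolves.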

  12. Mean-Reverting Portfolio With Budget Constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.
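
    The design goal can be illustrated with a toy spread: combine assets with weights (under a budget normalization) and score how strongly the spread mean-reverts. Lag-1 autocorrelation is used here as a simple stand-in for the mean-reversion criteria optimized in the paper (e.g. predictability or portmanteau statistics); the price series and weights are invented:

    ```python
    def portfolio_spread(prices, w):
        """Spread series of a weighted combination of asset price series."""
        return [sum(wi * p[t] for wi, p in zip(w, prices))
                for t in range(len(prices[0]))]

    def lag1_autocorr(x):
        """Lag-1 autocorrelation: strongly negative values mean the spread
        flips around its mean quickly, i.e. strong mean reversion."""
        m = sum(x) / len(x)
        num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
        den = sum((xi - m) ** 2 for xi in x)
        return num / den

    # toy pair: asset a oscillates around asset b, so the long-short
    # spread oscillates around a constant level
    a = [10, 11, 10, 11, 10, 11, 10, 11]
    b = [10, 10, 10, 10, 10, 10, 10, 10]
    w = [1.0, -1.0]  # long-short weights under a fixed budget normalization
    s = portfolio_spread([a, b], w)
    print(lag1_autocorr(s))  # -0.875: the spread is strongly mean-reverting
    ```
    
    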

  13. Robust rotational-velocity-Verlet integration methods.

    PubMed

    Rozmanov, Dmitri; Kusalik, Peter G

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions, but it is not quaternion specific and can easily be adapted to any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
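
    The translational velocity-Verlet scheme that these rotational integrators generalize can be sketched in a few lines; the rigid-body quaternion machinery itself is beyond an abstract-level example, and the oscillator test below is purely illustrative:

    ```python
    def velocity_verlet(x, v, force, dt, steps, mass=1.0):
        """Standard velocity-Verlet: positions and velocities are both known
        at integer time steps, the time consistency the rotational variants target."""
        f = force(x)
        for _ in range(steps):
            x = x + v * dt + 0.5 * (f / mass) * dt * dt
            f_new = force(x)
            v = v + 0.5 * ((f + f_new) / mass) * dt
            f = f_new
        return x, v

    # harmonic oscillator F = -x started at x=1, v=0; the symplectic scheme
    # keeps the energy 0.5*v^2 + 0.5*x^2 near its initial value of 0.5
    x, v = velocity_verlet(1.0, 0.0, lambda q: -q, dt=0.01, steps=10000)
    energy = 0.5 * v * v + 0.5 * x * x
    print(energy)
    ```
    
    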

  14. Robust rotational-velocity-Verlet integration methods

    NASA Astrophysics Data System (ADS)

    Rozmanov, Dmitri; Kusalik, Peter G.

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.

  15. GC-ASM: Synergistic Integration of Graph-Cut and Active Shape Model Strategies for Medical Image Segmentation

    PubMed Central

    Chen, Xinjian; Udupa, Jayaram K.; Alavi, Abass; Torigian, Drew A.

    2013-01-01

    Image segmentation methods may be classified into two categories: purely image based and model based. Each of these two classes has its own advantages and disadvantages. In this paper, we propose a novel synergistic combination of the image based graph-cut (GC) method with the model based ASM method to arrive at the GC-ASM method for medical image segmentation. A multi-object GC cost function is proposed which effectively integrates the ASM shape information into the GC framework. The proposed method consists of two phases: model building and segmentation. In the model building phase, the ASM model is built and the parameters of the GC are estimated. The segmentation phase consists of two main steps: initialization (recognition) and delineation. For initialization, an automatic method is proposed which estimates the pose (translation, orientation, and scale) of the model, and obtains a rough segmentation result which also provides the shape information for the GC method. For delineation, an iterative GC-ASM algorithm is proposed which performs finer delineation based on the initialization results. The proposed methods are implemented to operate on 2D images and evaluated on clinical chest CT, abdominal CT, and foot MRI data sets. The results show the following: (a) An overall delineation accuracy of TPVF > 96%, FPVF < 0.6% can be achieved via GC-ASM for different objects, modalities, and body regions. (b) GC-ASM improves over ASM in its accuracy and precision to search region. (c) GC-ASM requires far fewer landmarks (about 1/3 of ASM) than ASM. (d) GC-ASM achieves full automation in the segmentation step compared to GC which requires seed specification and improves on the accuracy of GC. (e) One disadvantage of GC-ASM is its increased computational expense owing to the iterative nature of the algorithm. PMID:23585712

  16. GC-ASM: Synergistic Integration of Graph-Cut and Active Shape Model Strategies for Medical Image Segmentation.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A

    2013-05-01

    Image segmentation methods may be classified into two categories: purely image based and model based. Each of these two classes has its own advantages and disadvantages. In this paper, we propose a novel synergistic combination of the image based graph-cut (GC) method with the model based ASM method to arrive at the GC-ASM method for medical image segmentation. A multi-object GC cost function is proposed which effectively integrates the ASM shape information into the GC framework. The proposed method consists of two phases: model building and segmentation. In the model building phase, the ASM model is built and the parameters of the GC are estimated. The segmentation phase consists of two main steps: initialization (recognition) and delineation. For initialization, an automatic method is proposed which estimates the pose (translation, orientation, and scale) of the model, and obtains a rough segmentation result which also provides the shape information for the GC method. For delineation, an iterative GC-ASM algorithm is proposed which performs finer delineation based on the initialization results. The proposed methods are implemented to operate on 2D images and evaluated on clinical chest CT, abdominal CT, and foot MRI data sets. The results show the following: (a) An overall delineation accuracy of TPVF > 96%, FPVF < 0.6% can be achieved via GC-ASM for different objects, modalities, and body regions. (b) GC-ASM improves over ASM in its accuracy and precision to search region. (c) GC-ASM requires far fewer landmarks (about 1/3 of ASM) than ASM. (d) GC-ASM achieves full automation in the segmentation step compared to GC which requires seed specification and improves on the accuracy of GC. (e) One disadvantage of GC-ASM is its increased computational expense owing to the iterative nature of the algorithm.

  17. Detecting text in natural scenes with multi-level MSER and SWT

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Liu, Renjun

    2018-04-01

    The detection of characters in natural scenes is susceptible to factors such as complex backgrounds, variable viewing angles and diverse languages, which lead to poor detection results. Aiming at these problems, a new text detection method is proposed, consisting of two main stages: candidate region extraction and text region detection. In the first stage, the method applies multiple scale transformations of the original image and multiple thresholds of maximally stable extremal regions (MSER) so that character regions can be detected comprehensively. In the second stage, the stroke width transform (SWT) algorithm is applied to the candidate regions to obtain SWT maps, and cascaded classifiers are then used to remove non-text regions. The proposed method was evaluated on the standard ICDAR2011 benchmark dataset and on our own datasets. The experimental results showed that the proposed method achieves significant improvements compared to other text detection methods.

  18. Applying operational research and data mining to performance based medical personnel motivation system.

    PubMed

    Niaksu, Olegas; Zaptorius, Jonas

    2014-01-01

    This paper presents a methodology for creating a performance-related remuneration system in the healthcare sector that would meet requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking and a posteriori evaluation is proposed and discussed. The Priority Distribution Method is applied for unbiased weighting of the performance criteria, and data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed a method for healthcare-specific criteria selection consisting of 8 steps, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluating the outcomes of the motivational system was proposed. The described methodology for calculating performance-related payment needs practical approbation. We plan to develop semi-automated tools for monitoring institutional and personal performance indicators. The final step will be approbation of the methodology in a healthcare facility.

  19. Exploiting salient semantic analysis for information retrieval

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui

    2016-11-01

    Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has proven to be an effective way to generate conceptual representations of words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to use SSA efficiently to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard Text REtrieval Conference (TREC) collections, where the proposed models consistently outperform existing Wikipedia-based retrieval methods.
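
    The combination step, mixing BOW and conceptual representations under the language-model framework, can be illustrated by linearly interpolating two smoothed unigram models. The smoothing, interpolation weight and toy concept annotations below are assumptions for illustration, not the SSA model itself:

    ```python
    import math

    def unigram(counts, vocab_size, mu=1.0):
        """Additively smoothed unigram model from raw counts."""
        total = sum(counts.values())
        return lambda term: (counts.get(term, 0) + mu) / (total + mu * vocab_size)

    def interpolated_score(q_terms, q_concepts, doc_words, doc_concepts,
                           alpha=0.6, vocab=1000):
        """Query log-likelihood under a mix of the BOW model (weight alpha)
        and the concept model (weight 1 - alpha)."""
        pw = unigram(doc_words, vocab)
        pc = unigram(doc_concepts, vocab)
        return (alpha * sum(math.log(pw(t)) for t in q_terms)
                + (1 - alpha) * sum(math.log(pc(c)) for c in q_concepts))

    # two docs with identical word evidence for "jaguar" but different
    # concepts; the concept model disambiguates toward the car document
    doc_car = ({"jaguar": 3, "engine": 2}, {"Car": 4})
    doc_cat = ({"jaguar": 3, "forest": 2}, {"Animal": 4})
    s_car = interpolated_score(["jaguar"], ["Car"], *doc_car)
    s_cat = interpolated_score(["jaguar"], ["Car"], *doc_cat)
    print(s_car > s_cat)  # True
    ```
    
    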

  20. Infrared and visible image fusion with spectral graph wavelet transform.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo

    2015-09-01

    Infrared and visible image fusion technique is a popular topic in image analysis because it can integrate complementary information and obtain reliable and accurate description of scenes. Multiscale transform theory as a signal representation method is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on spectral graph wavelet transform (SGWT) and bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of different source images, but also excellently represents the irregular areas of the source images. On the other hand, a novel weighted average method based on bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.

  1. Validation of a method for assessing resident physicians' quality improvement proposals.

    PubMed

    Leenstra, James L; Beckman, Thomas J; Reed, Darcy A; Mundell, William C; Thomas, Kris G; Krajicek, Bryan J; Cha, Stephen S; Kolars, Joseph C; McDonald, Furman S

    2007-09-01

    Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement, but valid approaches to assessing QI proposals are lacking. We developed an instrument for assessing resident QI proposals, the Quality Improvement Proposal Assessment Tool (QIPAT-7), and determined its validity and reliability. QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised. Seven raters used the instrument to assess 45 resident QI proposals. Principal factor analysis was used to explore the dimensionality of instrument scores, and Cronbach's alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively. QIPAT-7 items comprised a single factor (eigenvalue = 3.4), suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency reliability among the items (Cronbach's alpha = 0.87) were high. This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence, and QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success, such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.
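
    Cronbach's alpha, the internal consistency statistic reported above, is computed directly from the proposals-by-items score matrix; the toy scores below are invented to show the calculation, not the study's data:

    ```python
    def cronbach_alpha(items):
        """items: one list of scores per instrument item (columns of the
        proposals x items score matrix)."""
        def var(xs):  # sample variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        k = len(items)
        n = len(items[0])
        totals = [sum(col[i] for col in items) for i in range(n)]
        return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

    # two items that score four proposals identically: perfect consistency
    a = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
    print(round(a, 6))  # 1.0
    ```
    
    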

  2. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors

    PubMed Central

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-01-01

    Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch of research in sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in all of these comparisons. PMID:29614028

  3. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    PubMed

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch of research in sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in all of these comparisons.

  4. Water supply management using an extended group fuzzy decision-making method: a case study in north-eastern Iran

    NASA Astrophysics Data System (ADS)

    Minatour, Yasser; Bonakdari, Hossein; Zarghami, Mahdi; Bakhshi, Maryam Ali

    2015-09-01

    The purpose of this study was to develop a group fuzzy multi-criteria decision-making method to be applied to rating problems in water resources management. Chen's group fuzzy TOPSIS method is extended by a difference technique to handle the uncertainties of group decision making, and the extended method is then combined with a consistency check. In the presented method, linguistic judgments are first screened via a consistency checking process, and these judgments are then used in the extended Chen's fuzzy TOPSIS method. Each expert's opinion is converted to precise mathematical numbers and, to incorporate uncertainties, the opinions of the group are converted to fuzzy numbers using three mathematical operators. The proposed method is applied to select the optimal strategy for the rural water supply of Nohoor village in north-eastern Iran as a case study and illustrative example. Sensitivity analyses over the results and a comparison of the results with the project reality showed that the proposed method offers good results for water resources projects.
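
    The crisp (non-fuzzy) core of TOPSIS that Chen's fuzzy variant extends can be sketched as follows; fuzzification, the group aggregation operators and the consistency check are omitted, and the criteria and scores are invented:

    ```python
    import math

    def topsis(matrix, weights, benefit):
        """Crisp TOPSIS: matrix[i][j] scores alternative i on criterion j,
        benefit[j] is True when larger is better. Returns closeness to the
        ideal solution (higher = better)."""
        cols = list(zip(*matrix))
        norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
        v = [[x / norms[j] * weights[j] for j, x in enumerate(row)]
             for row in matrix]
        vcols = list(zip(*v))
        ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(vcols)]
        anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(vcols)]
        scores = []
        for row in v:
            d_pos = math.sqrt(sum((x - ideal[j]) ** 2 for j, x in enumerate(row)))
            d_neg = math.sqrt(sum((x - anti[j]) ** 2 for j, x in enumerate(row)))
            scores.append(d_neg / (d_pos + d_neg))
        return scores

    # two water-supply strategies rated on cost (lower is better) and
    # quality (higher is better); the first dominates on both criteria
    scores = topsis([[2, 9], [8, 3]], [0.5, 0.5], [False, True])
    print(scores)  # [1.0, 0.0]
    ```
    
    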

  5. Quantum-Like Bayesian Networks for Modeling Decision Making

    PubMed Central

    Moreira, Catarina; Wichert, Andreas

    2016-01-01

    In this work, we explore an alternative quantum structure to perform quantum probabilistic inferences to accommodate the paradoxical findings of the Sure Thing Principle. We propose a Quantum-Like Bayesian Network, which consists of replacing classical probabilities with quantum probability amplitudes. However, since this approach suffers from an exponential growth of quantum parameters, we also propose a similarity heuristic that automatically fits the quantum parameters through vector similarities. This makes the proposed model general and predictive, in contrast to the current state-of-the-art models, which cannot be generalized to more complex decision scenarios and only provide an explanatory account of the observed paradoxes. In the end, the model that we propose consists of a nonparametric method for estimating inference effects from a statistical point of view. It is a statistical model that is simpler than the quantum dynamic and quantum-like models previously proposed in the literature. We tested the proposed network with several empirical data sets from the literature, mainly from the Prisoner's Dilemma game and the Two Stage Gambling game. The results obtained show that the proposed quantum Bayesian Network is a general method that can accommodate violations of the laws of classical probability theory and make accurate predictions regarding human decision-making in these scenarios. PMID:26858669
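
    The key quantum-like mechanism, summing probability amplitudes before squaring so that interference terms can break the classical law of total probability, can be shown in a few lines; the amplitude values and the phase are free illustrative parameters (which the paper fits via its similarity heuristic):

    ```python
    import math, cmath

    def classical_total(path_probs):
        """Classical law of total probability: path probabilities add."""
        return sum(path_probs)

    def quantum_like_total(amplitudes):
        """Quantum-like inference: amplitudes add first and the modulus is
        squared afterwards, so interference terms appear."""
        return abs(sum(amplitudes)) ** 2

    # two unobserved paths, each with classical probability 0.25
    p_classical = classical_total([0.25, 0.25])
    theta = math.pi  # interference phase: the free quantum parameter
    p_quantum = quantum_like_total([0.5, 0.5 * cmath.exp(1j * theta)])
    print(p_classical)            # 0.5
    print(p_quantum < 1e-12)      # destructive interference suppresses the outcome
    ```
    
    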

  6. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683

  7. A method for the reduction of aerodynamic drag of road vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Taylor, Larry W.; Leary, Terrance O.

    1990-01-01

    A method is proposed for the reduction of the aerodynamic drag of bluff bodies, particularly for application to road transport vehicles. This technique consists of installation of panels on the forward surface of the vehicle facing the airstream. With the help of road tests, it was demonstrated that the attachment of proposed panels can reduce aerodynamic drag of road vehicles and result in significant fuel cost savings and conservation of energy resources.

  8. Population clustering based on copy number variations detected from next generation sequencing data.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping

    2014-08-01

    Copy number variations (CNVs) can be used as significant bio-markers, and next generation sequencing (NGS) provides high resolution detection of these CNVs. But how to extract features from CNVs and further apply them to genomic studies such as population clustering has become a big challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into a source matrix and a weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs in each sample. Therefore, using NMF of CNVs one can differentiate samples from different ethnic groups, i.e. perform population clustering. To validate the approach, we applied it to the analysis of both simulated data and two real data sets from the 1000 Genomes Project. The results on the simulated data demonstrate that the proposed method can recover the true common CNVs with high quality. The results on the first real data set show that the proposed method can cluster two family trios with different ancestries into two ethnic groups, and the results on the second show that the proposed method can be applied to the whole genome with a large sample size consisting of multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
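
    The NMF decomposition at the heart of the clustering step can be sketched with plain multiplicative updates; the toy CNV feature matrix, rank and iteration count are illustrative assumptions, not the paper's data or solver:

    ```python
    import random

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def transpose(A):
        return [list(col) for col in zip(*A)]

    def nmf(V, rank, iters=200, seed=0, eps=1e-9):
        """Multiplicative-update NMF: V (samples x features) ~ W @ H with
        non-negative factors. Rows of H act as shared 'source' CNV patterns,
        and W holds each sample's weights over those patterns."""
        rng = random.Random(seed)
        n, m = len(V), len(V[0])
        W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
        H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
        for _ in range(iters):
            Wt = transpose(W)
            num = matmul(Wt, V)
            den = matmul(matmul(Wt, W), H)
            H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
                 for i in range(rank)]
            Ht = transpose(H)
            num = matmul(V, Ht)
            den = matmul(W, matmul(H, Ht))
            W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
                 for i in range(n)]
        return W, H

    # toy feature matrix: samples 0-1 share one CNV pattern, samples 2-3 another
    V = [[5, 5, 0, 0], [5, 5, 0, 0], [0, 0, 5, 5], [0, 0, 5, 5]]
    W, _ = nmf(V, rank=2)
    groups = [max(range(2), key=lambda k: row[k]) for row in W]
    print(groups[0] == groups[1] and groups[2] == groups[3]
          and groups[0] != groups[2])
    ```
    
    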

  9. Estimating nonrigid motion from inconsistent intensity with robust shape features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095

    2013-12-15

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method to a real MR image sequence also provided qualitatively appealing results, demonstrating the feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.

  10. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use the difference in gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice; we propose a scan line search to track the points on the lung contour and efficiently find rapid changes in curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated with respect to visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  11. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    PubMed

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study.

  12. Search automation of the generalized method of device operational characteristics improvement

    NASA Astrophysics Data System (ADS)

    Petrova, I. Yu; Puchkova, A. A.; Zaripova, V. M.

    2017-01-01

    The article presents brief results of an analysis of existing methods for finding the closest patents, which can be applied to determine generalized methods of improving device operational characteristics. The most widespread clustering algorithms and metrics for determining the degree of proximity between two documents were reviewed. The article proposes a technique for determining generalized methods; it has two implementation variants and consists of seven steps. This technique has been implemented in the "Patents search" subsystem of the "Intellect" system. The article also gives an example of the use of the proposed technique.
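
    A common document-proximity metric for patent clustering of the kind surveyed here is cosine similarity over term-frequency vectors; the terms and counts below are invented for illustration (the abstract does not specify which metric the subsystem uses):

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two term-frequency vectors (dicts)."""
        dot = sum(w * v.get(t, 0) for t, w in u.items())
        norm_u = math.sqrt(sum(w * w for w in u.values()))
        norm_v = math.sqrt(sum(w * w for w in v.values()))
        return dot / (norm_u * norm_v)

    # toy patent term vectors: a and b share technical vocabulary, c does not
    patent_a = {"sensor": 3, "voltage": 2, "piezo": 1}
    patent_b = {"sensor": 2, "voltage": 1, "thermal": 2}
    patent_c = {"market": 4, "pricing": 2}
    print(cosine(patent_a, patent_b) > cosine(patent_a, patent_c))  # True
    ```
    
    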

  13. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.

  14. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information in the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points to the relevance of including a priori knowledge of the problem.

  15. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer’s kernel given by either the Mahalanobis spatio-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  16. A technology mapping based on graph of excitations and outputs for finite state machines

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz; Kulisz, Józef

    2017-11-01

    A new, efficient technology mapping method for FSMs, dedicated to PAL-based PLDs, is proposed. The essence of the method consists in searching for the minimal set of PAL-based logic blocks that covers a set of multiple-output implicants describing the transition and output functions of an FSM. The method is based on a new concept of graph: the Graph of Excitations and Outputs. The proposed algorithm was tested on FSM benchmarks, and the obtained results were compared with the classical technology mapping of FSMs.

  17. A Method of Character Detection and Segmentation for Highway Guide Signs

    NASA Astrophysics Data System (ADS)

    Xu, Jiawei; Zhang, Chongyang

    2018-01-01

    In this paper, a method of character detection and segmentation for highway signs in China is proposed. It consists of four steps. Firstly, the highway sign area is detected by colour and geometric features, and candidate character regions are obtained by a multi-level projection strategy. Secondly, pseudo-character regions are removed by local binary pattern (LBP) features. Thirdly, a convolutional neural network (CNN) is used to classify target regions. Finally, adaptive projection strategies are used to segment character strings. Experimental results indicate that the proposed method achieves new state-of-the-art results.
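    The projection strategy used for locating and segmenting characters can be sketched with a simple vertical projection profile over a binarised sign region; the input format and the zero-column splitting rule below are illustrative assumptions.

```python
def segment_columns(binary_rows):
    """binary_rows: equal-length strings of '0'/'1' (a binarised sign
    region; assumed input format). Returns (start, end) column ranges of
    runs of non-empty columns, i.e. candidate character segments."""
    width = len(binary_rows[0])
    # Vertical projection: count of foreground pixels per column.
    profile = [sum(row[c] == '1' for row in binary_rows) for c in range(width)]
    segments, start = [], None
    for c, count in enumerate(profile):
        if count and start is None:
            start = c                      # segment begins
        elif not count and start is not None:
            segments.append((start, c))    # segment ends at empty column
            start = None
    if start is not None:
        segments.append((start, width))
    return segments
```

    A real pipeline would apply this at multiple levels (rows of text, then characters) and with adaptive thresholds rather than exact zero columns.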

  18. Microblog sentiment analysis using social and topic context.

    PubMed

    Zou, Xiaomei; Yang, Jing; Zhang, Jianpei

    2018-01-01

    Analyzing massive user-generated microblogs is crucial in many fields and has attracted many researchers. However, it is very challenging to process such noisy and short microblogs. Most prior works use only texts to identify sentiment polarity and assume that microblogs are independent and identically distributed, ignoring the fact that microblogs are networked data. Therefore, their performance is usually unsatisfactory. Inspired by two sociological theories (sentimental consistency and emotional contagion), in this paper we propose a new method combining social context and topic context to analyze microblog sentiment. In particular, different from previous work using direct user relations, we introduce structure similarity into the social context and propose a method to measure structure similarity. In addition, we introduce topic context to model the semantic relations between microblogs. Social context and topic context are combined through the Laplacian matrix of the graph built from these contexts, and Laplacian regularization is added to the microblog sentiment analysis model. Experimental results on two real Twitter datasets demonstrate that our proposed model outperforms baseline methods consistently and significantly.
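    One simple way to realise Laplacian regularization over a combined social/topic graph is to iteratively pull each microblog's sentiment score towards the mean of its neighbours; the Jacobi-style solver and the weight `lam` below are assumptions, not the paper's exact model.

```python
def laplacian_smooth(scores, edges, lam=0.5, iters=50):
    """Minimise sum_i (s_i - y_i)^2 + lam * sum_(i,j) (s_i - s_j)^2
    by fixed-point (Jacobi) iteration. `edges` are undirected (i, j)
    pairs from the combined social/topic graph; `scores` are the
    initial per-microblog sentiment scores y."""
    n = len(scores)
    neigh = [[] for _ in range(n)]
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    s = list(scores)
    for _ in range(iters):
        # RHS uses the previous iterate s; nodes with no neighbours keep y_i.
        s = [(scores[i] + lam * sum(s[j] for j in neigh[i]))
             / (1 + lam * len(neigh[i])) if neigh[i] else scores[i]
             for i in range(n)]
    return s
```

    On a 3-node chain with scores [1, -1, 1], the middle node is pulled towards its positive neighbours while the endpoints stay positive, illustrating the "sentimental consistency" effect.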

  19. Consistent lattice Boltzmann methods for incompressible axisymmetric flows

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei

    2016-08-01

    In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.

  20. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method estimates the parameters of Vohradský's models more effectively, with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
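    The decomposition idea, replacing one high-dimensional optimization with several two-dimensional subproblems, can be sketched by optimising one parameter pair at a time while freezing the rest. The grid-search subproblem solver and the pairing scheme below are assumptions; in the paper the two-dimensional subproblems arise from the structure of Vohradský's model.

```python
def pairwise_optimize(f, x, grid, sweeps=3):
    """Minimise f over an n-dimensional parameter vector x by repeatedly
    solving 2-D subproblems: for each parameter pair (i, i+1), grid-search
    the pair while keeping all other parameters fixed."""
    n = len(x)
    for _ in range(sweeps):
        for i in range(0, n - 1, 2):
            # Exhaustive 2-D search over the candidate grid for this pair.
            best = min((f(x[:i] + [a, b] + x[i + 2:]), a, b)
                       for a in grid for b in grid)
            x = x[:i] + [best[1], best[2]] + x[i + 2:]
    return x
```

    For a separable objective such as sum((x_i - 1)^2) this recovers the optimum exactly; for coupled objectives the sweeps act as a coordinate-descent scheme.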

  1. Estimating nonrigid motion from inconsistent intensity with robust shape features.

    PubMed

    Liu, Wenyang; Ruan, Dan

    2013-12-01

    To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst image pairs or an image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm).
Applying the proposed method to a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation are being performed.

  2. Cluster Correspondence Analysis.

    PubMed

    van de Velden, M; D'Enza, A Iodice; Palumbo, F

    2017-03-01

    A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided, and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.

  3. Enhanced cortical thickness measurements for rodent brains via Lagrangian-based RK4 streamline computation

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Oguz, Ipek; Styner, Martin

    2016-03-01

    The cortical thickness of the mammalian brain is an important morphological characteristic that can be used to investigate and observe the brain's developmental changes that might be caused by biologically toxic substances such as ethanol or cocaine. Although various cortical thickness analysis methods have been proposed that are applicable to the human brain and have developed into well-validated open-source software packages, cortical thickness analysis methods for rodent brains have not yet become as robust and accurate as those designed for human brains. Based on a previously proposed cortical thickness measurement pipeline for rodent brain analysis,1 we present an enhanced cortical thickness pipeline in terms of accuracy and anatomical consistency. First, we propose a Lagrangian-based computational approach in the thickness measurement step in order to minimize local truncation error using the fourth-order Runge-Kutta method. Second, by constructing a line object for each streamline of the thickness measurement, we can visualize the way the thickness is measured and achieve sub-voxel accuracy by performing geometric post-processing. Lastly, with emphasis on the importance of an anatomically consistent partial differential equation (PDE) boundary map, we propose an automatic PDE boundary map generation algorithm that is specific to rodent brain anatomy and does not require manual labeling. The results show that the proposed cortical thickness pipeline can produce statistically significant regions that are not observed in the previous cortical thickness analysis pipeline.
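    The Lagrangian thickness-measurement step rests on fourth-order Runge-Kutta streamline integration, which can be sketched in 2-D as follows; the vector field, step size, and step count here are illustrative (in the pipeline the field comes from the PDE boundary map between cortical surfaces).

```python
def rk4_streamline(field, start, step, n_steps):
    """Integrate a streamline through a 2-D vector field with classical
    RK4. `field(p)` returns the (vx, vy) vector at point p = (x, y)."""
    pts = [start]
    p = start
    for _ in range(n_steps):
        k1 = field(p)
        k2 = field((p[0] + 0.5 * step * k1[0], p[1] + 0.5 * step * k1[1]))
        k3 = field((p[0] + 0.5 * step * k2[0], p[1] + 0.5 * step * k2[1]))
        k4 = field((p[0] + step * k3[0], p[1] + step * k3[1]))
        # Weighted RK4 combination: O(step^5) local truncation error.
        p = (p[0] + step * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
             p[1] + step * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)
        pts.append(p)
    return pts
```

    The thickness estimate would then be the arc length of the polyline `pts` between the two cortical boundaries.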

  4. A Solution Method of Scheduling Problem with Worker Allocation by a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Osawa, Akira; Ida, Kenichi

    In the scheduling problem with worker allocation (SPWA) proposed by Iima et al., all workers are assumed to have the same skill level on every machine. In the real world, however, each worker has a different skill level on each machine. For that reason, we propose a new SPWA model in which each worker has a different skill level on each machine. To solve the problem, we propose a new GA for SPWA consisting of three new procedures: shortening of idle time, repairing infeasible solutions into feasible ones, and a new selection method for the GA. The effectiveness of the proposed algorithm is demonstrated by numerical experiments on benchmark problems for job-shop scheduling.

  5. Recognition of human activities using depth images of Kinect for biofied building

    NASA Astrophysics Data System (ADS)

    Ogawa, Ami; Mita, Akira

    2015-03-01

    These days, various functions are needed in living spaces because of the aging society, the promotion of energy conservation, and the diversification of lifestyles. To meet this requirement, we propose the "Biofied Building", a system inspired by living beings. As a key function of this system, various kinds of information are accumulated in a database using small sensor-agent robots in order to control the living space. Among the various kinds of information about living spaces, human activities in particular can serve as triggers for lighting or air-conditioning control, making customized spaces possible. Human activities are divided into two groups: activities consisting of a single behavior and activities consisting of multiple behaviors. For example, "standing up" or "sitting down" consists of a single behavior; these activities are accompanied by large motions. On the other hand, "eating" consists of several behaviors (holding the chopsticks, picking up the food, putting it in the mouth, and so on); these are continuous motions. Considering the characteristics of these two types of human activities, we use two methods individually: R transformation and variance. In this paper, we focus on these two types of human activities and propose two corresponding human activity recognition methods for constructing the living-space database of the "Biofied Building". Finally, we compare the results of both methods.

  6. Brain tumor classification and segmentation using sparse coding and dictionary learning.

    PubMed

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo

    2016-08-01

    This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and its associated ground truth segmentation: a feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray-scale voxel values of the training image data and the associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results on the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
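    As a toy illustration of dictionary-based classification, the sketch below scores each class dictionary by the best normalised correlation of any of its atoms with the input; this 1-sparse stand-in is far simpler than the paper's K-SVD and sparse-coding machinery, and the dictionaries here are made up.

```python
def classify_sparse(signal, dictionaries):
    """dictionaries: {class_name: [atom, ...]} where atoms and `signal`
    are equal-length feature vectors. Returns the class whose dictionary
    contains the atom best correlated with the signal (1-sparse coding)."""
    def corr(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
        return abs(num) / den if den else 0.0
    return max(dictionaries,
               key=lambda c: max(corr(atom, signal) for atom in dictionaries[c]))
```

    A full sparse-coding classifier would instead solve for a sparse coefficient vector (e.g. via orthogonal matching pursuit) and compare per-class reconstruction residuals.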

  7. Smoothed Particle Hydrodynamics: A consistent model for interfacial multiphase fluid flow simulations

    NASA Astrophysics Data System (ADS)

    Krimi, Abdelkader; Rezoug, Mehdi; Khelladi, Sofiane; Nogueira, Xesús; Deligant, Michael; Ramírez, Luis

    2018-04-01

    In this work, a consistent Smoothed Particle Hydrodynamics (SPH) model to deal with interfacial multiphase fluid flow simulations is proposed. A modification to the Continuum Stress Surface formulation (CSS) [1] to enhance the stability near the fluid interface is developed in the framework of the SPH method. A non-conservative first-order consistency operator is used to compute the divergence of the surface stress tensor. This formulation retains all the advantages of the one proposed by Adami et al. [2] and, in addition, can be applied to simulations with more than two fluid phases. Moreover, the generalized wall boundary conditions [3] are modified so as to be well adapted to wall-bounded multiphase flows with different densities and viscosities. We also present a particle redistribution strategy, as an extension of the damping technique presented in [3], to smooth the initial transient phase of gravitational multiphase fluid flow simulations. Several computational tests are investigated to show the accuracy, convergence and applicability of the proposed SPH interfacial multiphase model.

  8. Hybrid particle-field molecular dynamics simulation for polyelectrolyte systems.

    PubMed

    Zhu, You-Liang; Lu, Zhong-Yuan; Milano, Giuseppe; Shi, An-Chang; Sun, Zhao-Yan

    2016-04-14

    To achieve simulations on large spatial and temporal scales with high molecular chemical specificity, a hybrid particle-field method was proposed recently. This method is developed by combining molecular dynamics and self-consistent field theory (MD-SCF). The MD-SCF method has been validated by successfully predicting the experimentally observable properties of several systems. Here we propose an efficient scheme for the inclusion of electrostatic interactions in the MD-SCF framework. In this scheme, charged molecules are interacting with the external fields that are self-consistently determined from the charge densities. This method is validated by comparing the structural properties of polyelectrolytes in solution obtained from the MD-SCF and particle-based simulations. Moreover, taking PMMA-b-PEO and LiCF3SO3 as examples, the enhancement of immiscibility between the ion-dissolving block and the inert block by doping lithium salts into the copolymer is examined by using the MD-SCF method. By employing GPU-acceleration, the high performance of the MD-SCF method with explicit treatment of electrostatics facilitates the simulation study of many problems involving polyelectrolytes.

  9. A Novel Multilayered RFID Tagged Cargo Integrity Assurance Scheme

    PubMed Central

    Yang, Ming Hour; Luo, Jia Ning; Lu, Shao Yong

    2015-01-01

    To minimize cargo theft during transport, mobile radio frequency identification (RFID) grouping proof methods are generally employed to ensure the integrity of entire cargo loads. However, conventional methods cannot simultaneously generate grouping proofs for a specific group of RFID tags. Their most serious problem is that nonexistent tags can be included in the grouping proofs because of the considerable amount of time it takes to scan a large number of tags. Thus, applying grouping proof methods in the current logistics industry is difficult. To solve this problem, this paper proposes a method for generating multilayered offline grouping proofs. The proposed method provides tag anonymity; moreover, disputes between recipients and transporters over the integrity of cargo deliveries can be resolved quickly by generating grouping proofs and automatically authenticating the consistency between the receipt proof and the pick proof. The proposed method also protects against replay attacks, multi-session attacks, and concurrency attacks. Finally, experimental results verify that, compared with other methods for generating grouping proofs, the proposed method can efficiently generate offline grouping proofs involving several parties in a supply chain using mobile RFID. PMID:26512673
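    The core integrity idea, that a proof over a tag group must change if any tag is missing or altered, can be sketched with a simple hash chain. This is a hypothetical construction for illustration only; the paper's protocol additionally provides anonymity, offline verification, and resistance to the listed attacks.

```python
import hashlib

def grouping_proof(tag_ids, nonce):
    """Chain a SHA-256 hash over a nonce and every tag id, so the final
    digest changes if any tag is absent, altered, or the nonce is reused."""
    digest = nonce.encode()
    for tid in tag_ids:
        digest = hashlib.sha256(digest + tid.encode()).digest()
    return digest.hex()
```

    A verifier holding the same tag list and nonce can recompute the digest; a mismatch indicates an incomplete or tampered group.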

  10. Determination of metals in coal fly ashes using ultrasound-assisted digestion followed by inductively coupled plasma optical emission spectrometry.

    PubMed

    Pontes, Fernanda V M; Mendes, Bruna A de O; de Souza, Evelyn M F; Ferreira, Fernanda N; da Silva, Lílian I D; Carneiro, Manuel C; Monteiro, Maria I C; de Almeida, Marcelo D; Neto, Arnaldo A; Vaitsman, Delmo S

    2010-02-05

    A method for the determination of Co, Cr, Cu, Fe, Mn, Ni, Ti, V and Zn in coal fly ash samples using ultrasound-assisted digestion followed by inductively coupled plasma optical emission spectrometry (ICP-OES) is proposed. The digestion procedure consisted of sonicating the previously dried sample with hydrofluoric acid and aqua regia at 80 °C for 30 min, eliminating fluorides by heating to dryness for about 1 h, and dissolving the residue in nitric acid solution. A classical digestion method, used for comparison, consisted of adding HCl, HNO3 and HF to 1 g of sample and heating on a hot plate until dryness for about 6 h. The proposed method presents several advantages: it requires smaller amounts of sample and reagents, and it is faster. It is also advantageous when compared to published methods that likewise use an ultrasound-assisted digestion procedure: it gives lower detection limits for Co, Cu, Ni, V and Zn, and it does not require shaking during the digestion. The detection limits (µg g-1) for Co, Cr, Cu, Fe, Mn, Ni, Ti, V and Zn were 0.06, 0.37, 1.0, 25, 0.93, 0.45, 4.0, 1.7 and 4.3, respectively. The results were in good agreement with those obtained by the classical method and with reference values. The exception was Cr, which presented low recoveries in both the classical and proposed methods (83 and 87%, respectively). Also, the concentration of Cu obtained by the proposed method was significantly different from the reference value, in spite of the good recovery (91 ± 1%). Copyright 2009 Elsevier B.V. All rights reserved.

  11. Portable system of programmable syringe pump with potentiometer for determination of promethazine in pharmaceutical applications.

    PubMed

    Saleh, Tawfik A; Abulkibash, A M; Ibrahim, Atta E

    2012-04-01

    A simple, fast, automated method was developed and validated for the assay of promethazine hydrochloride in pharmaceutical formulations, based on the oxidation of promethazine by cerium in an acidic medium. A portable system, consisting of a programmable syringe pump connected to a potentiometer, was constructed. The change in potential developed during promethazine oxidation was monitored. The relevant working conditions, such as supporting electrolyte concentration, cerium(IV) concentration and flow rate, were optimized. The proposed method was successfully applied to pharmaceutical samples as well as synthetic ones. The obtained results were verified against the official British Pharmacopoeia (BP) method and comparable results were obtained. The obtained t-value indicates no significant difference between the results of the proposed and BP methods, with the proposed method having the advantages of being simple, sensitive and cost-effective.

  12. A New Shape Description Method Using Angular Radial Transform

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Min; Kim, Whoi-Yul

    Shape is one of the primary low-level image features in content-based image retrieval. In this paper we propose a new shape description method based on a rotationally invariant angular radial transform descriptor (IARTD). The IARTD is a feature vector that combines the magnitudes and aligned phases of the angular radial transform (ART) coefficients. A phase correction scheme is employed to produce the aligned phases so that the IARTD is invariant to rotation. The distance between two IARTDs is defined by combining differences in the magnitudes and aligned phases. In an experiment using the MPEG-7 shape dataset, the proposed method outperforms existing methods; the average BEP of the proposed method is 57.69%, while the average BEPs of the invariant Zernike moments descriptor and the traditional ART are 41.64% and 36.51%, respectively.
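    The phase-correction idea can be sketched as follows: rotating an image by angle t multiplies the ART coefficient of angular order m by exp(i*m*t), so subtracting m times a reference phase yields rotation-invariant phases. The choice of reference coefficient below is an assumption, not necessarily the paper's scheme.

```python
import cmath

def aligned_phases(coeffs):
    """coeffs: {(n, m): complex ART coefficient}. Returns phases with the
    rotation-dependent term removed: rotation shifts the phase of order-m
    coefficients by m*t, so subtracting m times a reference phase cancels t."""
    ref = cmath.phase(coeffs[(0, 1)])  # assumed reference: order-1 coefficient
    return {(n, m): cmath.phase(c) - m * ref for (n, m), c in coeffs.items()}
```

    Rotating all coefficients by a common angle leaves the aligned phases unchanged (up to 2*pi wrapping), which is the invariance the IARTD relies on.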

  13. Multiscale spatial and temporal estimation of the b-value

    NASA Astrophysics Data System (ADS)

    García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.

    2017-12-01

    The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of the seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals containing an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), basically consists of computing estimates of the b-value at multiple temporal and spatial scales, extracting for a given spatio-temporal point a statistical estimator of the b-value as well as an indication of the characteristic spatio-temporal scale. This approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties in both b and Mc. We applied this method to example datasets from volcanic (Tenerife, El Hierro) and tectonic (Central Italy) areas, as well as to an example application at the global scale.
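    The per-window estimate underlying such multiscale methods is commonly the Aki/Utsu maximum-likelihood b-value; a minimal sketch follows, with the magnitude-bin correction and the window selection as assumptions (the multiscale machinery of MUST-B is not reproduced here).

```python
import math

def b_value(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood Gutenberg-Richter b-value for events
    with magnitude >= mc; dm is the magnitude bin width, whose half-width
    corrects for binning: b = log10(e) / (mean(M) - (mc - dm/2))."""
    m = [x for x in mags if x >= mc]
    mean = sum(m) / len(m)
    return math.log10(math.e) / (mean - (mc - dm / 2))
```

    A multiscale scheme would evaluate this estimator over many nested spatio-temporal windows around each point and select a scale-consistent value.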

  14. Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machine classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining a DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of the Northwestern Indiana vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.

  15. Fast image interpolation via random forests.

    PubMed

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains results similar to or better than NARM while taking only 0.3% of its computation time.

  16. Design of Passive Power Filter for Hybrid Series Active Power Filter using Estimation, Detection and Classification Method

    NASA Astrophysics Data System (ADS)

    Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.

    2016-06-01

    This paper presents the design of a shunt Passive Power Filter (PPF) in a Hybrid Series Active Power Filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. This novel approach consists of the estimation, detection and classification of the signals. The proposed method is applied to estimate, detect and classify power quality (PQ) disturbances such as harmonics. This work deals with three methods: harmonic detection through the wavelet transform method, harmonic estimation by a Kalman filter algorithm, and harmonic classification by a decision tree method. Among the different types of mother wavelets for the wavelet transform method, db8 is selected as the most suitable because of its strength in capturing transient responses and its damped oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the Hybrid Series Active Power Filter (HSAPF) based on Instantaneous Reactive Power Theory (IRPT). The efficacy of the proposed method is verified in the MATLAB/SIMULINK environment as well as with an experimental setup. The obtained results confirm the superiority of the proposed methodology over FFT analysis. The newly proposed PPF is used to make the conventional HSAPF more robust and stable.
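    The harmonic-estimation step can be illustrated with a minimal scalar Kalman filter tracking a near-constant quantity such as a harmonic amplitude; q and r below are assumed process/measurement noise variances, and the paper's filter operates on the measured power signal rather than this toy input.

```python
def kalman_constant(measurements, q=1e-5, r=0.1):
    """Minimal scalar Kalman filter estimating a (near-)constant value,
    e.g. a harmonic amplitude, from noisy samples."""
    x, p = 0.0, 1.0          # initial state estimate and its variance
    for z in measurements:
        p += q               # predict: state is constant, variance grows by q
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with innovation
        p *= (1 - k)         # posterior variance
    return x
```

    With noisy samples around a true value of 5.0, the estimate converges close to 5.0 while heavily attenuating the sample-to-sample noise.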

  17. Evaluation of Beckman Coulter DxI 800 immunoassay system using clinically oriented performance goals.

    PubMed

    Akbas, Neval; Schryver, Patricia G; Algeciras-Schimnich, Alicia; Baumann, Nikola A; Block, Darci R; Budd, Jeffrey R; Gaston, S J Stephen; Klee, George G

    2014-11-01

    We evaluated the analytical performance of 24 immunoassays on the Beckman Coulter DxI 800 immunoassay systems at Mayo Clinic, Rochester, MN for trueness, precision, detection limits, linearity, and consistency (across instruments and reagent lots). Clinically oriented performance goals were defined as follows: trueness, published desirable accuracy limits; precision, published desirable biologic variation; detection limits, the 0.1 percentile of patient test values; linearity, 50% of total error; and consistency, the percentage of test values crossing key decision points. Local data were collected for precision, linearity, and consistency. Data were provided by Beckman Coulter, Inc. for trueness and detection limits. All evaluated assays except total thyroxine were within the proposed goals for trueness. Most of the assays met the proposed goals for precision (86% of intra-assay results and 75% of inter-assay results). Five assays had more than 15% of test results below the minimum detection limits. Carcinoembryonic antigen, total thyroxine and free triiodothyronine exceeded the proposed goals of ±6.3%, ±5% and ±5.7%, respectively, for dilution linearity. All evaluated assays were within the proposed goals for instrument consistency. Lot-to-lot consistency results for cortisol, ferritin and total thyroxine exceeded the proposed goals of 3.3%, 11.4% and 7% at one medical decision level, while vitamin B12 exceeded the proposed goals of 5.2% and 3.8% at two decision levels. The Beckman Coulter DxI 800 immunoassay system meets most of these proposed goals, even though these clinically focused performance goals represent relatively stringent limits. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  18. Robust finger vein ROI localization based on flexible segmentation.

    PubMed

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-10-24

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All of these factors can lead to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, in this paper we propose a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, its average processing time is 22 ms per acquired image, which satisfies the requirement of a real-time finger vein identification system.

  19. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    PubMed Central

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-01-01

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All of these factors can lead to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, in this paper we propose a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, its average processing time is 22 ms per acquired image, which satisfies the requirement of a real-time finger vein identification system. PMID:24284769

  20. Latent component-based gear tooth fault detection filter using advanced parametric modeling

    NASA Astrophysics Data System (ADS)

    Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.

    2009-10-01

    In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most suitable latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting faults of the gearbox. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method is implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of simulated gear faults. In addition, the method is used for quality inspection of the production Nissan-Junior vehicle gearbox by gear profile error detection on an industrial test bed. For evaluation, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered the filter output and Yule-Walker and Kalman filters are implemented for estimating the parameters. The results confirm the high performance of the proposed fault detection method.
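    For comparison purposes, the TAR/AR-based baseline mentioned above treats the parametric model residual as the filter output. A minimal sketch of such a residual filter (not the authors' latent-component filter), with the AR coefficients estimated by the Yule-Walker equations, might look like this; the model order and signal names are illustrative:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients from the biased autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz normal equations R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])        # a[k] multiplies x[t-k-1]

def ar_residual(x, a):
    """Filter output: one-step-ahead prediction error of the AR model."""
    order = len(a)
    x = np.asarray(x, dtype=float)
    res = np.zeros(len(x))
    for t in range(order, len(x)):
        res[t] = x[t] - np.dot(a, x[t - order:t][::-1])
    return res
```

The model would be fitted on a healthy-gearbox signal; a developing tooth fault then shows up as an increase or spike in the residual.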

  1. Superconvergent second order Cartesian method for solving free boundary problem for invadopodia formation

    NASA Astrophysics Data System (ADS)

    Gallinato, Olivier; Poignard, Clair

    2017-06-01

    In this paper, we present a superconvergent second order Cartesian method to solve a free boundary problem with two harmonic phases coupled through the moving interface. The model, recently proposed by the authors and colleagues, describes the formation of cell protrusions. The moving interface is described by a level set function and is advected at the velocity given by the gradient of the inner phase. The finite difference method proposed in this paper consists of a new stabilized ghost fluid method and second order discretizations of the Laplace operator with the boundary conditions (Dirichlet, Neumann, or Robin). Interestingly, the method for solving the harmonic subproblems is superconvergent on two levels, in the sense that the first and second order derivatives of the numerical solutions are obtained with second order accuracy, like the solution itself. We exhibit numerical criteria on the data accuracy needed to obtain such properties, and numerical simulations corroborate these criteria. In addition to these properties, we propose an appropriate extension of the level-set velocity to avoid any loss of consistency and to obtain second order accuracy for the complete free boundary problem. Interestingly, we highlight the transmission of the superconvergence properties of the static subproblems and their preservation by the dynamical scheme. Our method is also well suited for quasistatic Hele-Shaw-like or Muskat-like problems.
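    Claims of second order accuracy like the one above are typically checked numerically by halving the mesh size and measuring the observed order. The following one-dimensional illustration uses the standard 3-point Laplacian with Dirichlet conditions (a generic textbook check, not the authors' stabilized ghost fluid scheme):

```python
import numpy as np

def poisson_error(n):
    """Solve u'' = -pi^2 sin(pi x) on [0,1], u(0)=u(1)=0, with the 3-point
    Laplacian on n intervals; the exact solution is u = sin(pi x).
    Returns the max-norm error of the discrete solution."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = -np.pi**2 * np.sin(np.pi * x[1:-1])      # right-hand side, interior
    # tridiagonal matrix of (u[i-1] - 2 u[i] + u[i+1]) / h^2
    A = (np.diag(-2.0 * np.ones(n - 1)) +
         np.diag(np.ones(n - 2), 1) +
         np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# halving h should divide the error by ~4 for a second-order scheme
e1, e2 = poisson_error(40), poisson_error(80)
order = np.log2(e1 / e2)
```

The measured `order` comes out close to 2, which is the kind of evidence used to support the convergence statements in the abstract.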

  2. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134

  3. A Hybrid 3D Indoor Space Model

    NASA Astrophysics Data System (ADS)

    Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information about the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model, and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for buildings that do not have proper 2D/3D geometrical models or that lack semantic or topological information. The proposed hybrid model consists of topological, geometrical, and semantic spaces.

  4. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects structural characteristics and actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals involved. Therefore, a dual neural network method for calculating these multiple integrals is proposed in this paper. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to simulate the original (antiderivative) function. Based on the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons with the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
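    The dual-network integrator itself is not reproduced here, but the quantity being computed is the failure probability P_f, the integral of the joint density over the failure domain {g(x) < 0}. For a linear limit state with independent standard normal variables the exact answer is Φ(-β), so direct grid integration and Monte Carlo can be checked against it (limit state and grid settings are illustrative):

```python
import numpy as np
from statistics import NormalDist

# limit state g(x1, x2) = x1 + x2 + 3; failure when g < 0; X ~ N(0, I)
# exact failure probability: Phi(-beta) with beta = 3 / sqrt(2)
beta = 3.0 / np.sqrt(2.0)
exact = NormalDist().cdf(-beta)

# direct integration on a grid (the multiple integral the paper targets)
t = np.linspace(-8.0, 8.0, 801)
dx = t[1] - t[0]
pdf = np.exp(-t**2 / 2.0) / np.sqrt(2.0 * np.pi)   # standard normal density
X1, X2 = np.meshgrid(t, t)
indicator = (X1 + X2 + 3.0 < 0.0)                  # failure-domain indicator
pf_grid = np.sum(indicator * np.outer(pdf, pdf)) * dx * dx

# Monte Carlo simulation for comparison
rng = np.random.default_rng(1)
xs = rng.standard_normal((200000, 2))
pf_mc = np.mean(xs.sum(axis=1) + 3.0 < 0.0)
```

Both estimates agree with Φ(-β) to a few percent; the curse of dimensionality of the grid approach is what motivates replacing it with a learned antiderivative.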

  5. Multimodal Event Detection in Twitter Hashtag Networks

    DOE PAGES

    Yilmaz, Yasin; Hero, Alfred O.

    2016-07-01

    In this study, event detection in a multimodal Twitter dataset is considered. We treat the hashtags in the dataset as instances with two modes: text and geolocation features. The text feature consists of a bag-of-words representation. The geolocation feature consists of geotags (i.e., geographical coordinates) of the tweets. Fusing the multimodal data we aim to detect, in terms of topic and geolocation, the interesting events and the associated hashtags. To this end, a generative latent variable model is assumed, and a generalized expectation-maximization (EM) algorithm is derived to learn the model parameters. The proposed method is computationally efficient, and lends itself to big datasets. Lastly, experimental results on a Twitter dataset from August 2014 show the efficacy of the proposed method.
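    The abstract does not spell out the generative model, so the sketch below only illustrates the E-step/M-step alternation of EM on a generic two-component 1D Gaussian mixture (think of latitude values of geotags clustering around two event locations); it is not the paper's multimodal model:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (illustrative only)."""
    mu = np.array([x.min(), x.max()], dtype=float)     # spread-out init
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = (np.exp(-(x[:, None] - mu)**2 / (2.0 * sigma**2))
                / (sigma * np.sqrt(2.0 * np.pi)))
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu)**2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma
```

A generalized EM, as used in the paper, only requires the M-step to increase (not maximize) the expected log-likelihood, which is useful when closed-form updates are unavailable.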

  6. An information theory criteria based blind method for enumerating active users in DS-CDMA system

    NASA Astrophysics Data System (ADS)

    Samsami Khodadad, Farid; Abed Hodtani, Ghosheh

    2014-11-01

    In this paper, a new blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) under a multipath channel scenario is proposed. The proposed method is based on information theory criteria. Two main categories of information criteria are widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Owing to this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNRs), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-adaptive method, based on subspace analysis and a training genetic algorithm, that attains the performance of both. Moreover, our method uses only a single antenna, in contrast to previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
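    AIC/MDL enumeration is classically applied to the eigenvalues of the sample covariance matrix (the Wax-Kailath formulation): the criterion trades the likelihood of the k smallest eigenvalues being equal against a penalty on the number of free parameters. The sketch below shows that classical baseline, not the paper's subspace/genetic-algorithm variant:

```python
import numpy as np

def enumerate_sources(X, criterion="MDL"):
    """Classical information-theoretic source enumeration from snapshot
    matrix X (p sensors x N snapshots), Wax-Kailath style."""
    p, N = X.shape
    R = (X @ X.conj().T) / N                     # sample covariance
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    scores = []
    for k in range(p):
        tail = lam[k:]                           # presumed noise eigenvalues
        a = tail.mean()                          # arithmetic mean
        g = np.exp(np.mean(np.log(tail)))        # geometric mean
        ll = N * (p - k) * np.log(a / g)         # log-likelihood term
        if criterion == "AIC":
            scores.append(2.0 * ll + 2.0 * k * (2 * p - k))
        else:  # MDL: the penalty that makes the enumerator consistent
            scores.append(ll + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))
```

The differing penalty terms in the two branches are exactly the difference the abstract refers to: MDL's log(N) penalty grows with the number of snapshots, giving consistency at high SNR.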

  7. A modified belief entropy in Dempster-Shafer framework.

    PubMed

    Zhou, Deyun; Tang, Yongchuan; Jiang, Wen

    2017-01-01

    How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, existing studies mainly focus on the mass function itself, while the information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are not as efficient as they could be. In this paper, a modified belief entropy is proposed that considers the scale of the FOD and the relative scale of each focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure overcomes the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method.

  8. A modified belief entropy in Dempster-Shafer framework

    PubMed Central

    Zhou, Deyun; Jiang, Wen

    2017-01-01

    How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, existing studies mainly focus on the mass function itself, while the information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are not as efficient as they could be. In this paper, a modified belief entropy is proposed that considers the scale of the FOD and the relative scale of each focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure overcomes the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method. PMID:28481914
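    For concreteness, Deng entropy weights each focal element A by the number of its non-empty subsets, 2^|A| - 1, and reduces to Shannon entropy when all focal elements are singletons. A sketch follows; the FOD-scale correction factor exp((|A|-1)/|X|) in modified_belief_entropy is our reading of this line of work and should be checked against the paper itself:

```python
import math

def deng_entropy(masses):
    """Deng entropy of a mass function; masses maps frozenset focal
    elements to mass values."""
    e = 0.0
    for A, m in masses.items():
        if m > 0:
            e -= m * math.log2(m / (2 ** len(A) - 1))
    return e

def modified_belief_entropy(masses, fod_size):
    """Belief entropy with an FOD-scale correction factor exp((|A|-1)/|X|).
    This form of the correction is an assumption for illustration."""
    e = 0.0
    for A, m in masses.items():
        if m > 0:
            e -= m * math.log2(m / (2 ** len(A) - 1)
                               * math.exp((len(A) - 1) / fod_size))
    return e
```

For singleton focal elements both corrections vanish, so both measures coincide with Shannon entropy, which is the probability-consistency property mentioned in the abstract.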

  9. Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters

    NASA Astrophysics Data System (ADS)

    Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen

    2016-12-01

    This paper concerns fault diagnosis of centrifugal compressors based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). Qualitative models under normal and two faulty conditions have been built through analysis of the operating principle of the centrifugal compressor. To solve the problem of qualitatively describing the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window-based matching strategy is proposed, consisting of variable operating-range constraints and qualitative constraints. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults in the centrifugal compressor, seal leakage and valve sticking, has validated the targeted performance of the proposed method, showing the advantage of the fault signatures contained in thermal parameters.

  10. Consistent maximum entropy representations of pipe flow networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2017-06-01

    The maximum entropy method is used to predict flows on water distribution networks. This analysis extends the water distribution network formulation of Waldrip et al. (2016) Journal of Hydraulic Engineering (ASCE), by the use of a continuous relative entropy defined on a reduced parameter set. This reduction in the parameters that the entropy is defined over ensures consistency between different representations of the same network. The performance of the proposed reduced parameter method is demonstrated with a one-loop network case study.

  11. Finger Vein Recognition Based on a Personalized Best Bit Map

    PubMed Central

    Yang, Gongping; Xi, Xiaoming; Yin, Yilong

    2012-01-01

    Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern (LBP) based method and uses only the best bits for matching. We first present the concept of the PBBM and the algorithm for generating it. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that the PBBM achieves not only better performance but also high robustness and reliability. In addition, the PBBM can be used as a general framework for binary pattern based recognition. PMID:22438735
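    The matching step can be pictured with a generic 8-neighbour LBP code plus a Hamming distance restricted to positions flagged as reliable; this is only a sketch of the underlying idea, not the paper's PBBM construction (which learns the best bits per enrolled user):

```python
import numpy as np

def lbp8(img):
    """8-neighbour local binary pattern code for each interior pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit    # 1 if neighbour >= centre
    return code.astype(np.uint8)

def masked_hamming(a, b, mask):
    """Hamming distance between uint8 LBP code maps, counted only at pixel
    positions flagged reliable in mask (the 'best bits' idea)."""
    diff = np.bitwise_xor(a, b)
    bits = np.unpackbits(diff[..., None], axis=-1)   # 8 bits per pixel
    return int(bits[mask].sum())
```

A per-user reliability mask would be derived from enrollment samples; unstable positions are simply excluded from the distance.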

  12. Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.

    2018-04-01

    A new method for instantaneous waterline extraction is proposed in this paper, combining point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment high resolution remote sensing images of the coastal zone into small regions containing semantic information; region features are extracted by integrating the LiDAR data and the surface area of the image; initial waterlines are extracted by the α-shape algorithm; a region growing algorithm is applied for coastline refinement, with a growth rule integrating the intensity and topography of the LiDAR data; and finally the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.

  13. Finger vein recognition based on a personalized best bit map.

    PubMed

    Yang, Gongping; Xi, Xiaoming; Yin, Yilong

    2012-01-01

    Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern (LBP) based method and uses only the best bits for matching. We first present the concept of the PBBM and the algorithm for generating it. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that the PBBM achieves not only better performance but also high robustness and reliability. In addition, the PBBM can be used as a general framework for binary pattern based recognition.

  14. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration. PMID:28469385
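    Under the common binormal assumption, probit(TPR) = a + b · probit(FPR), so empirical operating points become linear after a probit transform and can be fitted by ordinary least squares. The sketch below illustrates this general idea (it is not the article's exact estimator, which handles regression covariates as well):

```python
import numpy as np
from statistics import NormalDist

def binormal_ls(fpr, tpr):
    """Least-squares fit of the binormal ROC line
    probit(TPR) = a + b * probit(FPR) from empirical operating points.
    Returns the intercept a, slope b, and the binormal AUC."""
    ppf = NormalDist().inv_cdf
    x = np.array([ppf(v) for v in fpr])
    y = np.array([ppf(v) for v in tpr])
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    auc = NormalDist().cdf(a / np.sqrt(1.0 + b * b))   # binormal AUC formula
    return a, b, auc
```

With empirical sensitivities and specificities as the "data", each threshold contributes one (probit(1 - specificity), probit(sensitivity)) point to the regression.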

  15. A proposed standard methodology for estimating the wounding capacity of small calibre projectiles or other missiles.

    PubMed

    Berlin, R H; Janzon, B; Rybeck, B; Schantz, B; Seeman, T

    1982-01-01

    A standard methodology for estimating the energy transfer characteristics of small calibre bullets and other fast missiles is proposed, consisting of firings against targets made of soft soap. The target is evaluated by measuring the size of the permanent cavity remaining in it after the shot. The method is very simple to use and does not require access to any sophisticated measuring equipment. It can be applied under all circumstances, even under field conditions. Adequate methods of calibration to ensure good accuracy are suggested. The precision and limitations of the method are discussed.

  16. Partitioning of functional gene expression data using principal points.

    PubMed

    Kim, Jaehee; Kim, Haseong

    2017-10-12

    DNA microarrays offer motivation and hope for the simultaneous study of variations in multiple genes. Gene expression is a temporal process that allows variations in expression levels with a characterized gene function over a period of time. Temporal gene expression curves can be treated as functional data since they are considered independent realizations of a stochastic process. This process requires appropriate models to identify patterns of gene functions. Partitioning of the functional data can find homogeneous subgroups of entities for the massive numbers of genes within the inherent biological networks. Therefore, it can be a useful technique for the analysis of time-course gene expression data. We propose a new self-consistent partitioning method of functional coefficients for individual expression profiles based on an orthonormal basis system. A principal-points-based functional partitioning method is proposed for time-course gene expression data. The method explores the relationship between genes using Legendre coefficients as principal points to extract the features of gene functions. Our proposed method provides high connectivity after clustering for simulated data and finds significant subsets of genes with increased connectivity. Our approach has the comparative advantages that fewer coefficients are used from the functional data and that the principal points are self-consistent for partitioning. As real data applications, we find partitioned genes in budding yeast and Escherichia coli gene expression data. The proposed method benefits from the use of principal points, dimension reduction, and the choice of an orthogonal basis system, and provides appropriately connected genes in the resulting subsets. We illustrate our method by applying it to cell-cycle-regulated time-course yeast genes and E. coli genes. The proposed method is able to identify highly connected genes and to explore the complex dynamics of biological systems in functional genomics.
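    The feature-extraction step, projecting each profile onto Legendre polynomials and then partitioning the coefficient vectors, can be sketched as below. Plain k-means with deterministic farthest-point seeding stands in for the principal-points machinery, which is an assumption for illustration:

```python
import numpy as np

def legendre_features(t, curves, deg=3):
    """Project each time-course curve onto Legendre polynomials on [-1, 1];
    the (deg+1)-dimensional coefficient vectors serve as features."""
    s = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0   # rescale to [-1, 1]
    return np.array([np.polynomial.legendre.legfit(s, c, deg) for c in curves])

def kmeans(X, k, n_iter=50):
    """Plain k-means on the coefficient features (a stand-in for the
    principal-points partitioning). Deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

The dimension reduction the abstract mentions is visible here: a 20-point profile collapses to 4 Legendre coefficients before any clustering happens.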

  17. Real-time line matching from stereo images using a nonparametric transform of spatial relations and texture information

    NASA Astrophysics Data System (ADS)

    Park, Jonghee; Yoon, Kuk-Jin

    2015-02-01

    We propose a real-time line matching method for stereo systems. To achieve real-time performance while retaining a high level of matching precision, we first propose a nonparametric transform to represent the spatial relations between neighboring lines and nearby textures as a binary stream. Since the length of a line can vary across images, the matching costs between lines are computed within an overlap area (OA) based on the binary stream. The OA is determined for each line pair by employing the properties of a rectified image pair. Finally, the line correspondence is determined using a winner-takes-all method with a left-right consistency check. To reduce the computational time requirements further, we filter out unreliable matching candidates in advance based on their rectification properties. The performance of the proposed method was compared with state-of-the-art methods in terms of the computational time, matching precision, and recall. The proposed method required 47 ms to match lines from an image pair in the KITTI dataset with an average precision of 95%. We also verified the proposed method under image blur, illumination variation, and viewpoint changes.
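    The winner-takes-all selection with a left-right consistency check can be pictured as a mutual-best test on a line-to-line cost matrix (illustrative only; in the paper the costs come from the binary-stream matching over the overlap area):

```python
import numpy as np

def left_right_consistent(cost):
    """Winner-takes-all in both directions over a cost matrix
    (rows: left-image lines, columns: right-image lines); keep a
    left-to-right match only if the right line's best match points
    back to the same left line."""
    best_l2r = cost.argmin(axis=1)   # best right line for each left line
    best_r2l = cost.argmin(axis=0)   # best left line for each right line
    return {i: int(j) for i, j in enumerate(best_l2r) if best_r2l[j] == i}
```

Asymmetric winners are simply dropped, which is how the check filters out ambiguous or occluded lines.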

  18. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and therefore acquire geometrically inconsistent rawdata sets, and thus cannot meet this requirement. This paper proposes a practical method to perform rawdata-based material decomposition in the case of inconsistent measurements. The method first generates the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and the simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
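    The approximative image-based variant mentioned above reduces, per pixel, to solving a small linear system that maps two basis-material coefficients to the two measured attenuations. A sketch (the matrix entries are made-up illustrative numbers, not calibrated values):

```python
import numpy as np

# assumed effective attenuation of the two basis materials at the two
# spectra (rows: low/high kVp, columns: material 1/2) -- illustrative only
M = np.array([[0.40, 0.20],
              [0.25, 0.18]])

def decompose(mu_low, mu_high):
    """Per-pixel basis-material coefficients from a low-kVp and a
    high-kVp attenuation image (image-based decomposition sketch)."""
    stacked = np.stack([mu_low.ravel(), mu_high.ravel()])   # 2 x npix
    coeffs = np.linalg.solve(M, stacked)                    # invert per pixel
    return coeffs[0].reshape(mu_low.shape), coeffs[1].reshape(mu_low.shape)
```

Because the inversion acts on reconstructed images rather than on the rawdata, beam hardening is baked into mu_low and mu_high; this is precisely the limitation that motivates the rawdata-based decomposition in the paper.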

  19. Improving resolution of MR images with an adversarial network incorporating images with different contrast.

    PubMed

    Kim, Ki Hwan; Do, Won-Joon; Park, Sung-Hong

    2018-05-04

    The routine MRI scan protocol consists of multiple pulse sequences that acquire images of varying contrast. Since high frequency contents such as edges are not significantly affected by image contrast, down-sampled images in one contrast may be improved by high resolution (HR) images acquired in another contrast, reducing the total scan time. In this study, we propose a new deep learning framework that uses HR MR images in one contrast to generate HR MR images from highly down-sampled MR images in another contrast. The proposed convolutional neural network (CNN) framework consists of two CNNs: (a) a reconstruction CNN for generating HR images from the down-sampled images using HR images acquired with a different MRI sequence and (b) a discriminator CNN for improving the perceptual quality of the generated HR images. The proposed method was evaluated using a public brain tumor database and in vivo datasets. The performance of the proposed method was assessed in tumor and no-tumor cases separately, with perceptual image quality being judged by a radiologist. To overcome the challenge of training the network with a small number of available in vivo datasets, the network was pretrained using the public database and then fine-tuned using the small number of in vivo datasets. The performance of the proposed method was also compared to that of several compressed sensing (CS) algorithms. Incorporating HR images of another contrast improved the quantitative assessments of the generated HR image in reference to ground truth. Also, incorporating a discriminator CNN yielded perceptually higher image quality. These results were verified in regions of normal tissue as well as tumors for various MRI sequences from pseudo k-space data generated from the public database. 
The combination of pretraining with the public database and fine-tuning with the small number of real k-space datasets enhanced the performance of CNNs in in vivo application compared to training CNNs from scratch. The proposed method outperformed the compressed sensing methods. The proposed method can be a good strategy for accelerating routine MRI scanning. © 2018 American Association of Physicists in Medicine.

  20. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach and an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance for various medical images. The effectiveness of the proposed method has been proven by comparison with other state-of-the-art medical image segmentation methods.

  1. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the sensitivity of the level set method to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction. PMID:24066016

  2. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    PubMed

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  3. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410

  4. Stepwise and stagewise approaches for spatial cluster detection

    PubMed Central

    Xu, Jiale

    2016-01-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performance of our proposed methods using simulations on idealized grids and real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power of detection. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. PMID:27246273

  5. Stepwise and stagewise approaches for spatial cluster detection.

    PubMed

    Xu, Jiale; Gangnon, Ronald E

    2016-05-01

    Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding the currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
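
The forward stepwise idea above can be sketched on a 1-D grid: repeatedly pick the interval with the largest standardized excess count, then adjust for it before searching again. This is a minimal illustration, not the authors' implementation; the scoring rule and the `max_len` and `threshold` parameters are assumptions made for the sketch.

```python
import math

def best_interval(counts, expected, max_len=5):
    """Scan all intervals of up to max_len cells and return the one with
    the largest standardized excess count (a simplified scan statistic)."""
    best = (0.0, None)
    for i in range(len(counts)):
        for j in range(i + 1, min(i + 1 + max_len, len(counts) + 1)):
            obs, exp = sum(counts[i:j]), sum(expected[i:j])
            score = (obs - exp) / math.sqrt(exp)
            if score > best[0]:
                best = (score, (i, j))
    return best

def stepwise_clusters(counts, expected, threshold=3.0):
    """Forward stepwise search: add the currently most likely cluster,
    then adjust for it (here, by resetting it to its expectation)."""
    counts = list(counts)
    found = []
    while True:
        score, interval = best_interval(counts, expected)
        if interval is None or score < threshold:
            return found
        found.append(interval)
        for k in range(*interval):
            counts[k] = expected[k]
```

A stagewise variant would instead shrink the detected interval's counts by a small fraction in each iteration rather than resetting them outright.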

  6. Real-time anomaly detection for very short-term load forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Jian; Hong, Tao; Yue, Meng

    Although the most recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulty collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. It proposes a model-based anomaly detection method that consists of two components: a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. This paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.
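
The two components named in the abstract (a regression model on recent loads plus an adaptive residual threshold) can be sketched as follows. This is a hypothetical simplification, not the paper's model: the lag-mean predictor and the parameters `N_LAGS` and `K` are illustrative assumptions.

```python
from statistics import mean, pstdev

N_LAGS = 3   # assumed number of lagged loads used by the toy predictor
K = 3.0      # assumed threshold width in residual standard deviations

def lag_mean_residuals(series, n_lags=N_LAGS):
    """Residuals of a toy stand-in for the dynamic regression model:
    predict each load as the mean of the previous n_lags loads."""
    return [series[t] - mean(series[t - n_lags:t])
            for t in range(n_lags, len(series))]

def detect_anomalies(series, n_lags=N_LAGS, k=K):
    """Flag time steps whose residual exceeds an adaptive threshold of
    k residual standard deviations."""
    residuals = lag_mean_residuals(series, n_lags)
    sigma = pstdev(residuals) or 1e-9   # guard against zero variance
    return [(t, abs(r) > k * sigma)
            for t, r in zip(range(n_lags, len(series)), residuals)]
```

In a real-time setting the threshold would be updated over a rolling window rather than from the full residual history as done here.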

  7. Real-time anomaly detection for very short-term load forecasting

    DOE PAGES

    Luo, Jian; Hong, Tao; Yue, Meng

    2018-01-06

    Although the most recent load information is critical to very short-term load forecasting (VSTLF), power companies often have difficulty collecting the most recent load values accurately and in a timely manner for VSTLF applications. This paper tackles the problem of real-time anomaly detection in the most recent load information used by VSTLF. It proposes a model-based anomaly detection method that consists of two components: a dynamic regression model and an adaptive anomaly threshold. The case study is developed using data from ISO New England. This paper demonstrates that the proposed method significantly outperforms three other anomaly detection methods, including two methods commonly used in the field and one state-of-the-art method used by a winning team of the Global Energy Forecasting Competition 2014. Lastly, a general anomaly detection framework is proposed for future research.

  8. Portable system of programmable syringe pump with potentiometer for determination of promethazine in pharmaceutical applications

    PubMed Central

    Saleh, Tawfik A.; Abulkibash, A.M.; Ibrahim, Atta E.

    2011-01-01

    A simple, fast, automated method was developed and validated for the assay of promethazine hydrochloride in pharmaceutical formulations, based on the oxidation of promethazine by cerium in an acidic medium. A portable system, consisting of a programmable syringe pump connected to a potentiometer, was constructed. The change in potential developed during promethazine oxidation was monitored. The relevant working conditions, such as supporting electrolyte concentration, cerium(IV) concentration and flow rate, were optimized. The proposed method was successfully applied to pharmaceutical samples as well as synthetic ones. The results were verified against the official British Pharmacopoeia (BP) method and comparable results were obtained. The obtained t-value indicates no significant differences between the results of the proposed and BP methods, with the advantage that the proposed method is simple, sensitive and cost effective. PMID:23960787

  9. Development of a classification method for a crack on a pavement surface images using machine learning

    NASA Astrophysics Data System (ADS)

    Hizukuri, Akiyoshi; Nagata, Takeshi

    2017-03-01

    The purpose of this study is to develop a machine learning method for classifying cracks in pavement surface images, in order to reduce maintenance costs. Our database consists of 3500 pavement surface images, comprising 800 crack and 2700 normal pavement surface images. The pavement surface images are first decomposed into several sub-images using a discrete wavelet transform (DWT). We then calculate the wavelet sub-band histogram from each sub-image at each level. A support vector machine (SVM) with the computed wavelet sub-band histograms is employed to distinguish between crack and normal pavement surface images. The accuracies of the proposed classification method are 85.3% for crack and 84.4% for normal pavement images. The proposed classification method achieved high performance and would therefore be useful in maintenance inspection.
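
The feature extraction step described above (DWT decomposition followed by sub-band histograms) might be sketched as follows, using a hand-rolled one-level 2-D Haar transform as a stand-in for the paper's DWT. The band choice and bin count are assumptions; the resulting vector would then be fed to an SVM classifier.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (a stand-in for the paper's DWT step),
    returning approximation and detail sub-bands LL, LH, HL, HH."""
    img = np.asarray(img, float)
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def subband_histogram_features(img, bins=16):
    """Concatenate normalized histograms of each sub-band into one
    feature vector (the input an SVM would see)."""
    feats = []
    for band in haar_dwt2(img):
        h, _ = np.histogram(band, bins=bins)
        feats.append(h / h.sum())
    return np.concatenate(feats)
```

A smooth (crack-free) patch concentrates its energy in the LL band, while a crack produces outliers in the detail-band histograms, which is what makes this representation discriminative.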

  10. Statistical analysis of loopy belief propagation in random fields

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki

    2015-10-01

    Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.

  11. Measurement of the Microwave Refractive Index of Materials Based on Parallel Plate Waveguides

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Pei, J.; Kan, J. S.; Zhao, Q.

    2017-12-01

    An electric field scanning apparatus based on a parallel plate waveguide method is constructed, which collects amplitude and phase matrices as a function of relative position. On the basis of these data, a method for calculating the refractive index of the measured wedge samples is proposed in this paper. The measurement and calculation results for different PTFE samples reveal that the refractive index measured by the apparatus is substantially consistent with the refractive index inferred from the permittivity of the sample. The proposed refractive index calculation method is a competitive method for characterizing the refractive index of materials with positive refractive index. Since the apparatus and method can measure and calculate arbitrary directions of microwave propagation, it is believed that both can be applied to negative-refractive-index materials, such as metamaterials or “left-handed” materials.

  12. A robust two-way semi-linear model for normalization of cDNA microarray data

    PubMed Central

    Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento

    2005-01-01

    Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789

  13. Supervised group Lasso with applications to microarray data analysis

    PubMed Central

    Ma, Shuangge; Song, Xiao; Huang, Jian

    2007-01-01

    Background A tremendous amount of effort has been devoted to identifying genes for diagnosis and prognosis of diseases using microarray gene expression data. It has been demonstrated that gene expression data have cluster structure, where the clusters consist of co-regulated genes which tend to have coordinated functions. However, most available statistical methods for gene selection do not take the cluster structure into consideration. Results We propose a supervised group Lasso approach that takes into account the cluster structure in gene expression data for gene selection and predictive model building. For gene expression data without biological cluster information, we first divide genes into clusters using the K-means approach and determine the optimal number of clusters using the Gap method. The supervised group Lasso consists of two steps. In the first step, we identify important genes within each cluster using the Lasso method. In the second step, we select important clusters using the group Lasso. Tuning parameters are determined using V-fold cross validation at both steps to allow for further flexibility. Prediction performance is evaluated using leave-one-out cross validation. We apply the proposed method to disease classification and survival analysis with microarray data. Conclusion We analyze four microarray data sets using the proposed approach: two cancer data sets with binary cancer occurrence as outcomes and two lymphoma data sets with survival outcomes. The results show that the proposed approach is capable of identifying a small number of influential gene clusters and important genes within those clusters, and has better prediction performance than existing methods. PMID:17316436

  14. Metabolic network visualization eliminating node redundance and preserving metabolic pathways

    PubMed Central

    Bourqui, Romain; Cottret, Ludovic; Lacroix, Vincent; Auber, David; Mary, Patrick; Sagot, Marie-France; Jourdan, Fabien

    2007-01-01

    Background The tools available to draw and manipulate representations of metabolism are usually restricted to metabolic pathways. This limitation becomes problematic when studying processes that span several pathways. The various attempts that have been made to draw genome-scale metabolic networks suffer from two shortcomings: (1) they do not use contextual information, which leads to dense, hard-to-interpret drawings; (2) they impose very constrained standards, which implies, in particular, duplicating nodes, making topological analysis considerably more difficult. Results We propose a method, called MetaViz, which enables drawing a genome-scale metabolic network while also taking into account its structuring into pathways. This method consists of two steps: a clustering step, which addresses the pathway overlap problem, and a drawing step, which consists of drawing the clustered graph and each cluster. Conclusion The method we propose is original and addresses new drawing issues arising from the no-duplication constraint. We do not propose a single drawing but rather several alternative ways of presenting metabolism, depending on the pathway on which one wishes to focus. We believe that this provides a valuable tool to explore the pathway structure of metabolism. PMID:17608928

  15. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  16. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and the control of microelectromechanical system sensors. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique for the coarse frequency estimation (locating the peak of the FFT amplitude spectrum) is more efficient than conventional searching methods. The proposed estimation algorithm therefore requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better overall performance than conventional frequency estimation methods.

  17. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  18. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method to five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast-equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
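
The core correction step implied by the SPR model can be written compactly: since measured = primary × (1 + SPR), the primary signal is recovered by division. A minimal sketch (the per-pixel SPR modeling and projection handling in the paper are omitted):

```python
def correct_projection(measured, spr):
    """Recover the primary signal from a projection contaminated by
    scatter: measured = primary * (1 + SPR), so primary = measured / (1 + SPR),
    applied element-wise with a per-element SPR estimate."""
    return [m / (1.0 + s) for m, s in zip(measured, spr)]
```

The corrected projections would then be fed to the usual cone-beam reconstruction, which is where the reduced cupping shows up.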

  19. An automatic iterative decision-making method for intuitionistic fuzzy linguistic preference relations

    NASA Astrophysics Data System (ADS)

    Pei, Lidan; Jin, Feifei; Ni, Zhiwei; Chen, Huayou; Tao, Zhifu

    2017-10-01

    As a new preference structure, the intuitionistic fuzzy linguistic preference relation (IFLPR) was recently introduced to efficiently deal with situations in which the membership and non-membership are represented as linguistic terms. In this paper, we study the issues of additive consistency and the derivation of the intuitionistic fuzzy weight vector of an IFLPR. First, the new concepts of order consistency, additive consistency and weak transitivity for IFLPRs are introduced, followed by a discussion of the characterisation of additive consistent IFLPRs. Then, a parameterised transformation approach is investigated to convert the normalised intuitionistic fuzzy weight vector into additive consistent IFLPRs. After that, a linear optimisation model is established to derive the normalised intuitionistic fuzzy weights for IFLPRs, and a consistency index is defined to measure the deviation degree between an IFLPR and its additive consistent IFLPR. Furthermore, we develop an automatic iterative decision-making method to improve IFLPRs with unacceptable additive consistency until the adjusted IFLPRs are acceptably additive consistent, which helps the decision-maker to obtain reasonable and reliable decision-making results. Finally, an illustrative example is provided to demonstrate the validity and applicability of the proposed method.

  20. Fine Output Voltage Control Method considering Time-Delay of Digital Inverter System for X-ray Computed Tomography

    NASA Astrophysics Data System (ADS)

    Shibata, Junji; Kaneko, Kazuhide; Ohishi, Kiyoshi; Ando, Itaru; Ogawa, Mina; Takano, Hiroshi

    This paper proposes a new output voltage control method for an inverter system with time-delay and a nonlinear load. In the next-generation X-ray computed tomography medical device (X-ray CT), which uses a contactless power transfer method, the feedback signal often contains a time-delay due to AD/DA conversion and error detection/correction time. When the PID controller of the inverter system is subject to the adverse effects of this time-delay, it often exhibits overshoot and an oscillatory response. To overcome this problem, this paper proposes a compensation method based on the Smith predictor for an inverter system having a time-delay and nonlinear loads, namely the diode bridge rectifier and X-ray tube. The proposed compensation method consists of a hybrid Smith predictor system based on an equivalent analog circuit and a DSP. The experimental results confirm the validity of the proposed system.

  1. Automatic choroid cells segmentation and counting based on approximate convexity and concavity of chain code in fluorescence microscopic image

    NASA Astrophysics Data System (ADS)

    Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu

    2015-03-01

    In this paper, we propose a method based on the Freeman chain code to automatically segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and a morphological transform are applied to reduce noise. Second, the boundary information is used to generate the Freeman chain codes. Third, concave points are found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting are completed based on the number of concave points and the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopy cell images, achieving an average true positive rate (TPR) of 98.13% and an average false positive rate (FPR) of 4.47%. These preliminary results show the feasibility and efficiency of the proposed method.
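
The third step (finding concave points from chain-code differences) can be sketched as follows; the wrap rule and the sign convention for a counter-clockwise contour are illustrative assumptions, not the paper's exact criterion.

```python
def chain_code_turns(codes):
    """Signed turn at each boundary vertex: the difference of consecutive
    8-directional Freeman chain codes, wrapped into the range [-4, 3].
    Positive turns are counter-clockwise, negative turns clockwise."""
    return [(b - a + 4) % 8 - 4 for a, b in zip(codes, codes[1:])]

def concave_points(codes):
    """Indices where a counter-clockwise-traced contour turns clockwise,
    i.e. candidate concavities (such as the pinch between touching cells)."""
    return [i + 1 for i, d in enumerate(chain_code_turns(codes)) if d < 0]
```

Clusters of touching cells show up as boundaries with concave points, so counting the concavities (together with area and shape) indicates how many cells a connected region should be split into.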

  2. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    Point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method comprises three main parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on changes in the magnitude of multiscale curvatures obtained using principal component analysis. Then a feature descriptor is proposed for each key point, consisting of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the descriptor similarity of key points in the source and target point clouds. Correspondences are optimized using a random sample consensus algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm has faster calculation speed, higher registration accuracy, and better noise robustness.
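
The final SVD step is the standard Kabsch solution for the least-squares rigid transform given optimized correspondences; a minimal sketch:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping corresponding
    points src onto dst (Kabsch algorithm): SVD of the cross-covariance
    of the centered point sets."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Because the correspondences have already been filtered by the consensus and clustering steps, this closed-form solve is run once rather than inside an iterative loop.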

  3. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of the kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines the active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo-3D segmentation method is employed for kidney initialization, in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm, which synergistically combines the AAM and LW methods, is proposed for the AAM optimization. A multi-object strategy is applied to help the object initialization, and 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT datasets. The preliminary results showed the feasibility and efficiency of the proposed method.

  4. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  5. Possible world based consistency learning model for clustering and classifying uncertain data.

    PubMed

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, so their effectiveness relies heavily on the post-processing method and their efficiency is also limited. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and for classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which makes full use of the information across different possible worlds and thereby improves clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly, without any post-processing procedure. Furthermore, for the clustering and classification tasks, we derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real-world uncertain datasets show that the proposed model outperforms state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
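    The rank constraint mentioned above rests on a standard spectral-graph fact: the multiplicity of the zero eigenvalue of the graph Laplacian equals the number of connected components of the affinity graph. A minimal sketch of that fact on a toy affinity matrix (not the paper's learned consensus matrix):

```python
import numpy as np

# Toy affinity matrix with two connected components (two blocks).
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

D = np.diag(W.sum(axis=1))
L = D - W                              # unnormalized graph Laplacian
eigvals = np.linalg.eigvalsh(L)        # ascending, real (L is symmetric)
n_components = int(np.sum(eigvals < 1e-8))
print(n_components)
```

Forcing rank(L) = n − c therefore pins the number of clusters c directly, which is why no post-processing step is needed.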

  6. Addressable multi-nozzle electrohydrodynamic jet printing with high consistency by multi-level voltage method

    NASA Astrophysics Data System (ADS)

    Pan, Yanqiao; Huang, YongAn; Guo, Lei; Ding, Yajiang; Yin, Zhouping

    2015-04-01

    It is critical and challenging to achieve individual jetting ability and high consistency in multi-nozzle electrohydrodynamic jet printing (E-jet printing). We propose a multi-level voltage method (MVM) to implement addressable E-jet printing with multiple parallel nozzles at high consistency. The fabricated multi-nozzle printhead for the MVM consists of three parts: a PMMA holder, stainless steel capillaries (27G, outer diameter 400 μm) and an FR-4 extractor layer. The key to the MVM is controlling the maximum meniscus electric field on each nozzle. Individual jetting control can be implemented when the rings under the jetting nozzles are at 0 kV and the other rings are at 0.5 kV. The onset electric field for each nozzle is ˜3.4 kV/mm by numerical simulation. Furthermore, a series of printing experiments, combined with finite element analyses, is performed to show the advantage of the MVM in printing consistency over the "one-voltage method" and the "improved E-jet method". The good dimensional consistency (274 μm, 276 μm, 280 μm) and position consistency of the droplet array on a hydrophobic Si substrate verify the enhancements. These results show that the MVM is an effective technique for implementing addressable E-jet printing with multiple parallel nozzles at high consistency.

  7. Model-based document categorization employing semantic pattern analysis and local structure clustering

    NASA Astrophysics Data System (ADS)

    Fume, Kosei; Ishitani, Yasuto

    2008-01-01

    We propose a document categorization method based on a document model that can be defined externally for each task and that categorizes Web content or business documents into a target category in accordance with the similarity of the model. The main feature of the proposed method consists of two aspects of semantics extraction from an input document. The semantics of terms are extracted by the semantic pattern analysis and implicit meanings of document substructure are specified by a bottom-up text clustering technique focusing on the similarity of text line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.

  8. Sieve estimation of Cox models with latent structures.

    PubMed

    Cao, Yongxiu; Huang, Jian; Liu, Yanyan; Zhao, Xingqiu

    2016-12-01

    This article considers sieve estimation in the Cox model with an unknown regression structure based on right-censored data. We propose a semiparametric pursuit method to simultaneously identify and estimate linear and nonparametric covariate effects based on B-spline expansions through a penalized group selection method with concave penalties. We show that the estimators of the linear effects and the nonparametric component are consistent. Furthermore, we establish the asymptotic normality of the estimator of the linear effects. To compute the proposed estimators, we develop a modified blockwise majorization descent algorithm that is efficient and easy to implement. Simulation studies demonstrate that the proposed method performs well in finite sample situations. We also use the primary biliary cirrhosis data to illustrate its application. © 2016, The International Biometric Society.

  9. Novel Imaging Method of Continuous Shear Wave by Ultrasonic Color Flow Mapping

    NASA Astrophysics Data System (ADS)

    Yamakoshi, Yoshiki; Yamamoto, Atsushi; Yuminaka, Yasushi

    Shear wave velocity measurement is a promising method for evaluating tissue stiffness. Several methods have been developed to measure shear wave velocity; however, it is difficult to obtain a quantitative shear wave image in real-time with a low-cost system. In this paper, a novel shear wave imaging method for continuous shear waves is proposed. This method uses the color flow imaging mode of an ultrasonic imaging system to obtain the shear wave's wavefront map. Two conditions are required, a shear wave frequency condition and a shear wave displacement amplitude condition, but neither is a severe restriction in most applications. Using the proposed method, the shear wave velocity of the trapezius muscle is measured. The result is consistent with the velocity calculated from the shear elastic modulus measured by the ARFI method.
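    Once a wavefront map is available, the shear wave speed follows from c = f·λ: the vibration frequency is known and the wavelength is read off the wavefront spacing. A minimal sketch on a synthetic 1-D wavefront snapshot, estimating the wavelength from the FFT peak (all numbers are illustrative, not from the paper):

```python
import numpy as np

f = 200.0                      # shear wave vibration frequency [Hz] (assumed)
c_true = 4.0                   # true shear wave speed [m/s] (for the synthetic data)
x = np.linspace(0, 0.08, 512)  # lateral positions [m]
wavelength = c_true / f
snapshot = np.cos(2 * np.pi * x / wavelength)  # wavefront map at one instant

# Estimate the spatial frequency from the FFT peak, then recover the velocity.
spectrum = np.abs(np.fft.rfft(snapshot))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])   # cycles per metre
k_est = freqs[np.argmax(spectrum[1:]) + 1]       # skip the DC bin
c_est = f / k_est
print(c_est)
```

In practice the wavefront map is 2-D and noisy, but the same c = f/k relation underlies the velocity estimate.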

  10. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented called double topological relationship consistency (DCTR). The combination of double topological configuration includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, overcoming many problems of traditional methods through its strong invariance to changes in scale, rotation, or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been placed in very different orientations. The epipolar geometry can also be recovered using RANSAC, by far the most widely adopted method. With this method, we obtain correspondences with high precision on wide baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
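    RANSAC, mentioned above for recovering the epipolar geometry, repeatedly fits a model to a minimal random sample and keeps the hypothesis with the most inliers. A minimal sketch of the principle on a synthetic line-fitting problem (the paper's case fits a fundamental matrix from putative matches instead; data, trial count and inlier threshold here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "correspondences": 80 inliers on y = 2x + 1, plus 20 gross outliers
# standing in for mismatched feature pairs.
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.05, 100)
y[:20] += rng.uniform(5, 20, 20)          # corrupt the first 20 points

best_inliers = np.zeros(100, dtype=bool)
for _ in range(200):
    i, j = rng.choice(100, 2, replace=False)   # minimal sample: 2 points
    if x[i] == x[j]:
        continue
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    inliers = np.abs(y - (a * x + b)) < 0.3    # residual threshold (assumed)
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

print(int(best_inliers.sum()))
```

The consensus set of the best hypothesis isolates the true matches; for epipolar geometry the minimal sample is 7 or 8 correspondences and the residual is a point-to-epipolar-line distance.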

  11. Novel method for detecting the hadronic component of extensive air showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gromushkin, D. M., E-mail: DMGromushkin@mephi.ru; Volchenko, V. I.; Petrukhin, A. A.

    2015-05-15

    A novel method for studying the hadronic component of extensive air showers (EAS) is proposed. The method is based on recording thermal neutrons accompanying EAS with en-detectors that are sensitive to two EAS components: an electromagnetic (e) component and a hadron component in the form of neutrons (n). In contrast to hadron calorimeters used in some arrays, the proposed method makes it possible to record the hadronic component over the whole area of the array. The efficiency of a prototype array that consists of 32 en-detectors was tested for a long time, and some parameters of the neutron EAS component were determined.

  12. Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method

    NASA Astrophysics Data System (ADS)

    Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong

    2017-09-01

    This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate in multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, sample complexity is largely reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as other disciplines where PR is needed.
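    The Wirtinger flow core that HWF builds on is gradient descent on the intensity mismatch, using the Wirtinger derivative of f(z) = (1/2m) Σ_k (|a_k* z|² − y_k)². A minimal sketch with random Gaussian measurements; for brevity it starts near the truth instead of using the spectral initializer, and the step size is an ad-hoc heuristic rather than the schedule from the Wirtinger flow literature:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 128
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)     # true signal
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2                                       # intensity-only data

# Gradient descent on the Wirtinger gradient; initialized near the truth
# (the full method uses a spectral initializer instead).
z = x + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
mu = 0.1 / np.mean(y)                                        # heuristic step size
for _ in range(500):
    Az = A @ z
    grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m     # dbar f / dbar z
    z = z - mu * grad

rel_err = np.linalg.norm(np.abs(A @ z) - np.sqrt(y)) / np.linalg.norm(np.sqrt(y))
print(rel_err)
```

The residual is measured on the magnitudes, so the global phase ambiguity inherent to phase retrieval does not affect the error metric.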

  13. The development of a revised version of multi-center molecular Ornstein-Zernike equation

    NASA Astrophysics Data System (ADS)

    Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi

    2012-04-01

    Ornstein-Zernike (OZ)-type theory is a powerful tool for obtaining the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computation of 3D solvation structure. The distribution function in this method consists of two components, namely a reference part and a residue part. Several types of function were examined for the reference part to investigate the numerical robustness of the method. As benchmarks, the method is applied to water, benzene in aqueous solution, and a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.

  14. Aveiro method in reproducing kernel Hilbert spaces under complete dictionary

    NASA Astrophysics Data System (ADS)

    Mai, Weixiong; Qian, Tao

    2017-12-01

    The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the determination of uniqueness sets in the underlying RKHS. In fact, in general spaces, uniqueness sets are not easy to identify, let alone the convergence speed of the Aveiro Method. To avoid these difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact, we do more: the new Aveiro Method is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), involving completion of a given dictionary. The new method is called the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element of the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.
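    The matching pursuit idea underlying the approach greedily picks, at each step, the dictionary atom most correlated with the current residual and subtracts its contribution. A minimal sketch over a random (not kernel-derivative) dictionary, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_atoms = 64, 256
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)            # normalized dictionary atoms

# A signal that is sparse in the dictionary: two active atoms.
x = 2.0 * D[:, 3] - 1.5 * D[:, 100]

# Matching pursuit: repeatedly pick the atom most correlated with the residual.
residual = x.copy()
coeffs = np.zeros(n_atoms)
for _ in range(10):
    corr = D.T @ residual
    k = int(np.argmax(np.abs(corr)))
    coeffs[k] += corr[k]
    residual -= corr[k] * D[:, k]

rel_err = np.linalg.norm(x - D @ coeffs) / np.linalg.norm(x)
print(rel_err)
```

P-OGA refines this greedy loop by pre-orthogonalizing the selected atoms; the sketch above shows only the plain matching pursuit step.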

  15. Determination of Optimal Heat-Storage Thickness of Layer for “Smart Wall” by Methods of Nonlinear Heat Conduction Equations for Phase-transition Materials

    NASA Astrophysics Data System (ADS)

    Pospelova, I.

    2017-11-01

    The article suggests an original way of maintaining the heat load and its compensation for a microclimate system by proposing the "Smart Wall". The construction consists of specially combined composite materials, including phase-transition materials. A method for determining the layer thickness for a given accumulation time is proposed. By varying the thickness and composition of the layer, it is possible to achieve a low thermal conductivity coefficient and to obtain various functional characteristics of the enclosing structures.

  16. Design and application of a small size SAFT imaging system for concrete structure

    NASA Astrophysics Data System (ADS)

    Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian

    2011-07-01

    A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using the synthetic aperture focusing technique (SAFT). A low-cost ultrasonic sensor array consisting of 12 commercially available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of the different transducers. The controlling devices for the array scan as well as the virtual instrument for SAFT imaging are designed. In the coarse scan mode, with a scan step of 50 mm, the system can quickly produce an image of a cross section of 600 mm (L) × 300 mm (D) in one measurement. In the refined scan mode, the system can reduce the scan step and image the same cross section by moving the sensor array several times. Experiments on a staircase specimen, a concrete slab with an embedded target, and a building floor with an underground pipe line all verify the efficiency of the proposed method.

  17. Hydrological change: Towards a consistent approach to assess changes on both floods and droughts

    NASA Astrophysics Data System (ADS)

    Quesada-Montano, Beatriz; Di Baldassarre, Giuliano; Rangecroft, Sally; Van Loon, Anne F.

    2018-01-01

    Several studies have found that the frequency, magnitude and spatio-temporal distribution of droughts and floods have significantly increased in many regions of the world. Yet, most of the methods used to detect trends in hydrological extremes 1) focus on either floods or droughts, and/or 2) base their assessment on characteristics that, even though useful for trend identification, cannot be directly used in decision making, e.g. integrated water resources management and disaster risk reduction. In this paper, we first discuss the need for a consistent approach to assess changes in both floods and droughts, and then propose a method based on the theory of runs and threshold levels. Flood and drought changes were assessed in terms of frequency, length and surplus/deficit volumes. This paper also presents an example application using streamflow data from two hydrometric stations along the Po River basin (Italy), Piacenza and Pontelagoscuro, and then discusses the opportunities and challenges of the proposed method.
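    The theory of runs with threshold levels, as used above, defines a drought (flood) event as a run of consecutive time steps below (above) a threshold, characterized by its frequency, length and deficit (surplus) volume. A minimal sketch on a synthetic daily series (thresholds and data are illustrative, not from the Po River application):

```python
import numpy as np

# Daily streamflow series (synthetic, m^3/s).
q = np.array([5, 4, 3, 2, 2, 3, 6, 9, 12, 9, 6, 4, 3, 2, 1, 2, 4, 5], float)
q_low, q_high = 3.0, 8.0   # drought and flood thresholds (assumed percentiles)

def runs(series, threshold, below=True):
    """Theory of runs: return (event count, total length, total volume)."""
    exceed = series < threshold if below else series > threshold
    volume = np.sum(np.where(exceed, np.abs(series - threshold), 0.0))
    # Count runs of consecutive True values via edges of the indicator.
    edges = np.diff(np.concatenate(([0], exceed.astype(int), [0])))
    n_events = int(np.sum(edges == 1))
    return n_events, int(exceed.sum()), volume

print(runs(q, q_low, below=True))    # drought: events, days, deficit volume
print(runs(q, q_high, below=False))  # flood: events, days, surplus volume
```

Using the same run-based machinery on both tails of the flow distribution is what makes the flood and drought change assessments directly comparable.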

  18. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low evaluation accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on compatible matrix analysis from the analytic hierarchy process (AHP) and the entropy value method: when the compatible matrix analysis achieves the consistency requirements, if there are differences between the subjective and objective weights, both proportions are moderately adjusted; on this basis, a fuzzy evaluation matrix is then constructed for performance evaluation. The simulation experiments show that, compared with the traditional entropy value and compatible matrix analysis methods, the proposed performance evaluation model of mining projects based on the improved entropy value method has higher assessment accuracy.
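    The entropy value method assigns objective criterion weights from the dispersion of the normalized decision matrix: criteria with lower entropy (more discriminating power) receive larger weights. A minimal sketch, with an illustrative 50/50 blend with subjective AHP-style weights standing in for the paper's adjustment rule (all numbers are toy data):

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = criteria (toy data).
X = np.array([[0.8, 0.6, 0.9],
              [0.5, 0.9, 0.4],
              [0.7, 0.7, 0.8]])

# Entropy value method: normalize per criterion, compute entropy, then weights.
P = X / X.sum(axis=0)
m = X.shape[0]
E = -np.sum(P * np.log(P), axis=0) / np.log(m)   # entropy in [0, 1]
d = 1.0 - E                                      # degree of divergence
w_entropy = d / d.sum()                          # objective weights

# Blend with subjective AHP-style weights; the 0.5/0.5 split is an
# illustrative choice, not the paper's adjustment scheme.
w_ahp = np.array([0.5, 0.3, 0.2])
alpha = 0.5
w = alpha * w_ahp + (1 - alpha) * w_entropy
print(w, w.sum())
```

The blended weights then feed the fuzzy evaluation matrix in the overall performance model.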

  19. Lunar gravity derived from long-period satellite motion, a proposed method

    NASA Technical Reports Server (NTRS)

    Ferrari, A. J.

    1971-01-01

    A method was devised to determine the spherical harmonic coefficients of the lunar gravity field. The method consists of a two-step data reduction and estimation process. Pseudo-Doppler data were generated simulating two different lunar orbits. The analysis included the perturbing effects of the L1 lunar gravity field, the earth, the sun, and solar radiation pressure. Orbit determinations were performed on these data and long-period orbital elements were obtained. The Kepler element rates from these solutions were used to recover L1 lunar gravity coefficients. Overall results of the experiment show that lunar gravity coefficients can be accurately determined and that the method is dynamically consistent with long-period perturbation theory.

  20. Anisotropy model for modern grain oriented electrical steel based on orientation distribution function

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Rossi, Mathieu; Parent, Guillaume

    2018-05-01

    Accurately modeling the anisotropic behavior of electrical steel is mandatory in order to perform good simulations. Several approaches for this purpose can be found in the literature, but most of them are not able to deal with grain oriented electrical steel. In this paper, a method based on the orientation distribution function is applied to modern grain oriented laminations. In particular, two solutions are proposed to increase the accuracy of the results. The first consists in increasing the decomposition order of the cosine series on which the method is based. The second consists in modifying the determination method for the terms belonging to this cosine series.

  1. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm, which consists of approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP) based on multirate GPI, use multi-step estimation (the M-step Bellman equation) at the approximate policy evaluation step to estimate the value function and its gradient, called the costate, respectively. We then show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. General convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and, finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
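    For discrete-time LQR, the PI-mode end of the GPI spectrum alternates a full policy evaluation (a Lyapunov equation for the current gain) with a greedy policy improvement. A minimal sketch on an illustrative double-integrator system with a hand-picked stabilizing initial gain; the multirate/M-step machinery of the paper is not reproduced here:

```python
import numpy as np

# Discrete-time LQR solved by policy iteration (a GPI special case with
# complete policy evaluation each cycle); system matrices are illustrative.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[10.0, 10.0]])           # hand-picked stabilizing initial gain
for _ in range(50):
    Ak = A - B @ K
    # Policy evaluation: fixed-point iteration on P = Q + K'RK + Ak' P Ak.
    P = np.zeros((2, 2))
    for _ in range(2000):
        P = Q + K.T @ R @ K + Ak.T @ P @ Ak
    # Policy improvement: K = (R + B'PB)^{-1} B'PA.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

rho = max(abs(np.linalg.eigvals(A - B @ K)))   # closed-loop spectral radius
print(rho)
```

At convergence P satisfies the discrete algebraic Riccati equation; running fewer evaluation sweeps per cycle moves the scheme toward the VI-mode end of the same family.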

  2. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.

  3. Synchronization in node of complex networks consist of complex chaotic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qiang, E-mail: qiangweibeihua@163.com; Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024

    2014-07-15

    A new synchronization method is investigated for the nodes of complex networks consisting of complex chaotic systems. When the complex networks realize synchronization, different components of the complex state variables synchronize up to different complex scaling functions via a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in the nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated. Numerical simulations are provided to show the effectiveness of the proposed method.

  4. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High-throughput, low-latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the harmful effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction processes described above are executed iteratively. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, its BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  5. Rotor Position Sensorless Control and Its Parameter Sensitivity of Permanent Magnet Motor Based on Model Reference Adaptive System

    NASA Astrophysics Data System (ADS)

    Ohara, Masaki; Noguchi, Toshihiko

    This paper describes a new method for a rotor position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). This method features the MRAS in a current control loop to estimate a rotor speed and position by using only current sensors. This method as well as almost all the conventional methods incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced voltage constant. Hence, the important thing is to investigate how the deviation of these parameters affects the estimated rotor position. First, this paper proposes a structure of the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from the nominal values, and derives the relationship between the estimated position and the deviation of the parameters in a steady state. Finally, some experimental results are presented to show performance and effectiveness of the proposed method.

  6. Simultaneous determination of eight water-soluble vitamins in supplemented foods by liquid chromatography.

    PubMed

    Zafra-Gómez, Alberto; Garballo, Antonio; Morales, Juan C; García-Ayuso, Luis E

    2006-06-28

    A fast, simple, and reliable method for the isolation and determination of the vitamins thiamin, riboflavin, niacin, pantothenic acid, pyridoxine, folic acid, cyanocobalamin, and ascorbic acid in food samples is proposed. The most relevant advantages of the proposed method are the simultaneous determination of the eight most common vitamins in enriched food products and a reduction of the time required for quantitative extraction, because the method consists merely of the addition of a precipitation solution and centrifugation of the sample. Furthermore, this method saves a substantial amount of reagents compared with official methods, and minimal sample manipulation is achieved due to the few steps required. The chromatographic separation is carried out on a reversed-phase C18 column, and the vitamins are detected at different wavelengths by either fluorescence or UV-visible detection. The proposed method was applied to the determination of water-soluble vitamins in supplemented milk, infant nutrition products, and milk powder certified reference material (CRM 421, BCR) with recoveries ranging from 90 to 100%.

  7. Brain tissues volume measurements from 2D MRI using parametric approach

    NASA Astrophysics Data System (ADS)

    L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.

    2018-04-01

    The purpose of this paper is to propose a fully automated method for volume assessment of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process for measurement consistency and unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model that utilizes knowledge of tissue distribution in the human brain and applies partial data restoration to improve precision. The resulting approach is computationally efficient and independent of the segmentation algorithm used in the application.
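    The maximum normalized residual test mentioned above (Grubbs' test) flags the sample farthest from the mean, in units of the sample standard deviation, as an outlier when the statistic exceeds a tabulated critical value. A minimal sketch on synthetic measurements (the critical value is approximated from a standard table, and the data are illustrative):

```python
import numpy as np

def grubbs_statistic(values):
    """Maximum normalized residual (Grubbs') statistic and its sample index."""
    v = np.asarray(values, dtype=float)
    resid = np.abs(v - v.mean())
    i = int(np.argmax(resid))
    return resid[i] / v.std(ddof=1), i

# Toy repeated volume measurements with one gross outlier (synthetic data).
x = [9.9, 10.1, 10.0, 9.8, 10.2, 10.0, 9.9, 10.1, 10.0, 30.0]
G, i = grubbs_statistic(x)
G_crit = 2.18   # approximate tabulated critical value for n = 10, alpha = 0.05
print(i, G > G_crit)
```

In practice the test is applied iteratively, removing one flagged observation at a time and recomputing the statistic.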

  8. MEMS piezoresistive cantilever for the direct measurement of cardiomyocyte contractile force

    NASA Astrophysics Data System (ADS)

    Matsudaira, Kenei; Nguyen, Thanh-Vinh; Hirayama Shoji, Kayoko; Tsukagoshi, Takuya; Takahata, Tomoyuki; Shimoyama, Isao

    2017-10-01

    This paper reports on a method to directly measure the contractile forces of cardiomyocytes using MEMS (micro electro mechanical systems)-based force sensors. The fabricated sensor chip consists of piezoresistive cantilevers that can measure contractile forces with high frequency (several tens of kHz) and high sensing resolution (less than 0.1 nN). Moreover, the proposed method does not require a complex observation system or image processing, which are necessary in conventional optical-based methods. This paper describes the design, fabrication, and evaluation of the proposed device and demonstrates the direct measurements of contractile forces of cardiomyocytes using the fabricated device.

  9. Dual-scale Galerkin methods for Darcy flow

    NASA Astrophysics Data System (ADS)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  10. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    PubMed Central

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background: In this paper, we propose a new method, named the intervals' method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods: The intervals' method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results: Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals' method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion: We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
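    The intervals' method described above can be sketched directly: bin each element's stress value into an interval and sum the element areas per bin, expressed as a percentage of total area. A toy example with synthetic stresses and areas (the paper works on FE meshes of armadillo mandibles):

```python
import numpy as np

# Per-element stress values and element areas from a finite element model
# (synthetic numbers for illustration).
stress = np.array([0.5, 1.2, 2.8, 0.9, 3.5, 1.7, 0.3, 2.2])
area = np.array([1.0, 2.0, 1.0, 1.5, 0.5, 1.0, 2.0, 1.0])

# Intervals method: each variable is the percentage of total area whose
# stress falls inside a given interval of stress values.
edges = np.array([0.0, 1.0, 2.0, 3.0, np.inf])
idx = np.digitize(stress, edges) - 1           # interval index per element
variables = np.array([area[idx == k].sum() for k in range(len(edges) - 1)])
variables = 100.0 * variables / area.sum()
print(variables)
```

The resulting per-interval percentages form one observation vector per specimen, ready for standard multivariate analyses such as PCA or discriminant analysis.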

  11. Maximum margin multiple instance clustering with applications to image and text clustering.

    PubMed

    Zhang, Dan; Wang, Fei; Si, Luo; Li, Tao

    2011-05-01

    In multiple instance learning problems, patterns are often given as bags, and each bag consists of some instances. Most existing research in the area focuses on multiple instance classification and multiple instance regression, while very limited work has been conducted on multiple instance clustering (MIC). This paper formulates a novel framework, maximum margin multiple instance clustering (M(3)IC), for MIC. However, it is impractical to directly solve the optimization problem of M(3)IC. Therefore, M(3)IC is relaxed in this paper to enable an efficient optimization solution with a combination of the constrained concave-convex procedure and the cutting plane method. Furthermore, this paper presents some important properties of the proposed method and discusses the relationship between the proposed method and some other related ones. An extensive set of empirical results is presented to demonstrate the advantages of the proposed method over existing research in terms of both effectiveness and efficiency.

  12. A Spacecraft Electrical Characteristics Multi-Label Classification Method Based on Off-Line FCM Clustering and On-Line WPSVM

    PubMed Central

    Li, Ke; Liu, Yi; Wang, Quanxin; Wu, Yalei; Song, Shimin; Sun, Yi; Liu, Tengchong; Wang, Jun; Li, Yang; Du, Shaoyi

    2015-01-01

    This paper proposes a novel multi-label classification method for resolving spacecraft electrical characteristics problems, which involve large amounts of unlabeled test data, high-dimensional features, long computing times and slow identification rates. Firstly, both the fuzzy c-means (FCM) offline clustering and the principal component feature extraction algorithms are applied for the feature selection process. Secondly, the approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a threshold-based data capture contribution method is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification and effectively shorten the computing time. PMID:26544549
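
    A minimal sketch of the offline FCM clustering step, on 1-D toy data (the paper clusters high-dimensional spacecraft electrical test data; the data and simple spread initialization here are illustrative):

```python
def fcm(points, c=2, m=2.0, iters=60):
    """Minimal fuzzy c-means: alternate membership and center updates.

    m > 1 is the fuzzifier; returns cluster centers and the fuzzy
    membership matrix U (rows sum to 1)."""
    centers = sorted(points)[:: max(1, len(points) // c)][:c]  # spread init
    U = []
    for _ in range(iters):
        # membership of each point in each cluster (inverse-distance rule)
        U = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centers]
            U.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for k in range(c)])
        # centers: membership-weighted means
        centers = [sum(U[i][k] ** m * x for i, x in enumerate(points)) /
                   sum(U[i][k] ** m for i in range(len(points)))
                   for k in range(c)]
    return centers, U

centers, U = fcm([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```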

  13. Improved segmentation of abnormal cervical nuclei using a graph-search based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Liu, Shaoxiong; Wang, Tianfu; Chen, Siping; Sonka, Milan

    2015-03-01

    Reliable segmentation of abnormal nuclei in cervical cytology is of paramount importance in automation-assisted screening techniques. This paper presents a general method for improving the segmentation of abnormal nuclei using a graph-search based approach. More specifically, the proposed method focuses on the refinement of a coarse (initial) segmentation. The refinement relies on a transform that maps round-like borders in the Cartesian coordinate system into lines in the polar coordinate system. Costs consisting of nucleus-specific edge and region information are assigned to the nodes. The globally optimal path in the constructed graph is then identified by dynamic programming. We have tested the proposed method on abnormal nuclei from two cervical cell image datasets, Herlev and H&E-stained liquid-based cytology (HELBC), and comparative experiments with recent state-of-the-art approaches demonstrate the superior performance of the proposed method.
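
    The dynamic-programming search over the polar cost image can be sketched as follows. Here cost[theta][r] is the node cost at angle index theta and radius r (e.g. combined edge and region terms), and the border may move at most one radius step between adjacent angles; this is an illustrative simplification of the paper's graph construction:

```python
def optimal_border(cost):
    """Globally optimal radius-per-angle path minimizing total node cost,
    with a smoothness constraint of +/-1 radius step between angles."""
    n_theta, n_r = len(cost), len(cost[0])
    dp = [list(cost[0])]                     # accumulated cost table
    for t in range(1, n_theta):
        row = []
        for r in range(n_r):
            best = min(dp[-1][rr] for rr in range(max(0, r - 1),
                                                  min(n_r, r + 2)))
            row.append(cost[t][r] + best)
        dp.append(row)
    # backtrack from the cheapest final node
    path = [min(range(n_r), key=lambda r: dp[-1][r])]
    for t in range(n_theta - 2, -1, -1):
        r = path[-1]
        cand = range(max(0, r - 1), min(n_r, r + 2))
        path.append(min(cand, key=lambda rr: dp[t][rr]))
    return path[::-1]
```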

  14. A Decision Support System for Evaluating and Selecting Information Systems Projects

    NASA Astrophysics Data System (ADS)

    Deng, Hepu; Wibowo, Santoso

    2009-01-01

    This chapter presents a decision support system (DSS) for effectively solving the information systems (IS) project selection problem. The proposed DSS recognizes the multidimensional nature of the IS project selection problem, the availability of multicriteria analysis (MA) methods, and the preferences of the decision-maker (DM) for specific MA methods in a given situation. A knowledge base consisting of IF-THEN production rules is developed to assist the DM in systematically adopting the most appropriate method, making efficient use of the powerful reasoning and explanation capabilities of an intelligent DSS. The idea of letting the problem to be solved determine the method to be used is incorporated into the proposed DSS. As a result, effective decisions can be made for solving the IS project selection problem. An example is presented to demonstrate the applicability of the proposed DSS for selecting IS projects in real-world situations.
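
    A rule base of this kind can be sketched as an ordered list of IF-THEN production rules; the rule conditions and method names below are hypothetical, not the chapter's actual knowledge base:

```python
# Each rule: (condition on problem features, recommended MA method).
# Rules fire in order; the last rule is an unconditional default.
RULES = [
    (lambda p: p["n_criteria"] > 10 and p["pairwise_ok"], "AHP"),
    (lambda p: p["uncertain_weights"], "fuzzy weighted sum"),
    (lambda p: True, "simple additive weighting"),
]

def recommend_method(problem):
    """Let the problem to be solved determine the method to be used."""
    for condition, method in RULES:
        if condition(problem):
            return method
```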

  15. Equivalent radiation source of 3D package for electromagnetic characteristics analysis

    NASA Astrophysics Data System (ADS)

    Li, Jun; Wei, Xingchang; Shu, Yufei

    2017-10-01

    An equivalent radiation source method is proposed in this paper to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs). The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and the differential evolution optimization algorithm is employed to extract the locations, orientations and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with the measured data. Good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Nature Science Foundation of China (No. 61274110).
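
    The extraction step can be sketched as differential evolution fitting source parameters to amplitude-only scan data. The field model below is a toy single-source stand-in for the paper's magnetic-dipole array, and the DE loop is a minimal rand/1/bin implementation; everything here is illustrative:

```python
import random

def field(params, pts):
    """Toy amplitude-only near-field model: one equivalent source at
    (x0, y0) with moment m, scanned on a plane at unit height."""
    x0, y0, m = params
    return [m / ((x - x0) ** 2 + (y - y0) ** 2 + 1.0) for x, y in pts]

def differential_evolution(obj, bounds, pop=20, gens=150, F=0.7, CR=0.9,
                           seed=0):
    """Minimal DE: mutate with rand/1, binomial crossover, greedy select."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fit = [obj(p) for p in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            trial = [min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]),
                         bounds[d][1])
                     if rng.random() < CR else P[i][d] for d in range(dim)]
            f = obj(trial)
            if f < fit[i]:
                P[i], fit[i] = trial, f
    return P[min(range(pop), key=fit.__getitem__)]

pts = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]  # scan grid
target = field((0.5, -0.3, 2.0), pts)                       # "measured" data
obj = lambda p: sum((a - b) ** 2 for a, b in zip(field(p, pts), target))
best = differential_evolution(obj, [(-2, 2), (-2, 2), (0.1, 5)])
```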

  16. Interference data correction methods for lunar observation with a large-aperture static imaging spectrometer.

    PubMed

    Zhang, Geng; Wang, Shuang; Li, Libo; Hu, Xiuqing; Hu, Bingliang

    2016-11-01

    The lunar spectrum has been used in radiometric calibration and sensor stability monitoring for spaceborne optical sensors. A ground-based large-aperture static imaging spectrometer (LASIS) can be used to acquire lunar spectral images for lunar radiance model improvement when the moon passes through its viewing field. However, the lunar orbiting behavior is not consistent with the desired scanning speed and direction of LASIS. To correctly extract interferograms from the obtained data, a translation correction method based on image correlation is proposed. This method registers the frames to a reference frame to reduce accumulative errors. Furthermore, we propose a circle-matching-based approach to achieve even higher accuracy during observation of the full moon. To demonstrate the effectiveness of our approaches, experiments are run on real lunar observation data. The results show that the proposed approaches outperform the state-of-the-art methods.
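
    The core of registering each frame to a reference frame is a correlation peak search. A 1-D brute-force sketch (the actual method registers 2-D image frames; signals here are illustrative):

```python
def shift_between(ref, frame):
    """Integer translation of `frame` relative to `ref` that maximizes
    their cross-correlation; a positive result means `frame` is `ref`
    shifted right by that many samples."""
    n = len(ref)
    best_shift, best_score = 0, float("-inf")
    for s in range(-(n - 1), n):
        score = sum(ref[i] * frame[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

ref = [0, 0, 1, 5, 1, 0, 0, 0]
frame = [0, 0, 0, 0, 1, 5, 1, 0]   # ref translated right by 2 samples
```

In practice this search is done with FFT-based (phase) correlation for speed, but the brute-force form makes the estimator explicit.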

  17. Study on the Algorithm of Judgment Matrix in Analytic Hierarchy Process

    NASA Astrophysics Data System (ADS)

    Lu, Zhiyong; Qin, Futong; Jin, Yican

    2017-10-01

    A new algorithm is proposed for the non-consistent judgment matrix in AHP. A primary judgment matrix is first generated by pre-ordering the targeted factor set, and a comparison matrix is built using the top integral function. A relative error matrix is then created by comparing the comparison matrix with the primary judgment matrix, which is regulated step by step under the control of the relative error matrix and the dissimilarity degree of the matrix. Lastly, the targeted judgment matrix is generated to satisfy the consistency requirement with the least dissimilarity degree. The feasibility and validity of the proposed method are verified by simulation results.
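
    The consistency requirement the regulated matrix must satisfy is usually measured by Saaty's consistency ratio (CR < 0.1). A sketch of the check (power iteration for the principal eigenvalue; the regulation procedure itself is the paper's contribution and is not reproduced here):

```python
def consistency_ratio(A, iters=100):
    """Saaty consistency ratio of a pairwise judgment matrix A."""
    n = len(A)
    if n < 3:
        return 0.0                             # 1x1 and 2x2 are consistent
    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # random index
    w = [1.0] * n
    for _ in range(iters):                     # power iteration
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n           # principal eigenvalue
    ci = (lam - n) / (n - 1)                   # consistency index
    return ci / RI
```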

  18. A Study on Human Oriented Autonomous Distributed Manufacturing System —Real-time Scheduling Method Based on Preference of Human Operators

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures of manufacturing systems. Much research has been carried out on distributed architectures for the planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. A real-time scheduling method is proposed in this research to select suitable combinations of human operators, resources and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Some case studies have been carried out to verify the effectiveness of the proposed method.

  19. Multiagent scheduling method with earliness and tardiness objectives in flexible job shops.

    PubMed

    Wu, Zuobao; Weng, Michael X

    2005-04-01

    Flexible job-shop scheduling problems are an important extension of the classical job-shop scheduling problems and present additional complexity. This complexity arises mainly from the considerable overlapping capacities of modern machines, which classical scheduling methods are generally incapable of addressing. We propose a multiagent scheduling method with job earliness and tardiness objectives in a flexible job-shop environment. The earliness and tardiness objectives are consistent with the just-in-time production philosophy, which has attracted significant attention in both industry and the academic community. A new job-routing and sequencing mechanism is proposed. In this mechanism, two kinds of jobs are defined to distinguish jobs with one operation left from jobs with more than one operation left. Different criteria are proposed to route these two kinds of jobs. Job sequencing makes it possible to hold a job that would otherwise be completed too early. Two heuristic algorithms for job sequencing are developed to deal with these two kinds of jobs. The computational experiments show that the proposed multiagent scheduling method significantly outperforms the existing scheduling methods in the literature. In addition, the proposed method is quite fast: the simulation time to find a complete schedule with over 2000 jobs on ten machines is less than 1.5 min.
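
    The earliness/tardiness objective itself is simple to state: each job pays a per-unit penalty for finishing before or after its due date. A sketch of the cost being minimized (penalty weights and job data are illustrative):

```python
def et_cost(jobs):
    """Total weighted earliness/tardiness of a schedule.

    Each job is (completion, due, alpha, beta): alpha penalizes each unit
    of earliness, beta each unit of tardiness -- consistent with the
    just-in-time philosophy, where both early and late completion cost."""
    return sum(alpha * max(0, due - done) + beta * max(0, done - due)
               for done, due, alpha, beta in jobs)
```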

  20. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    PubMed

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected to the two sides of the network, respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the final detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which differs from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.

  1. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    PubMed

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

    Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both the MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.
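
    The propensity-score idea can be sketched as a logistic regression of subpopulation membership on combined genetic and non-genetic covariates, with the fitted probabilities then used for stratification or adjustment. The plain gradient-ascent fit and toy data below are illustrative, not the paper's estimation code:

```python
import math

def propensity_scores(X, z, lr=0.1, iters=2000):
    """Fit P(subpopulation = 1 | covariates) by logistic regression.

    X rows mix genetic covariates (e.g. allele counts) with non-genetic
    ones; z is subpopulation membership.  Returns one score per row."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)                      # intercept + coefficients
    for _ in range(iters):
        grad = [0.0] * (p + 1)
        for xi, zi in zip(X, z):
            eta = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            mu = 1.0 / (1.0 + math.exp(-eta))
            grad[0] += zi - mu
            for j, xj in enumerate(xi):
                grad[j + 1] += (zi - mu) * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return [1.0 / (1.0 + math.exp(-(w[0] +
                   sum(wj * xj for wj, xj in zip(w[1:], xi)))))
            for xi in X]

# toy data: one genetic and one non-genetic covariate per subject
X = [[0, 0.2], [0, 0.1], [1, 0.9], [1, 1.0]]
z = [0, 0, 1, 1]
ps = propensity_scores(X, z)
```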

  2. A highly accurate symmetric optical flow based high-dimensional nonlinear spatial normalization of brain images.

    PubMed

    Wen, Ying; Hou, Lili; He, Lianghua; Peterson, Bradley S; Xu, Dongrong

    2015-05-01

    Spatial normalization plays a key role in voxel-based analyses of brain images. We propose a highly accurate algorithm for high-dimensional spatial normalization of brain images based on the technique of symmetric optical flow. We first construct a three-dimensional optical flow model with assumptions of intensity consistency and intensity-gradient consistency, under a constraint of discontinuity-preserving spatio-temporal smoothness. Then, an efficient inverse-consistent optical flow is proposed to achieve higher registration accuracy, where the flow is naturally symmetric. By employing a hierarchical strategy ranging from coarse to fine scales of resolution and a method of Euler-Lagrange numerical analysis, our algorithm is capable of registering brain image data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is not only better than that of traditional optical flow algorithms, but also comparable to that of other registration methods used extensively in the medical imaging community. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Automated retina identification based on multiscale elastic registration.

    PubMed

    Figueiredo, Isabel N; Moura, Susana; Neves, Júlio S; Pinto, Luís; Kumar, Sunil; Oliveira, Carlos M; Ramos, João D

    2016-12-01

    In this work we propose a novel method for identifying individuals based on retinal fundus image matching. The method is based on the image registration of retina blood vessels, since it is known that the retina vasculature of an individual is a signature, i.e., a distinctive pattern of the individual. The proposed image registration consists of a multiscale affine registration followed by a multiscale elastic registration. The major advantage of this particular two-step image registration procedure is that it is able to account for both rigid and non-rigid deformations either inherent to the retina tissues or as a result of the imaging process itself. Afterwards a decision identification measure, relying on a suitable normalized function, is defined to decide whether or not the pair of images belongs to the same individual. The method is tested on a data set of 21721 real pairs generated from a total of 946 retinal fundus images of 339 different individuals, consisting of patients followed in the context of different retinal diseases and also healthy patients. The evaluation of its performance reveals that it achieves a very low false rejection rate (FRR) at zero FAR (the false acceptance rate), equal to 0.084, as well as a low equal error rate (EER), equal to 0.053. Moreover, the tests performed by using only the multiscale affine registration, and discarding the multiscale elastic registration, clearly show the advantage of the proposed approach. The outcome of this study also indicates that the proposed method is reliable and competitive with other existing retinal identification methods, and forecasts its future appropriateness and applicability in real-life applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. ALGORITHM FOR TREATMENT OF PATIENTS WITH MESIAL OCCLUSION USING PROPRIETARY ORTHODONTIC DEVICE.

    PubMed

    Flis, P; Filonenko, V; Doroshenko, N

    2017-10-01

    Early elimination of orthodontic disorders of the dentoalveolar apparatus is the dominant concept in treatment. The aim is to present a treatment algorithm for sagittal anomalies, Class III in particular, in the transitional bite period using the proposed design of an individual orthodontic device with a movable ramp. The treatment algorithm consisted of several blocks: motivation, establishment of etiological factors, creation of the plan and treatment tactics based on careful diagnosis, the stages of the active treatment period, and patient management in the retention period. Anthropometric measurements of maxilla and mandible models were performed to determine the degree of dental arch development. The length of the dental arches was determined on the models by the Nance method in combination with the Huckaba method, and the sagittal dimensions by Mirgasizov's method. The leading role in the patients' examination was taken by lateral cephalogram analysis using the Sassouni Plus method. The proposed construction of the orthodontic appliance consists of a plastic base, a vestibular arc, retaining clasps and a ramp, which is connected to the base with two torsion springs. To demonstrate the effectiveness of the proposed construction, an example is presented of the treatment of patient Y., aged 6 years 9 months. After the treatment, positive morphological, functional and aesthetic changes were established. The usage of the proposed orthodontic appliance with a movable ramp allows orthodontic treatment to start at an early age, increases its effectiveness and reduces the number of complications. The expediency of stage-by-stage treatment is confirmed by the positive results of this method. To achieve stable results, it is important to individualize their prognosis even at the planning stage of orthodontic treatment.

  5. Feature Screening for Ultrahigh Dimensional Categorical Data with Applications.

    PubMed

    Huang, Danyang; Li, Runze; Wang, Hansheng

    2014-01-01

    Ultrahigh dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for categorical responses with ultrahigh dimensional categorical covariates. The proposed procedure can be directly applied for the detection of important interaction effects. We further show that the proposed procedure possesses the screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies, and illustrate the proposed method with two empirical datasets.
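
    The screening idea can be sketched as ranking covariates by the Pearson chi-square statistic between each covariate and the response, then keeping the top k. A minimal version (toy data; a real implementation would vectorize this over millions of covariates):

```python
def chi2_stat(x, y):
    """Pearson chi-square statistic between categorical covariate x and
    categorical response y, computed from the observed/expected counts
    of their contingency table."""
    n = len(x)
    stat = 0.0
    for a in sorted(set(x)):
        for b in sorted(set(y)):
            obs = sum(1 for xi, yi in zip(x, y) if xi == a and yi == b)
            exp = (x.count(a) * y.count(b)) / n
            stat += (obs - exp) ** 2 / exp
    return stat

def screen(features, y, k):
    """Indices of the k covariates with the largest chi-square statistics."""
    ranked = sorted(range(len(features)),
                    key=lambda j: -chi2_stat(features[j], y))
    return ranked[:k]
```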

  6. Prediction of global ionospheric VTEC maps using an adaptive autoregressive model

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Xin, Shaoming; Liu, Xiaolu; Shi, Chuang; Fan, Lei

    2018-02-01

    In this contribution, an adaptive autoregressive model is proposed and developed to predict global ionospheric vertical total electron content (VTEC) maps. Specifically, the spherical harmonic (SH) coefficients are predicted based on the autoregressive model, and the order of the autoregressive model is determined adaptively using the F-test method. To test our method, final CODE and IGS global ionospheric map (GIM) products, as well as altimeter TEC data collected by JASON during low and mid-to-high solar activity periods, are used to evaluate the precision of our forecasting products. Results indicate that the predicted products derived from the proposed model have good consistency with the final GIMs in low solar activity, where the annual mean of the root-mean-square value is approximately 1.5 TECU. However, the predicted vertical TEC in periods of mid-to-high solar activity is less accurate than during low solar activity periods, especially in the equatorial ionization anomaly region and the Southern Hemisphere. Additionally, in comparison with the forecasting products, the final IGS GIMs have the best consistency with the altimeter TEC data. Future work is needed to investigate the performance of forecasting products using the proposed method in an operational environment, rather than using the SH coefficients from the final CODE products, to understand the real-time applicability of the method.
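
    The two ingredients, a least-squares AR(p) fit of a coefficient time series and an F statistic comparing AR(p) against AR(p+1) for adaptive order selection, can be sketched as follows (toy series; the paper applies this per SH coefficient):

```python
import random

def ar_fit(x, p):
    """Least-squares AR(p) fit with intercept via the normal equations
    (Gaussian elimination); returns coefficients and residual SSE."""
    rows = [x[t - p:t][::-1] + [1.0] for t in range(p, len(x))]
    y = x[p:]
    m = p + 1
    A = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(m)]
    for i in range(m):                        # elimination, partial pivoting
        piv = max(range(i, m), key=lambda k: abs(A[k][i]))
        A[i], A[piv], b[i], b[piv] = A[piv], A[i], b[piv], b[i]
        for k in range(i + 1, m):
            f = A[k][i] / A[i][i]
            A[k] = [ak - f * ai for ak, ai in zip(A[k], A[i])]
            b[k] -= f * b[i]
    w = [0.0] * m
    for i in range(m - 1, -1, -1):            # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, m))) / A[i][i]
    sse = sum((yt - sum(wi * ri for wi, ri in zip(w, r))) ** 2
              for r, yt in zip(rows, y))
    return w, sse

def f_stat(x, p):
    """F statistic: does extending AR(p) to AR(p+1) significantly reduce
    the residual sum of squares?  (Compared to an F critical value to
    pick the order adaptively.)"""
    _, sse_p = ar_fit(x[1:], p)   # drop one sample so both fits share targets
    _, sse_q = ar_fit(x, p + 1)
    df = len(x) - (p + 1) - (p + 2)
    return (sse_p - sse_q) / (sse_q / df)

rng = random.Random(0)
series = [0.0]
for _ in range(60):                           # noisy AR(1) toy series
    series.append(0.6 * series[-1] + rng.gauss(0.0, 1.0))
F = f_stat(series, 1)
```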

  7. Incorporating spatial constraint in co-activation pattern analysis to explore the dynamics of resting-state networks: An application to Parkinson's disease.

    PubMed

    Zhuang, Xiaowei; Walsh, Ryan R; Sreenivasan, Karthik; Yang, Zhengshi; Mishra, Virendra; Cordes, Dietmar

    2018-05-15

    The dynamics of the brain's intrinsic networks have been recently studied using co-activation pattern (CAP) analysis. The CAP method relies on few model assumptions and CAP-based measurements provide quantitative information of network temporal dynamics. One limitation of existing CAP-related methods is that the computed CAPs share considerable spatial overlap that may or may not be functionally distinct relative to specific network dynamics. To more accurately describe network dynamics with spatially distinct CAPs, and to compare network dynamics between different populations, a novel data-driven CAP group analysis method is proposed in this study. In the proposed method, a dominant-CAP (d-CAP) set is synthesized across CAPs from multiple clustering runs for each group with the constraint of low spatial similarities among d-CAPs. Alternating d-CAPs with less overlapping spatial patterns can better capture overall network dynamics. The number of d-CAPs, the temporal fraction and spatial consistency of each d-CAP, and the subject-specific switching probability among all d-CAPs are then calculated for each group and used to compare network dynamics between groups. The spatial dissimilarities among d-CAPs computed with the proposed method were first demonstrated using simulated data. High consistency between simulated ground-truth and computed d-CAPs was achieved, and detailed comparisons between the proposed method and existing CAP-based methods were conducted using simulated data. In an effort to physiologically validate the proposed technique and investigate network dynamics in a relevant brain network disorder, the proposed method was then applied to data from the Parkinson's Progression Markers Initiative (PPMI) database to compare the network dynamics in Parkinson's disease (PD) and normal control (NC) groups. 
Fewer d-CAPs, skewed distribution of temporal fractions of d-CAPs, and reduced switching probabilities among final d-CAPs were found in most networks in the PD group, as compared to the NC group. Furthermore, an overall negative association between switching probability among d-CAPs and disease severity was observed in most networks in the PD group as well. These results expand upon previous findings from in vivo electrophysiological recording studies in PD. Importantly, this novel analysis also demonstrates that changes in network dynamics can be measured using resting-state fMRI data from subjects with early stage PD. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Condition monitoring and fault diagnosis of motor bearings using undersampled vibration signals from a wireless sensor network

    NASA Astrophysics Data System (ADS)

    Lu, Siliang; Zhou, Peng; Wang, Xiaoxian; Liu, Yongbin; Liu, Fang; Zhao, Jiwen

    2018-02-01

    Wireless sensor networks (WSNs), which consist of miscellaneous sensors, are frequently used to monitor vital equipment. Benefiting from the development of data mining technologies, the massive data generated by sensors facilitate condition monitoring and fault diagnosis. However, excessive data increase storage space, energy consumption, and computing resources, which can be considered fatal weaknesses for a WSN with limited resources. This study investigates a new method for motor bearing condition monitoring and fault diagnosis using undersampled vibration signals acquired from a WSN. The proposed method, which is a fusion of the kurtogram, analog-domain bandpass filtering, bandpass sampling, and the demodulated resonance technique, can reduce the sampled data length while retaining the monitoring and diagnosis performance. A WSN prototype was designed, and simulations and experiments were conducted to evaluate the effectiveness and efficiency of the proposed method. Experimental results indicated that the sampled data length and transmission time of the proposed method are reduced by over 80% in comparison with those of the traditional method. Therefore, the proposed method shows potential for condition monitoring and fault diagnosis of motor bearings installed in remote areas, such as wind farms and offshore platforms.
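
    The data reduction comes from the bandpass sampling step: a signal confined to the band [f_lo, f_hi] can be uniformly sampled far below the Nyquist rate of f_hi without aliasing, at rates satisfying 2*f_hi/n <= fs <= 2*f_lo/(n-1). A sketch of the admissible rates (band edges are illustrative):

```python
def bandpass_sample_rates(f_lo, f_hi):
    """Valid uniform sampling-rate ranges (n, fs_min, fs_max) for
    alias-free bandpass sampling of a signal confined to [f_lo, f_hi];
    n ranges up to floor(f_hi / bandwidth).  n = 1 is ordinary
    Nyquist-rate sampling."""
    B = f_hi - f_lo
    ranges = []
    for n in range(1, int(f_hi // B) + 1):
        lo = 2.0 * f_hi / n
        hi = 2.0 * f_lo / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# resonance band 20-25 kHz: can be sampled as slowly as 10 kHz
rates = bandpass_sample_rates(20000.0, 25000.0)
```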

  9. A sensor network based virtual beam-like structure method for fault diagnosis and monitoring of complex structures with Improved Bacterial Optimization

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-02-01

    This paper proposes a novel method for the fault diagnosis of complex structures based on an optimized virtual beam-like structure approach. A complex structure can be regarded as a combination of numerous virtual beam-like structures, considering the vibration transmission path from the vibration sources to each sensor. Each virtual beam consists of a sensor chain automatically obtained by an Improved Bacterial Optimization Algorithm (IBOA). This biologically inspired optimization method is proposed for solving the discrete optimization problem associated with the selection of the optimal virtual beam for fault diagnosis. The virtual beam-like structure approach requires little prior knowledge: it needs no stationary response data, is not confined to a specific structural design, and is easy to implement within a sensor network attached to the monitored structure. The proposed fault diagnosis method has been tested on the detection of loosening screws located at varying positions in a real satellite-like model. Compared with empirical methods, the proposed virtual beam-like structure method has proved to be very effective and more reliable for fault localization.

  10. Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner.

    PubMed

    An, Jhonghyun; Choi, Baehoon; Sim, Kwee-Bo; Kim, Euntai

    2016-07-20

    There are several types of intersections such as merge-roads, diverge-roads, plus-shape intersections and two types of T-shape junctions in urban roads. When an autonomous vehicle encounters new intersections, it is crucial to recognize the types of intersections for safe navigation. In this paper, a novel intersection type recognition method is proposed for an autonomous vehicle using a multi-layer laser scanner. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation.

  11. Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner

    PubMed Central

    An, Jhonghyun; Choi, Baehoon; Sim, Kwee-Bo; Kim, Euntai

    2016-01-01

    There are several types of intersections such as merge-roads, diverge-roads, plus-shape intersections and two types of T-shape junctions in urban roads. When an autonomous vehicle encounters new intersections, it is crucial to recognize the types of intersections for safe navigation. In this paper, a novel intersection type recognition method is proposed for an autonomous vehicle using a multi-layer laser scanner. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation. PMID:27447640
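
    The SLOGM building step in both records above is a standard binary Bayes filter in log-odds form: each laser return adds the log-odds of the inverse sensor model to the cell. A minimal sketch (the hit probability and measurement sequence are illustrative):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def update_cell(l, hit, p_hit=0.7):
    """One binary Bayes filter update of a grid cell's log-odds l given
    a laser measurement: add logit(p_hit) for an 'occupied' return,
    logit(1 - p_hit) for a 'free' one."""
    return l + logit(p_hit if hit else 1.0 - p_hit)

def occupancy(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                        # prior log-odds: P(occupied) = 0.5
for _ in range(3):
    l = update_cell(l, True)   # three consistent 'occupied' returns
```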

  12. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  13. Single shot multi-wavelength phase retrieval with coherent modulation imaging.

    PubMed

    Dong, Xue; Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2018-04-15

    A single-shot multi-wavelength phase retrieval method is proposed by combining common coherent modulation imaging (CMI) with a low-rank mixed-state algorithm. A radiation beam consisting of multiple wavelengths illuminates the sample to be observed, and the exiting field is incident on a random phase plate to form speckle patterns, which are the incoherent superposition of the diffraction patterns of each wavelength. The exiting complex amplitude of the sample, including both the modulus and phase at each wavelength, can be reconstructed simultaneously from the recorded diffraction intensity using the low-rank mixed-state algorithm. The feasibility of the proposed method was verified experimentally with visible light. The proposed method not only makes CMI realizable with partially coherent illumination but also extends its application to various traditionally unrelated fields where several wavelengths must be considered simultaneously.

  14. Modeling and Detecting Feature Interactions among Integrated Services of Home Network Systems

    NASA Astrophysics Data System (ADS)

    Igaki, Hiroshi; Nakamura, Masahide

    This paper presents a framework for formalizing and detecting feature interactions (FIs) in the emerging smart home domain. We first establish a model of home network system (HNS), where every networked appliance (or the HNS environment) is characterized as an object consisting of properties and methods. Then, every HNS service is defined as a sequence of method invocations of the appliances. Within the model, we next formalize two kinds of FIs: (a) appliance interactions and (b) environment interactions. An appliance interaction occurs when two method invocations conflict on the same appliance, whereas an environment interaction arises when two method invocations conflict indirectly via the environment. Finally, we propose offline and online methods that detect FIs before service deployment and during execution, respectively. Through a case study with seven practical services, it is shown that the proposed framework is generic enough to capture feature interactions in HNS integrated services. We also discuss several FI resolution schemes within the proposed framework.
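
    The appliance and environment interaction definitions above can be sketched as follows; the appliance names, methods and the shared environment property are hypothetical examples, not taken from the paper:

```python
# Minimal sketch of the feature-interaction (FI) model described above:
# appliances are objects with methods whose effects write environment
# properties, and services are sequences of method invocations.
class Appliance:
    def __init__(self, name, effects):
        self.name = name
        self.effects = effects  # method -> (env_property, value)

def detect_interactions(invocations):
    """Return (appliance_conflicts, environment_conflicts) among
    pairs of method invocations given as (appliance, method)."""
    appliance_conflicts, env_conflicts = [], []
    for i, (a1, m1) in enumerate(invocations):
        for a2, m2 in invocations[i + 1:]:
            if a1 is a2 and m1 != m2:
                # Two different methods invoked on the same appliance.
                appliance_conflicts.append((a1.name, m1, m2))
            elif a1 is not a2:
                p1, v1 = a1.effects[m1]
                p2, v2 = a2.effects[m2]
                if p1 == p2 and v1 != v2:
                    # Conflicting writes to the same environment property.
                    env_conflicts.append((a1.name, a2.name, p1))
    return appliance_conflicts, env_conflicts

aircon = Appliance("aircon", {"cool": ("temperature", "low"),
                              "off": ("temperature", "ambient")})
heater = Appliance("heater", {"heat": ("temperature", "high")})
print(detect_interactions([(aircon, "cool"), (heater, "heat")]))
```

    Two invocations on the same appliance with different methods are flagged as an appliance interaction, while invocations on different appliances that write conflicting values to the same environment property are flagged as an environment interaction, mirroring the two FI kinds formalized in the paper.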

  15. Free-form surface design method for a collimator TIR lens.

    PubMed

    Tsai, Chung-Yu

    2016-04-01

    A free-form (FF) surface design method is proposed for a general axial-symmetrical collimator system consisting of a light source and a total internal reflection lens with two coupled FF boundary surfaces. The profiles of the boundary surfaces are designed using a FF surface construction method such that each incident ray is directed (refracted and reflected) in such a way as to form a specified image pattern on the target plane. The light ray paths within the system are analyzed using an exact analytical model and a skew-ray tracing approach. In addition, the validity of the proposed FF design method is demonstrated by means of ZEMAX simulations. It is shown that the illumination distribution formed on the target plane is in good agreement with that specified by the user. The proposed surface construction method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis of general axial-symmetrical optical systems.

  16. Infrared target tracking via weighted correlation filter

    NASA Astrophysics Data System (ADS)

    He, Yu-Jie; Li, Min; Zhang, JinLi; Yao, Jun-Ping

    2015-11-01

    Design of an effective target tracker is an important and challenging task for many applications due to multiple factors which can cause disturbance in infrared video sequences. In this paper, an infrared target tracking method under tracking by detection framework based on a weighted correlation filter is presented. This method consists of two parts: detection and filtering. For the detection stage, we propose a sequential detection method for the infrared target based on low-rank representation. For the filtering stage, a new multi-feature weighted function which fuses different target features is proposed, which takes the importance of the different regions into consideration. The weighted function is then incorporated into a correlation filter to compute a confidence map more accurately, in order to indicate the best target location based on the detection results obtained from the first stage. Extensive experimental results on different video sequences demonstrate that the proposed method performs favorably for detection and tracking compared with baseline methods in terms of efficiency and accuracy.
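
    The core of the filtering stage, correlating a weighted template against a search patch to obtain a confidence map, can be illustrated with a minimal frequency-domain sketch; the uniform weights, sizes and synthetic target shift are illustrative and do not reproduce the paper's multi-feature weighted function:

```python
import numpy as np

# Per-pixel weights w emphasize an (assumed) target region before the
# template and search patch are correlated in the frequency domain; the
# peak of the response map gives the most likely target location.
def weighted_response(template, patch, w):
    T = np.fft.fft2(w * template)
    P = np.fft.fft2(w * patch)
    resp = np.real(np.fft.ifft2(np.conj(T) * P))  # circular cross-correlation
    return np.unravel_index(np.argmax(resp), resp.shape)

rng = np.random.default_rng(2)
template = rng.normal(size=(32, 32))
patch = np.roll(template, (5, 3), axis=(0, 1))    # target shifted by (5, 3)
w = np.ones((32, 32))                             # trivial weights here
print(weighted_response(template, patch, w))
```

    With non-uniform weights, regions deemed important (e.g. the detected target area from the first stage) contribute more to the correlation peak, which is the intuition behind the weighting in the paper.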

  17. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem. It is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm that integrates gene selection and cancer classification into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm when implemented with Gaussian-kernel SVMs: rather than the common practice of choosing the apparently best parameters directly, a genetic algorithm is used to search for an optimal parameter pair. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
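
    A genetic search over the Gaussian-kernel parameter pair (C, gamma) can be sketched as below; the fitness function is a smooth surrogate standing in for cross-validated SVM accuracy, and all population sizes, ranges and operators are illustrative assumptions:

```python
import random

random.seed(0)

# Toy genetic algorithm searching the (C, gamma) plane for a
# Gaussian-kernel SVM. In practice the fitness would train an SVM and
# return its cross-validated accuracy; here a smooth surrogate peaking
# near C = 10, gamma = 0.1 stands in for it.
def fitness(C, gamma):
    return -((C - 10.0) ** 2 / 100.0 + (gamma - 0.1) ** 2 * 100.0)

def evolve(pop_size=20, generations=30):
    pop = [(random.uniform(0.1, 100.0), random.uniform(1e-3, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (c1, g1), (c2, g2) = random.sample(parents, 2)
            C = 0.5 * (c1 + c2) * random.uniform(0.9, 1.1)  # crossover
            g = 0.5 * (g1 + g2) * random.uniform(0.9, 1.1)  # + mutation
            children.append((C, g))
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

C_best, gamma_best = evolve()
print(C_best, gamma_best)
```

    Because each fitness evaluation would normally involve a full SVM training run, the population size and generation count dominate the cost, which is why the paper also discusses fast implementation issues.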

  18. A method of vehicle license plate recognition based on PCANet and compressive sensing

    NASA Astrophysics Data System (ADS)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manual feature extraction of traditional methods for vehicle license plates is not robust to diverse variations. Moreover, the high dimension of the features extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from character images. Then, a very sparse measurement matrix that satisfies the Restricted Isometry Property (RIP) condition of compressed sensing is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is trained to recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and runtime. Compared with the variant without compressive sensing, the proposed method has a lower feature dimension and therefore higher efficiency.
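
    The dimension-reduction step can be sketched with an Achlioptas-style very sparse random matrix, one common construction with RIP-type guarantees; the feature and measurement dimensions below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Achlioptas-style very sparse random measurement matrix: entries are
# +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1 - 1/s, 1/(2s).
# With s = 3, about two thirds of the entries are exactly zero.
def sparse_measurement_matrix(m, n, s=3.0):
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    vals = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(m, n), p=probs)
    return vals / np.sqrt(m)

features = rng.normal(size=(5, 1024))       # stand-in for PCANet features
Phi = sparse_measurement_matrix(128, 1024)  # project 1024 -> 128 dims
compressed = features @ Phi.T
print(compressed.shape)  # → (5, 128)
```

    The sparsity of the matrix keeps the projection cheap (most multiplications are by zero), which is the efficiency gain the abstract refers to.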

  19. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
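
    Step (3), adjusting the projected stand table to satisfy linear stand-level constraints, can be sketched as an equality-constrained least squares problem solved via its KKT system; the diameter classes and constraint values below are illustrative, not from the paper:

```python
import numpy as np

# Find adjusted class frequencies x closest (in least squares) to the
# unadjusted projection x0, subject to linear constraints A x = b.
def adjust_stand_table(x0, A, b):
    """Minimize ||x - x0||^2 s.t. A x = b via the KKT system."""
    n, m = len(x0), len(b)
    K = np.block([[np.eye(n), A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([x0, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

d = np.array([10.0, 14.0, 18.0, 22.0])     # class midpoint diameters (cm)
x0 = np.array([120.0, 80.0, 40.0, 10.0])   # projected trees per class
ba = np.pi * (d / 200.0) ** 2              # basal area per tree (m^2)
# Illustrative constraints: total trees = 245, total basal area = 5 m^2.
A = np.vstack([np.ones_like(d), ba])
b = np.array([245.0, 5.0])
x = adjust_stand_table(x0, A, b)
print(x)
```

    The KKT formulation gives the exact minimum-norm adjustment in one linear solve; a production implementation would also enforce nonnegative class frequencies.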

  20. Comparison of Classification Methods for Detecting Emotion from Mandarin Speech

    NASA Astrophysics Data System (ADS)

    Pao, Tsang-Long; Chen, Yu-Te; Yeh, Jun-Heng

    It is said that technology comes out of humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought or imagined. If computers are able to perceive and respond to human emotion, human-computer interaction will be more natural. Several classifiers have been adopted for automatically assigning an emotion category, such as anger, happiness or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compared several popular classification methods and evaluated their performance by applying them to a Mandarin speech corpus consisting of five basic emotions: anger, happiness, boredom, sadness and neutral. The extracted feature streams contain MFCC, LPCC, and LPC features. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compared these classifiers on another Mandarin expressive speech corpus consisting of two emotions. The experimental results again show that the proposed WD-MKNN outperforms the others.

  1. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric and parametric components. Under defined regularity conditions and with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, our variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.

  2. Color preservation for tone reproduction and image enhancement

    NASA Astrophysics Data System (ADS)

    Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh

    2014-01-01

    Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an overly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea for realizing this method. In addition, a lightness difference metric and a colorfulness difference metric are proposed to evaluate the performance of color preservation methods. Evaluation shows that the proposed method performs consistently better than existing approaches.

  3. Consistent estimate of ocean warming, land ice melt and sea level rise from Observations

    NASA Astrophysics Data System (ADS)

    Blazquez, Alejandro; Meyssignac, Benoît; Lemoine, Jean Michel

    2016-04-01

    Based on the sea level budget closure approach, this study investigates the consistency of observed Global Mean Sea Level (GMSL) estimates from satellite altimetry, observed Ocean Thermal Expansion (OTE) estimates from in-situ hydrographic data (based on Argo above 2000 m depth and oceanic cruises below), and GRACE observations of land water storage and land ice melt for the period January 2004 to December 2014. The consistency between these datasets is a key issue if we want to constrain missing contributions to sea level rise, such as the deep ocean contribution. Numerous previous studies have addressed this question by summing the different contributions to sea level rise and comparing the total to satellite altimetry observations (see for example Llovel et al. 2015, Dieng et al. 2015). Here we propose a novel approach which consists of correcting GRACE solutions over the ocean (essentially corrections of stripes and leakage from ice caps) with mass observations deduced from the difference between satellite altimetry GMSL and in-situ hydrographic OTE estimates. We check that the resulting corrected GRACE solutions are consistent with the original GRACE estimates of the geoid spherical harmonic coefficients within error bars, and we compare the resulting GRACE estimates of land water storage and land ice melt with independent results from the literature. This method provides a new mass redistribution from GRACE consistent with observations from altimetry and OTE. We test the sensitivity of this method to the deep ocean contribution and to the GIA models, and propose best estimates.

  4. A simplified focusing and astigmatism correction method for a scanning electron microscope

    NASA Astrophysics Data System (ADS)

    Lu, Yihua; Zhang, Xianmin; Li, Hai

    2018-01-01

    Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
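
    The two steps can be sketched as follows, using image moments of the thresholded FFT as a simple stand-in for the ellipse fitting; the threshold fraction and the synthetic anisotropically blurred test image are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: threshold the FFT magnitude of the SEM image.
# Step 2: estimate the orientation/elongation of the bright central
# region from its second moments (a moment-based ellipse estimate).
def fft_ellipse(image, frac=0.1):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    mask = mag > frac * mag.max()                 # thresholded FFT
    ys, xs = np.nonzero(mask)
    ys, xs = ys - ys.mean(), xs - xs.mean()
    cov = np.cov(np.vstack([xs, ys]))             # second moments
    evals, evecs = np.linalg.eigh(cov)
    elongation = np.sqrt(evals[1] / evals[0])     # axis-length ratio
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction
    return elongation, angle

# An image blurred more along x than y yields an FFT stretched along y:
# elongation > 1 signals astigmatism-like directional defocus.
img = rng.normal(size=(64, 64))
img = np.apply_along_axis(
    lambda r: np.convolve(r, np.ones(9) / 9, "same"), 1, img)
elong, ang = fft_ellipse(img)
print(elong)
```

    A circular thresholded FFT (elongation near 1) indicates pure defocus or focus, while an elongated one indicates astigmatism, with the stretching direction given by the major-axis angle, matching the relationships the abstract describes.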

  5. Gear fatigue crack prognosis using embedded model, gear dynamic model and fracture mechanics

    NASA Astrophysics Data System (ADS)

    Li, C. James; Lee, Hyungdae

    2005-07-01

    This paper presents a model-based method that predicts the remaining useful life of a gear with a fatigue crack. The method consists of an embedded model to identify gear meshing stiffness from measured gear torsional vibration; an inverse method to estimate crack size from the estimated meshing stiffness; a gear dynamic model to simulate gear meshing dynamics and determine the dynamic load on the cracked tooth; and a fast crack propagation model to forecast the remaining useful life based on the estimated crack size and dynamic load. The fast crack propagation model was established to avoid repeated finite element method (FEM) calculations and to facilitate field deployment of the proposed method. Experimental studies were conducted to validate and demonstrate the feasibility of the proposed method for prognosis of a cracked gear.

  6. Nonrigid registration of 3D longitudinal optical coherence tomography volumes with choroidal neovascularization

    NASA Astrophysics Data System (ADS)

    Wei, Qiangding; Shi, Fei; Zhu, Weifang; Xiang, Dehui; Chen, Haoyu; Chen, Xinjian

    2017-02-01

    In this paper, we propose a 3D registration method for retinal optical coherence tomography (OCT) volumes. The proposed method consists of five main steps: First, a projection image of the 3D OCT scan is created. Second, the vessel enhancement filter is applied on the projection image to detect vessel shadow. Third, landmark points are extracted based on both vessel positions and layer information. Fourth, the coherent point drift method is used to align retinal OCT volumes. Finally, a nonrigid B-spline-based registration method is applied to find the optimal transform to match the data. We applied this registration method on 15 3D OCT scans of patients with Choroidal Neovascularization (CNV). The Dice coefficients (DSC) between layers are greatly improved after applying the nonrigid registration.

  7. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. 
Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273

  8. Benchmark data sets for structure-based computational target prediction.

    PubMed

    Schomburg, Karen T; Rarey, Matthias

    2014-08-25

    Structure-based computational target prediction methods identify potential targets for a bioactive compound. Methods based on protein-ligand docking still face many challenges, of which the greatest is probably the ranking of true targets in a large data set of protein structures. Currently, no standard data sets for evaluation exist, rendering comparison and demonstration of method improvements cumbersome. Therefore, we propose two data sets and evaluation strategies for a meaningful evaluation of new target prediction methods: a small data set consisting of three target classes for detailed proof-of-concept and selectivity studies, and a large data set consisting of 7992 protein structures and 72 drug-like ligands allowing statistical evaluation with performance metrics on a drug-like chemical space. Both data sets are built from openly available resources, and all information needed to perform the described experiments is reported. We describe the composition of the data sets, the setup of screening experiments, and the evaluation strategy. Performance metrics capable of measuring early recognition of enrichment, such as AUC, BEDROC, and NSLR, are proposed. We apply a sequence-based target prediction method to the large data set to analyze its content of nontrivial evaluation cases. The proposed data sets are used to evaluate our new inverse screening method, iRAISE. The small data set reveals the method's capability, and its limitations, in selectively distinguishing between rather similar protein structures. The large data set simulates real target identification scenarios. iRAISE achieves excellent or good enrichment in 55% of cases, a median AUC of 0.67, and RMSDs below 2.0 Å for 74% of cases, and ranks the first true target in the top 2% of the protein data set of about 8000 structures in 59 of 72 cases.

  9. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach

    PubMed Central

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-01-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505

  10. Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.

    PubMed

    Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei

    2015-08-01

    Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.

  11. Smart house-based optimal operation of thermal unit commitment for a smart grid considering transmission constraints

    NASA Astrophysics Data System (ADS)

    Howlader, Harun Or Rashid; Matayoshi, Hidehito; Noorzad, Ahmad Samim; Muarapaz, Cirio Celestino; Senjyu, Tomonobu

    2018-05-01

    This paper presents a smart house-based power system for a thermal unit commitment programme. The proposed power system consists of smart houses, renewable energy plants and conventional thermal units. Transmission constraints are considered for the proposed system: the power generated by a large-capacity renewable energy plant can violate transmission constraints in the thermal unit commitment programme, so these constraints must be taken into account. This paper focuses on the optimal operation of the thermal units incorporating controllable loads, such as the electric vehicles and heat pump water heaters of the smart houses. The proposed method is compared with thermal unit operation without controllable loads and with optimal operation without transmission constraints. Simulation results validate the proposed method.

  12. Beam-steering efficiency optimization method based on a rapid-search algorithm for liquid crystal optical phased array.

    PubMed

    Xiao, Feng; Kong, Lingjiang; Chen, Jian

    2017-06-01

    A rapid-search algorithm to improve the beam-steering efficiency of a liquid crystal optical phased array is proposed and experimentally demonstrated in this paper. The proposed algorithm, in which the steering efficiency is taken as the objective function and the controlling voltage codes are the optimization variables, consists of a detection stage and a construction stage. It optimizes the steering efficiency in the detection stage and adaptively adjusts its search direction in the construction stage to avoid getting caught in the wrong search space. Simulations were conducted to compare the proposed algorithm with the widely used pattern-search algorithm, using convergence rate and optimized efficiency as criteria. Beam-steering optimization experiments were performed to verify the validity of the proposed method.
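
    The pattern-search baseline the authors compare against can be sketched as a simple compass search over integer control codes; the quadratic objective below is a toy surrogate for the measured steering efficiency, and the code range and step schedule are illustrative:

```python
# Simplified compass/pattern search over integer control codes: probe
# each axis at +/- step, keep any improvement, and halve the step when
# no probe improves the objective.
def pattern_search(objective, x0, step=32, min_step=1, lo=0, hi=255):
    x = list(x0)
    best = objective(x)
    while step >= min_step:
        improved = False
        for i in range(len(x)):            # probe each axis (+/- step)
            for delta in (step, -step):
                trial = list(x)
                trial[i] = min(hi, max(lo, trial[i] + delta))
                val = objective(trial)
                if val > best:
                    x, best, improved = trial, val, True
        if not improved:
            step //= 2                     # refine around current point
    return x, best

# Toy objective peaking at codes (200, 56).
f = lambda v: -((v[0] - 200) ** 2 + (v[1] - 56) ** 2)
x, best = pattern_search(f, [128, 128])
print(x, best)  # → [200, 56] 0
```

    Such a search can stall in a wrong region of a multimodal objective, which is the failure mode the proposed construction stage is designed to escape by adaptively changing the search direction.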

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. 
    On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
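
    The SR idea, approximating a target as a sparse linear combination of training examples, can be illustrated with a generic orthogonal matching pursuit solver; the dictionary sizes and synthetic 2-sparse target are illustrative, and the authors' actual formulation and ICP-based correspondence handling differ:

```python
import numpy as np

# Orthogonal matching pursuit: greedily select k dictionary columns and
# refit least squares coefficients on the selected support.
def omp(D, y, k):
    """Select k columns of dictionary D to approximate y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef         # refit and update
    return support, coef

rng = np.random.default_rng(3)
D = rng.normal(size=(100, 20))          # 20 training examples as columns
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 4] - 1.5 * D[:, 11]      # truly 2-sparse target
support, coef = omp(D, y, k=2)
print(sorted(support))
```

    In the surface reconstruction setting, the columns would be training point clouds (after ICP correspondence), and the sparse coefficients would propagate to the surface manifold to produce the reconstruction.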

  14. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. 
On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347

  15. YoTube: Searching Action Proposal Via Recurrent and Static Regression Networks

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyuan; Vial, Romain; Lu, Shijian; Peng, Xi; Fu, Huazhu; Tian, Yonghong; Cao, Xianbin

    2018-06-01

    In this paper, we present YoTube, a novel network fusion framework for searching action proposals in untrimmed videos, where each action proposal corresponds to a spatio-temporal video tube that potentially locates one human action. Our method consists of a recurrent YoTube detector and a static YoTube detector: the recurrent YoTube exploits the regression capability of RNNs to predict candidate bounding boxes from learnt temporal dynamics, while the static YoTube produces bounding boxes from rich appearance cues in a single frame. Both networks are trained using RGB and optical flow inputs in order to fully exploit appearance, motion and temporal context, and their outputs are fused to produce accurate and robust proposal boxes. Action proposals are finally constructed by linking these boxes using dynamic programming with a novel trimming method to handle untrimmed video effectively and efficiently. Extensive experiments on the challenging UCF-101 and UCF-Sports datasets show that the proposed technique obtains superior performance compared with the state of the art.

  16. Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization.

    PubMed

    Du, Xinxin; Tan, Kok Kiong

    2016-05-01

    Vehicle lane-level localization is a fundamental technology in autonomous driving. To achieve accurate and consistent performance, a common approach is to use the LIDAR technology. However, it is expensive and computationally demanding, and thus not a practical solution in many situations. This paper proposes a stereovision system, which is of low cost, yet also able to achieve high accuracy and consistency. It integrates a new lane line detection algorithm with other lane marking detectors to effectively identify the correct lane line markings. It also fits multiple road models to improve accuracy. An effective stereo 3D reconstruction method is proposed to estimate vehicle localization. The estimation consistency is further guaranteed by a new particle filter framework, which takes vehicle dynamics into account. Experimental results based on image sequences taken under different visual conditions showed that the proposed system can identify the lane line markings with 98.6% accuracy. The maximum estimation error of the vehicle distance to lane lines is 16 cm in daytime and 26 cm at night, and the maximum estimation error of its moving direction with respect to the road tangent is 0.06 rad in daytime and 0.12 rad at night. Due to its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles as a practical solution to vehicle lane-level localization.

  17. SU-E-J-89: Deformable Registration Method Using B-TPS in Radiotherapy.

    PubMed

    Xie, Y

    2012-06-01

    A novel deformable registration method for four-dimensional computed tomography (4DCT) images is developed in radiation therapy. The proposed method combines the thin plate spline (TPS) and B-spline together to achieve high accuracy and high efficiency. The method consists of two steps. First, TPS is used as a global registration method to deform large unfit regions in the moving image to match their counterparts in the reference image. Then B-spline is used for local registration, and the previously deformed moving image is further deformed to match the reference image more accurately. Two clinical CT image sets, including one pair of lung and one pair of liver, are registered using the proposed algorithm, which results in a tremendous improvement in both run-time and registration quality, compared with the conventional methods solely using either TPS or B-spline. The proposed method combines the efficiency of TPS and the accuracy of B-spline, and proves adaptive and robust in the registration of clinical 4DCT images. © 2012 American Association of Physicists in Medicine.

  18. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    NASA Astrophysics Data System (ADS)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Classification of staining patterns in Human Epithelial-2 (HEp-2) cell images has been widely used to identify autoimmune diseases via the anti-Nuclear Antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time-consuming, subjective and labor-intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Moreover, the available benchmark datasets are small, which is not well suited to deep learning methods. This directly limits classification accuracy even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, utilizing very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases, namely image preprocessing, feature extraction and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results over two benchmark datasets demonstrate that the proposed method achieves superior performance in terms of accuracy compared with existing methods.

  19. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
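
    For context, plain GIHS injection, which the proposed spectral modulation step refines, amounts to adding the same pan-minus-intensity detail to every band. A minimal sketch of that baseline (not the authors' full SM/edge-restoration method) is:

```python
import numpy as np

def gihs_fuse(ms, pan):
    """Generalized IHS fusion: inject the pan detail equally into each MS band.

    ms  : (H, W, B) multispectral image.
    pan : (H, W) panchromatic image.
    """
    intensity = ms.mean(axis=2)          # GIHS intensity component
    detail = pan - intensity             # spatial detail to inject
    return ms + detail[..., None]        # identical increment for every band

# tiny demo: constant bands, constant pan
ms = np.ones((2, 2, 3)) * np.array([0.2, 0.4, 0.6])
pan = np.full((2, 2), 0.9)
fused = gihs_fuse(ms, pan)
```

    Because the same increment is added to every band, band ratios (and hence saturation) change, which is exactly the spectral distortion the proposed SM step is designed to suppress.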

  20. Obstacle Avoidance for Quadcopter using Ultrasonic Sensor

    NASA Astrophysics Data System (ADS)

    Fazlur Rahman, Muhammad; Adhy Sasongko, Rianto

    2018-04-01

    An obstacle avoidance system is proposed. The system combines an available flight controller with a proposed avoidance method as a proof of concept. A quadcopter UAV is integrated with the system, which consists of several modes for performing avoidance. As in the previous study, obstacles are detected using an ultrasonic sensor and a servo. As a result, the quadcopter moves according to its mode and successfully avoids obstacles.

  1. A Gaussian Processes Technique for Short-term Load Forecasting with Considerations of Uncertainty

    NASA Astrophysics Data System (ADS)

    Ohmi, Masataro; Mori, Hiroyuki

    In this paper, an efficient method is proposed to deal with short-term load forecasting with Gaussian Processes. Short-term load forecasting plays a key role in smooth power system operation such as economic load dispatching, unit commitment, etc. Recently, the deregulated and competitive power market has increased the degree of uncertainty. As a result, it is more important to obtain better prediction results to save cost. One of the most important aspects is that power system operators need the upper and lower bounds of the predicted load to deal with the uncertainty, while they also require more accurate predicted values. The proposed method is based on the Bayes model, in which the output is expressed as a distribution rather than a point. To realize the model efficiently, this paper employs Gaussian Processes, which combine the Bayesian linear model with a kernel machine, to obtain the distribution of the predicted value. The proposed method is successfully applied to real data of daily maximum load forecasting.
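
    A minimal numpy sketch of Gaussian Process regression with an RBF kernel shows how the predictive distribution yields both a point forecast and the uncertainty bounds the abstract emphasizes. The kernel choice and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, noise=0.1):
    """GP regression with an RBF kernel: predictive mean and variance at Xs."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))        # noisy training covariance
    Ks = k(Xs, X)                                    # test-train covariance
    mean = Ks @ np.linalg.solve(K, y)                # predictive mean
    var = sf ** 2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# toy "load curve": a sine, predicted at an unseen point
X = np.linspace(0, 2 * np.pi, 20)
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([np.pi / 2]), noise=0.01)
```

    The operator-facing bounds are then simply mean ± 2·sqrt(var), which is the practical payoff of predicting a distribution rather than a point.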

  2. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only a 0.37% increase in BD-rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  3. Online Denoising Based on the Second-Order Adaptive Statistics Model.

    PubMed

    Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei

    2017-07-20

    Online denoising is motivated by real-time applications in industrial processes, where the data must be utilizable soon after it is collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method was proposed to achieve the processing of practical measurement data with colored noise, and the characteristics of the colored noise were considered in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first is to estimate the system state based on the second-order adaptive statistics model, and the other is to update the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation process was implemented via the Kalman filter in a recursive way, and online operation was therefore attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. Results show that the proposed method not only dealt with signals corrupted by colored noise, but also achieved a tradeoff between efficiency and accuracy.
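
    The Yule-Walker half of the loop can be illustrated standalone: fitting a second-order autoregressive (colored-noise) model from sample autocorrelations. The simulated signal and model order below are assumptions for the sketch, not the paper's data; in the paper, the fitted coefficients parameterize the state model that the Kalman filter then uses.

```python
import numpy as np

def yule_walker_ar2(x):
    """Estimate AR(2) coefficients a1, a2 from sample autocorrelations."""
    x = x - x.mean()
    r = np.array([x[: len(x) - k] @ x[k:] for k in range(3)]) / len(x)
    R = np.array([[r[0], r[1]],
                  [r[1], r[0]]])        # Toeplitz autocorrelation matrix
    return np.linalg.solve(R, r[1:])    # Yule-Walker normal equations

# simulate colored noise: x_t = 0.6 x_{t-1} + 0.2 x_{t-2} + e_t
rng = np.random.default_rng(1)
x = np.zeros(20000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] + 0.2 * x[t - 2] + rng.standard_normal()
a_hat = yule_walker_ar2(x)
```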

  4. Palpation simulator with stable haptic feedback.

    PubMed

    Kim, Sang-Youn; Ryu, Jee-Hwan; Lee, WooJeong

    2015-01-01

    The main difficulty in constructing palpation simulators is to compute and to generate stable and realistic haptic feedback without vibration. When a user haptically interacts with highly non-homogeneous soft tissues through a palpation simulator, a sudden change of stiffness in target tissues causes unstable interaction with the object. We propose a model consisting of a virtual adjustable damper and an energy measuring element. The energy measuring element gauges energy which is stored in a palpation simulator and the virtual adjustable damper dissipates the energy to achieve stable haptic interaction. To investigate the haptic behavior of the proposed method, impulse and continuous inputs are provided to target tissues. If a haptic interface point meets with the hardest portion in the target tissues modeled with a conventional method, we observe unstable motion and feedback force. However, when the target tissues are modeled with the proposed method, a palpation simulator provides stable interaction without vibration. The proposed method overcomes a problem in conventional haptic palpation simulators where unstable force or vibration can be generated if there is a big discrepancy in material property between an element and its neighboring elements in target tissues.
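
    One common realization of this energy-observer-plus-damper idea is time-domain passivity control; the sketch below assumes that framework (the paper's exact model may differ). Whenever the observed energy output goes negative (the simulator is generating energy), an adjustable damper injects just enough dissipation to restore passivity and suppress vibration.

```python
def passivity_controlled_force(forces, velocities, dt):
    """Energy observer + adjustable virtual damper (hypothetical sketch).

    forces/velocities: per-step interaction force and tool velocity.
    Returns the damped force sequence actually displayed to the user.
    """
    E = 0.0                                   # observed energy stored so far
    out = []
    for f, v in zip(forces, velocities):
        E += f * v * dt                       # energy flowing through the port
        alpha = 0.0
        if E < 0.0 and abs(v) > 1e-9:         # active behavior detected
            alpha = -E / (v * v * dt)         # damping that dissipates exactly -E
        f_d = f + alpha * v                   # damper modifies the output force
        E += alpha * v * v * dt               # account for the dissipated energy
        out.append(f_d)
    return out

# toy trace: the second step would release more energy than was stored
out = passivity_controlled_force([1.0, -2.0], [1.0, 1.0], 1.0)
```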

  5. An accurate reactive power control study in virtual flux droop control

    NASA Astrophysics Data System (ADS)

    Wang, Aimeng; Zhang, Jia

    2017-12-01

    This paper investigates the problem of reactive power sharing based on the virtual flux droop method. Firstly, the flux droop control method is derived, avoiding complicated multiple feedback loops and parameter tuning. Then, the reasons for inaccurate reactive power sharing are theoretically analyzed. Further, a novel reactive power control scheme is proposed which consists of three parts: compensation control, voltage recovery control and flux droop control. Finally, the proposed reactive power control strategy is verified in a simplified microgrid model with two parallel DGs. The simulation results show that the proposed control scheme can achieve accurate reactive power sharing and zero deviation of voltage. Meanwhile, it offers simple control together with excellent dynamic and static performance.

  6. Finger-Vein Verification Based on Multi-Features Fusion

    PubMed Central

    Qin, Huafeng; Qin, Lan; Xue, Lian; He, Xiping; Yu, Chengbo; Liang, Xinyuan

    2013-01-01

    This paper presents a new scheme to improve the performance of finger-vein identification systems. Firstly, a vein pattern extraction method to extract the finger-vein shape and orientation features is proposed. Secondly, to accommodate the potential local and global variations at the same time, a region-based matching scheme is investigated by employing the Scale Invariant Feature Transform (SIFT) matching method. Finally, the finger-vein shape, orientation and SIFT features are combined to further enhance the performance. The experimental results on databases of 426 and 170 fingers demonstrate the consistent superiority of the proposed approach. PMID:24196433

  7. A Method for Extracting Important Segments from Documents Using Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Suzuki, Daisuke; Utsumi, Akira

    In this paper we propose an extraction-based method for automatic summarization. The proposed method consists of two processes: important segment extraction and sentence compaction. The process of important segment extraction classifies each segment in a document as important or not by Support Vector Machines (SVMs). The process of sentence compaction then determines grammatically appropriate portions of a sentence for a summary according to its dependency structure and the classification result by SVMs. To test the performance of our method, we conducted an evaluation experiment using the Text Summarization Challenge (TSC-1) corpus of human-prepared summaries. The result was that our method achieved better performance than a segment-extraction-only method and the Lead method, especially for sentences only a part of which was included in human summaries. Further analysis of the experimental results suggests that a hybrid method that integrates sentence extraction with segment extraction may generate better summaries.

  8. Automatic liver tumor segmentation on computed tomography for patient treatment planning and monitoring

    PubMed Central

    Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin

    2016-01-01

    Segmentation of liver tumors from Computed Tomography (CT) and tumor burden analysis play an important role in the choice of therapeutic strategies for liver diseases and treatment monitoring. In this paper, a new segmentation method for liver tumors from contrast-enhanced CT imaging is proposed. As manual segmentation of tumors for liver treatment planning is both labor-intensive and time-consuming, a highly accurate automatic tumor segmentation is desired. The proposed framework is fully automatic, requiring no user interaction. The proposed segmentation, evaluated on real-world clinical data from patients, is based on a hybrid method integrating cuckoo optimization and the fuzzy c-means algorithm with the random walkers algorithm. The accuracy of the proposed method was validated using a clinical liver dataset with one of the highest tumor counts utilized for liver tumor segmentation, 127 tumors in total, with further validation of the results by a consultant radiologist. The proposed method was able to achieve one of the highest accuracies reported in the literature for liver tumor segmentation compared to other segmentation methods, with a mean overlap error of 22.78% and a dice similarity coefficient of 0.75 on the 3Dircadb dataset, and a mean overlap error of 15.61% and a dice similarity coefficient of 0.81 on the MIDAS dataset. The proposed method was able to outperform most other tumor segmentation methods reported in the literature, representing an overlap error improvement of 6% compared to one of the best-performing automatic methods in the literature. The proposed framework was able to provide consistently accurate results considering the number of tumors and the variations in tumor contrast enhancements and tumor appearances, while the tumor burden was estimated with a mean error of 0.84% on the 3Dircadb dataset. PMID:27540353

  9. Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.

    PubMed

    Herrero, David; Martínez, Humberto

    2011-01-01

    This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated by radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. Besides, the proposed approach is compared with a probabilistic technique showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.

  10. Ant colony algorithm for clustering in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Subekti, R.; Sari, E. R.; Kusumawati, R.

    2018-03-01

    This research aims to describe portfolio optimization using clustering methods with an ant colony approach. Two stock portfolios of LQ45 Indonesia are proposed based on the cluster results obtained from ant colony optimization (ACO). The first portfolio consists of assets with ant colony displacement opportunities beyond the probability limits defined by the researcher, where the weight of each asset is determined by the mean-variance method. The second portfolio consists of two assets, with the assumption that each asset is a cluster formed from ACO. The first portfolio has a better performance compared to the second portfolio as measured by the Sharpe index.
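
    The mean-variance weighting used for the first portfolio reduces, in its simplest risk-only form, to the global minimum-variance solution, which has a closed form. The two-asset covariance numbers below are made up for illustration.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w proportional to inv(Sigma)·1,
    normalized so the weights sum to 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# illustrative 2-asset covariance matrix of returns
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_variance_weights(cov)   # the lower-variance asset gets the larger weight
```

    With expected returns and a risk-free rate added, the same machinery yields the Sharpe-ratio comparison the abstract reports.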

  11. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping (SLAM) problem, which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While keeping the scalability advantage, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well.

  12. New spatial diversity equalizer based on PLL

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    A new Spatial Diversity Equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE combines an equal-gain combining technique based on the well-known constant modulus algorithm (CMA) for blind equalization with a PLL. Compared with a conventional SDE, the proposed SDE offers not only a faster convergence rate and lower residual error but also the ability to recover carrier phase rotation. The efficiency of the method is demonstrated by computer simulation.
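
    The CMA core of such an equalizer can be sketched for a real-valued (BPSK-like) signal; the tap count, step size, and toy ISI channel below are assumptions, and the PLL and diversity-combining stages are omitted.

```python
import numpy as np

def cma_equalize(x, n_taps=7, mu=0.005, R2=1.0):
    """Blind CMA equalizer: adapt FIR taps so |y|^2 approaches the modulus R2."""
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                         # centre-spike initialization
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]                # regressor, most recent first
        y[n] = w @ u
        e = y[n] * (R2 - y[n] ** 2)              # CMA error term
        w += mu * e * u                          # stochastic gradient step
    return y, w

# BPSK through a simple ISI channel 1 + 0.4 z^-1
rng = np.random.default_rng(2)
s = rng.choice([-1.0, 1.0], size=8000)
x = s + 0.4 * np.concatenate(([0.0], s[:-1]))
y, w = cma_equalize(x)
```

    The dispersion cost E[(|y|² - R2)²] needs no training sequence, which is what makes the equalizer blind; the PLL then removes any residual carrier phase rotation that CMA cannot resolve.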

  13. An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism

    NASA Astrophysics Data System (ADS)

    Nagata, Yuichi

    Niching GAs have been widely investigated to apply genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we suggest a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can be applied also to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improve the performance of GAs.

  14. On Consistency Test Method of Expert Opinion in Ecological Security Assessment

    PubMed Central

    Wang, Lihong

    2017-01-01

    To reflect the initiative design and initiative of human security management and safety warning, ecological safety assessment is of great value. In the comprehensive evaluation of regional ecological security with the participation of experts, the expert’s individual judgment level, ability and the consistency of the expert’s overall opinion will have a very important influence on the evaluation result. This paper studies the consistency measure and consensus measure based on the multiplicative and additive consistency property of fuzzy preference relation (FPR). We firstly propose the optimization methods to obtain the optimal multiplicative consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure by computing the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure by computing the distance between the original collective judgment and the optimal collective estimation. In the end, we make a case study on ecological security for five cities. Result shows that the optimal FPRs are helpful in measuring the consistency degree of individual judgment and the consensus degree of collective judgment. PMID:28869570
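
    The additive-consistency notion underlying the paper's optimization models can be sketched as follows. Building the consistent FPR from a priority vector and measuring a distance-based consistency degree are standard constructions; the paper's actual optimization models for the optimal estimates differ.

```python
import numpy as np

def consistent_fpr(w):
    """Additively consistent fuzzy preference relation from priority weights:
    r_ij = 0.5 + (w_i - w_j) / 2, which satisfies r_ij + r_jk - 0.5 = r_ik."""
    return 0.5 + (w[:, None] - w[None, :]) / 2

def consistency_degree(R, R_star):
    """1 minus the normalized Manhattan distance between a judgment R and its
    consistent estimate R_star; 1 means perfectly consistent."""
    n = R.shape[0]
    return 1.0 - np.abs(R - R_star).sum() / (n * n)

# demo: three alternatives with priority weights summing to 1
w = np.array([0.5, 0.3, 0.2])
R_star = consistent_fpr(w)
degree = consistency_degree(R_star, R_star)
```

    The consensus measure in the paper is the analogous distance computed between an individual (or collective) judgment matrix and the optimal collective estimate.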

  15. On Consistency Test Method of Expert Opinion in Ecological Security Assessment.

    PubMed

    Gong, Zaiwu; Wang, Lihong

    2017-09-04

    To reflect the initiative design and initiative of human security management and safety warning, ecological safety assessment is of great value. In the comprehensive evaluation of regional ecological security with the participation of experts, the expert's individual judgment level, ability and the consistency of the expert's overall opinion will have a very important influence on the evaluation result. This paper studies the consistency measure and consensus measure based on the multiplicative and additive consistency property of fuzzy preference relation (FPR). We firstly propose the optimization methods to obtain the optimal multiplicative consistent and additively consistent FPRs of individual and group judgments, respectively. Then, we put forward a consistency measure by computing the distance between the original individual judgment and the optimal individual estimation, along with a consensus measure by computing the distance between the original collective judgment and the optimal collective estimation. In the end, we make a case study on ecological security for five cities. Result shows that the optimal FPRs are helpful in measuring the consistency degree of individual judgment and the consensus degree of collective judgment.

  16. Hesitant Fuzzy Linguistic Preference Utility Set and Its Application in Selection of Fire Rescue Plans

    PubMed Central

    Si, Guangsen; Xu, Zeshui

    2018-01-01

    Hesitant fuzzy linguistic term set provides an effective tool to represent uncertain decision information. However, the semantics corresponding to the linguistic terms in it cannot accurately reflect the decision-makers’ subjective cognition. In general, different decision-makers’ sensitivities towards the semantics are different. Such sensitivities can be represented by the cumulative prospect theory value function. Inspired by this, we propose a linguistic scale function to transform the semantics corresponding to linguistic terms into the linguistic preference values. Furthermore, we propose the hesitant fuzzy linguistic preference utility set, based on which, the decision-makers can flexibly express their distinct semantics and obtain the decision results that are consistent with their cognition. For calculations and comparisons over the hesitant fuzzy linguistic preference utility sets, we introduce some distance measures and comparison laws. Afterwards, to apply the hesitant fuzzy linguistic preference utility sets in emergency management, we develop a method to obtain objective weights of attributes and then propose a hesitant fuzzy linguistic preference utility-TOPSIS method to select the best fire rescue plan. Finally, the validity of the proposed method is verified by some comparisons of the method with other two representative methods including the hesitant fuzzy linguistic-TOPSIS method and the hesitant fuzzy linguistic-VIKOR method. PMID:29614019

  17. Comparative study of methods for recognition of an unknown person's action from a video sequence

    NASA Astrophysics Data System (ADS)

    Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun

    2009-02-01

    This paper proposes a Tensor Decomposition Based method that can recognize an unknown person's action from a video sequence, where the unknown person is not included in the database (tensor) used for the recognition. The tensor consists of persons, actions and time-series image features. For the observed unknown person's action, one of the actions stored in the tensor is assumed. Using the motion signature obtained from this assumption, the unknown person's actions are synthesized. The actions of one of the persons in the tensor are replaced by the synthesized actions. Then, the core tensor for the replaced tensor is computed. This process is repeated over the actions and persons. For each iteration, the difference between the replaced and original core tensors is computed. The assumption that gives the minimal difference is the action recognition result. For the time-series image features to be stored in the tensor and to be extracted from the observed video sequence, a feature based on the contour shape of the human body silhouette is used. To show the validity of our proposed method, it is experimentally compared with the Nearest Neighbor rule and a Principal Component Analysis based method. Experiments using seven kinds of actions performed by 33 persons show that our proposed method achieves better recognition accuracies for the seven actions than the other methods.

  18. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
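
    The single-step inversion has the closed form of Tikhonov-regularized deconvolution in k-space. A minimal sketch, ignoring masking, preprocessing, and the L-curve selection of the regularization weight, is:

```python
import numpy as np

def ln_qsm_sketch(total_field, lam=0.05):
    """Tikhonov-regularized single-step dipole inversion (least-norm sketch):
    chi = F^{-1}[ conj(D) F(f) / (|D|^2 + lam) ], with dipole kernel D."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(s) for s in total_field.shape],
                             indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    # dipole kernel for B0 along z; D(k=0) set to 1/3 by convention
    D = 1.0 / 3.0 - np.divide(kz ** 2, k2, out=np.zeros_like(k2), where=k2 > 0)
    F = np.fft.fftn(total_field)
    return np.fft.ifftn(np.conj(D) * F / (np.abs(D) ** 2 + lam)).real

# round trip on a susceptibility whose modes avoid the dipole's zero cone
chi_true = np.broadcast_to(np.cos(2 * np.pi * np.arange(16) / 16),
                           (16, 16, 16)).copy()
field = chi_true / 3.0          # forward model: D = 1/3 for k_z = 0 modes
chi_rec = ln_qsm_sketch(field, lam=1e-6)
```

    The regularizer keeps the solution bounded near the conical surface where D vanishes, which is what makes the single-step, whole-head inversion well posed without brain extraction.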

  19. Hesitant Fuzzy Linguistic Preference Utility Set and Its Application in Selection of Fire Rescue Plans.

    PubMed

    Liao, Huchang; Si, Guangsen; Xu, Zeshui; Fujita, Hamido

    2018-04-03

    Hesitant fuzzy linguistic term set provides an effective tool to represent uncertain decision information. However, the semantics corresponding to the linguistic terms in it cannot accurately reflect the decision-makers' subjective cognition. In general, different decision-makers' sensitivities towards the semantics are different. Such sensitivities can be represented by the cumulative prospect theory value function. Inspired by this, we propose a linguistic scale function to transform the semantics corresponding to linguistic terms into the linguistic preference values. Furthermore, we propose the hesitant fuzzy linguistic preference utility set, based on which, the decision-makers can flexibly express their distinct semantics and obtain the decision results that are consistent with their cognition. For calculations and comparisons over the hesitant fuzzy linguistic preference utility sets, we introduce some distance measures and comparison laws. Afterwards, to apply the hesitant fuzzy linguistic preference utility sets in emergency management, we develop a method to obtain objective weights of attributes and then propose a hesitant fuzzy linguistic preference utility-TOPSIS method to select the best fire rescue plan. Finally, the validity of the proposed method is verified by some comparisons of the method with other two representative methods including the hesitant fuzzy linguistic-TOPSIS method and the hesitant fuzzy linguistic-VIKOR method.

  20. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.

  1. Technology selection for ballast water treatment by multi-stakeholders: A multi-attribute decision analysis approach based on the combined weights and extension theory.

    PubMed

    Ren, Jingzheng

    2018-01-01

    The objective of this study is to develop a generic multi-attribute decision analysis framework for ranking technologies for ballast water treatment and determining their grades. An evaluation criteria system consisting of eight criteria in four categories was used to evaluate the technologies for ballast water treatment. The Best-Worst method (a subjective weighting method) and the Criteria Importance Through Inter-criteria Correlation (CRITIC) method (an objective weighting method) were combined to determine the weights of the evaluation criteria. Extension theory was employed to prioritize the technologies for ballast water treatment and determine their grades. An illustrative case including four technologies for ballast water treatment, i.e. Alfa Laval (T1), Hyde (T2), Unitor (T3), and NaOH (T4), was studied by the proposed method, and Hyde (T2) was recognized as the best technology. Sensitivity analysis was also carried out to investigate the effects of the combined coefficients and the weights of the evaluation criteria on the final priority order of the four technologies for ballast water treatment. The weighted sum method and TOPSIS were also employed to rank the four technologies, and the results determined by these two methods are consistent with those determined by the proposed method in this study. Copyright © 2017 Elsevier Ltd. All rights reserved.
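
    The weight combination and a TOPSIS-style ranking (used in the paper as a cross-check of the extension-theory result) can be sketched generically. The blending coefficient, criteria values, and weights below are made-up illustrations, and the extension-theory grading itself is not shown.

```python
import numpy as np

def combined_weights(w_subjective, w_objective, alpha=0.5):
    """Blend subjective (e.g. Best-Worst) and objective (e.g. CRITIC) weights."""
    w = alpha * w_subjective + (1 - alpha) * w_objective
    return w / w.sum()

def topsis(X, w, benefit):
    """TOPSIS closeness scores. X: alternatives x criteria; benefit: bool mask
    marking criteria where larger is better."""
    V = X / np.linalg.norm(X, axis=0) * w            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg + 1e-12)           # closeness in [0, 1]

# hypothetical 4 alternatives x 3 benefit criteria
w_s = np.array([0.5, 0.3, 0.2])
w_o = np.array([0.2, 0.4, 0.4])
w = combined_weights(w_s, w_o)
X = np.array([[9.0, 9, 9],
              [7.0, 8, 6],
              [6.0, 5, 7],
              [5.0, 6, 5]])
scores = topsis(X, w, np.array([True, True, True]))
```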

  2. 3D Measurement of Anatomical Cross-sections of Foot while Walking

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Mochimaru, Masaaki; Kanade, Takeo

    Recently, techniques for measuring and modeling the human body have been attracting attention, because human models are useful for ergonomic design in manufacturing. We aim to measure the accurate shape of the human foot, which will be useful for the design of shoes. For this purpose, shape measurement of the foot in motion is clearly important, because the foot is deformed inside the shoe while walking or running. In this paper, we propose a method to measure anatomical cross-sections of the foot while walking. The dynamic shape of anatomical cross-sections has not previously been measured, even though such cross-sections are basic and widely used in the field of biomechanics. The proposed method is based on multi-view stereo. The target cross-sections are painted in individual colors (red, green, yellow, and blue), and the method exploits the characteristics of the target shape in the captured camera images. Several nonlinear conditions are introduced to find consistent correspondences across all images. Our target accuracy is an error of less than 1 mm, comparable to existing 3D scanners for static foot measurement. In our experiments, the proposed method achieved this accuracy.

  3. Labeled RFS-Based Track-Before-Detect for Multiple Maneuvering Targets in the Infrared Focal Plane Array.

    PubMed

    Li, Miao; Li, Jun; Zhou, Yiyu

    2015-12-08

    The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially in the case of uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts: the MM-LMB filter and the MM-LMB smoother. For the MM-LMB filter, the original LMB filter is applied to track-before-detect based on the target and measurement models, and is integrated with the interacting multiple model (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and the posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show that the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing.

  5. Consistent linguistic fuzzy preference relations method with ranking fuzzy numbers

    NASA Astrophysics Data System (ADS)

    Ridzuan, Siti Amnah Mohd; Mohamad, Daud; Kamis, Nor Hanimah

    2014-12-01

    Multi-Criteria Decision Making (MCDM) methods have been developed to help decision makers select the best criteria or alternatives from the given options. One of the well-known methods in MCDM is the Consistent Fuzzy Preference Relations (CFPR) method, which essentially utilizes a pairwise comparison approach. This method was later improved to cater for subjectivity in the data by using fuzzy sets, and is known as the Consistent Linguistic Fuzzy Preference Relations (CLFPR) method. The CLFPR method uses the additive transitivity property in the evaluation of pairwise comparison matrices. However, the calculation involved is lengthy and cumbersome. To overcome this problem, defuzzification methods were introduced by researchers. Nevertheless, defuzzification has a major setback: some information may be lost in the simplification process. In this paper, we propose a CLFPR method that preserves the fuzzy-number form throughout the process. To obtain the desired ordering, a method for ranking fuzzy numbers is utilized in the procedure. The improved procedure is applied to a case study to verify its effectiveness. The method is useful for solving decision-making problems and can be applied in many areas.
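    For illustration, one common way to rank triangular fuzzy numbers only at the final ordering step is centroid-based ranking; the specific ranking method used by the authors is not stated here, so both the ranking rule and the scores below are assumptions.

    ```python
    def centroid(tfn):
        """Centroid x-coordinate of a triangular fuzzy number (a, b, c)."""
        a, b, c = tfn
        return (a + b + c) / 3.0

    def rank_alternatives(scores):
        """Order alternatives by the centroids of their fuzzy scores, best first."""
        return sorted(scores, key=lambda name: centroid(scores[name]), reverse=True)

    # hypothetical fuzzy scores for three alternatives
    scores = {"A1": (0.2, 0.4, 0.6), "A2": (0.5, 0.7, 0.9), "A3": (0.1, 0.3, 0.5)}
    order = rank_alternatives(scores)
    ```

    The fuzzy form is kept through the whole comparison procedure and collapsed to a crisp value only when the final ordering is needed.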

  6. Kinematic modeling of a 7-degree of freedom spatial hybrid manipulator for medical surgery.

    PubMed

    Singh, Amanpreet; Singla, Ekta; Soni, Sanjeev; Singla, Ashish

    2018-01-01

    The prime objective of this work is to deal with the kinematics of spatial hybrid manipulators. In this direction, in 1955, Denavit and Hartenberg proposed a consistent and concise method, known as the D-H parameters method, to deal with the kinematics of open serial chains. From the literature, it is found that the D-H parameter method is widely used to model manipulators consisting of lower pairs. However, the method leads to ambiguities when applied to closed-loop, tree-like and hybrid manipulators. Furthermore, in the absence of any direct method to model such manipulators, revisions of this method have been proposed from time to time by different researchers. One such revision, using the concept of dummy frames, has been successfully proposed and implemented by the authors on spatial hybrid manipulators. In that work, the authors addressed the orientational inconsistency of the D-H parameter method, restricted to body-attached frames only. In the current work, the condition of body-attached frames is relaxed and spatial frame attachment is considered to derive the kinematic model of a 7-degree-of-freedom spatial hybrid robotic arm, along with the development of closed-loop constraints. The new kinematic model has been validated with the help of a prototype of this 7-degree-of-freedom arm, which is being developed at the Council of Scientific & Industrial Research-Central Scientific Instruments Organisation, Chandigarh, to aid the surgeon during a medical surgical task. Furthermore, the developed kinematic model is used to derive the first column of the Jacobian matrix, which provides an estimate of the tip velocity of the manipulator when the first joint velocity is known.

  7. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for automatically extracting discontinuity orientation from a rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against orientations measured in the field. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted orientations are compared with those from the method of Riquelme et al. (2014). The results show that the presented method is reliable, of high accuracy, and able to meet engineering needs.
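    The plane-fitting step (3) can be sketched with a minimal RANSAC loop over a synthetic point cloud; the tolerance, iteration count, and test data below are assumptions for illustration, not the paper's settings.

    ```python
    import numpy as np

    def fit_plane(pts):
        """Least-squares plane through points: returns (unit normal, centroid)."""
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c)
        return vt[-1], c  # singular vector of the smallest singular value is the normal

    def ransac_plane(pts, n_iter=200, tol=0.01, rng=np.random.default_rng(0)):
        """Minimal RANSAC: fit a plane to 3 random points repeatedly and keep
        the candidate with the most inliers within distance tol, then refit."""
        best_inliers = None
        for _ in range(n_iter):
            sample = pts[rng.choice(len(pts), 3, replace=False)]
            n, c = fit_plane(sample)
            inliers = np.abs((pts - c) @ n) < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return fit_plane(pts[best_inliers])

    # synthetic demo: 100 points on the plane z = 0 plus one gross outlier
    data_rng = np.random.default_rng(1)
    xy = data_rng.uniform(0.0, 1.0, (100, 2))
    pts = np.vstack([np.c_[xy, np.zeros(100)], [[0.5, 0.5, 5.0]]])
    normal, center = ransac_plane(pts)
    ```

    In a discontinuity-mapping context the recovered normal would then be converted to dip direction and dip angle; that conversion is omitted here.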

  8. An efficient direct method for image registration of flat objects

    NASA Astrophysics Data System (ADS)

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research with many practical applications. Alignment methods can be roughly divided into two types: feature-based and direct. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit pixel intensities without resorting to image features; image-based deformation is a general direct method for aligning images of deformable objects in 3D space. Nevertheless, it is not well suited to registering images of rigid 3D objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model suitable for image alignment of rigid flat objects under various illumination models. The brightness constancy assumption is used to reconstruct the optimal geometric transformation. Computer simulation results illustrate the performance of the proposed algorithm for computing the correspondence between pixels of two images.

  9. Double-layered microstrip metamaterial beam scanning leaky wave antenna with consistent gain and low cross-polarization

    NASA Astrophysics Data System (ADS)

    An, Yong-li; Tan, Yi-li; Zhang, Hong-bo; Wu, Guo-cheng

    2017-12-01

    In this paper, a novel double-layered microstrip metamaterial beam scanning leaky wave antenna (LWA) is proposed and investigated to achieve consistent gain and low cross-polarization. Thanks to the phase constant changing continuously from negative to positive values over the passband of the double-layered microstrip metamaterial, the proposed LWA, which consists of 20 identical microstrip metamaterial unit cells, obtains continuous beam scanning from backward to forward directions. The proposed LWA is fabricated and measured. The fabricated antenna achieves a continuous beam scanning angle of 140° over the operating frequency band of 3.80-5.25 GHz (32%). The measured 3 dB gain bandwidth is 30.17%, with a maximum gain of 11.7 dB. Besides, the measured cross-polarization of the fabricated antenna remains at least 30 dB below the co-polarization across the entire radiation region. Moreover, the measured and simulated results are in good agreement with each other, indicating the significance and effectiveness of this method.

  10. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. For relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model that directly approximates the target point cloud as a sparse linear combination of the training set, assuming that the point correspondences built by iterative closest point (ICP) registration are reasonably accurate, with residual errors following a Gaussian distribution. To accommodate changing noise levels and/or inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model that models the potentially large and sparse ICP error with a Laplacian prior. The authors evaluated the proposed method both on clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions, quantifying reconstruction performance by root-mean-squared error against the variational method. On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude, to under a second. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions.
The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantification.
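    The linear-combination idea behind the SR model can be sketched with a small ridge regression standing in for the sparsity prior; the data are synthetic and this is not the authors' implementation.

    ```python
    import numpy as np

    def reconstruct(target, training, lam=1e-3):
        """Approximate a flattened target point cloud as a linear combination of
        flattened training clouds (the columns of `training`). A small ridge
        penalty stands in here for the sparsity prior of the SR model."""
        A = training
        w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ target)
        return A @ w, w

    # synthetic demo: the target is an exact combination of two training clouds
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 4))            # 4 training clouds of 50 coordinates each
    w_true = np.array([1.0, 0.0, 2.0, 0.0])
    target = A @ w_true
    recon, w = reconstruct(target, A)
    ```

    A faithful implementation would replace the ridge term with an L1 (sparsity) penalty, and the MSR variant would additionally model large sparse ICP errors.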

  11. Breast mass segmentation in mammograms combining fuzzy c-means and active contours

    NASA Astrophysics Data System (ADS)

    Hmida, Marwa; Hamrouni, Kamel; Solaiman, Basel; Boussetta, Sana

    2018-04-01

    Segmentation of breast masses in mammograms is a challenging issue due to the nature of mammography and the characteristics of masses. In fact, mammographic images are poor in contrast, and breast masses have various shapes and densities with fuzzy, ill-defined borders. In this paper, we propose a method based on a modified Chan-Vese active contour model for mass segmentation in mammograms. We conduct the experiment on mass regions of interest (ROIs) extracted from the MIAS database. The proposed method consists mainly of three stages. First, the ROI is preprocessed to enhance contrast. Next, two fuzzy membership maps are generated from the preprocessed ROI using the fuzzy C-means algorithm. These fuzzy membership maps are finally used to modify the energy of the Chan-Vese model and to perform the final segmentation. Experimental results indicate that the proposed method yields good mass segmentation results.
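    The membership-map stage can be sketched with a plain fuzzy C-means on flattened pixel intensities; the initialization and parameters below are assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    def fcm_memberships(x, n_clusters=2, m=2.0, n_iter=50):
        """Plain fuzzy C-means on flattened pixel intensities x.
        Returns membership maps u (n_clusters x n_pixels) and cluster centers."""
        # initialize centers spread over the intensity range
        centers = np.quantile(x, np.linspace(0.1, 0.9, n_clusters))
        for _ in range(n_iter):
            d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # avoid divide-by-zero
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=0)                  # memberships sum to 1 per pixel
            um = u ** m
            centers = (um @ x) / um.sum(axis=1)  # fuzzy-weighted cluster means
        return u, centers

    # demo: two well-separated intensity clusters
    x = np.r_[np.zeros(20), np.ones(20)]
    u, centers = fcm_memberships(x)
    ```

    In the paper's pipeline, maps like `u` (reshaped to the ROI) would then reweight the Chan-Vese energy rather than being thresholded directly.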

  12. An approach to the interpretation of Cole-Davidson and Cole-Cole dielectric functions

    NASA Astrophysics Data System (ADS)

    Iglesias, T. P.; Vilão, G.; Reis, João Carlos R.

    2017-08-01

    Assuming that a dielectric sample can be described by Debye's model at each frequency, a method based on Cole's treatment is proposed for the direct estimation, at experimental frequencies, of relaxation times and the corresponding static and infinite-frequency permittivities. These quantities, and the link between dielectric strength and mean molecular dipole moment at each frequency, could be useful for analyzing dielectric relaxation processes. The method is applied to samples that follow a Cole-Cole or a Cole-Davidson dielectric function, and a physical interpretation of these dielectric functions is proposed. The behavior of relaxation time with frequency distinguishes the two dielectric functions. The proposed method can also be applied to samples following a Havriliak-Negami or any other dielectric function. The dielectric relaxation of a nanofluid consisting of graphene nanoparticles dispersed in the oil squalane is reported and discussed within the novel framework.
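    The two dielectric functions discussed above can be written down directly; at α = 0 the Cole-Cole form and at β = 1 the Cole-Davidson form both reduce to the Debye model. The parameter values in the usage comments are illustrative.

    ```python
    def cole_cole(omega, eps_inf, d_eps, tau, alpha):
        """Cole-Cole function: eps*(w) = eps_inf + d_eps / (1 + (i*w*tau)**(1-alpha))."""
        return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

    def cole_davidson(omega, eps_inf, d_eps, tau, beta):
        """Cole-Davidson function: eps*(w) = eps_inf + d_eps / (1 + i*w*tau)**beta."""
        return eps_inf + d_eps / (1.0 + 1j * omega * tau) ** beta

    # usage with illustrative parameters: eps_inf = 2, d_eps = 3, tau = 2 ns
    eps_cc = cole_cole(0.5e9, 2.0, 3.0, 2e-9, 0.3)
    eps_cd = cole_davidson(0.5e9, 2.0, 3.0, 2e-9, 0.8)
    ```

    In the low-frequency limit both functions approach the static permittivity eps_inf + d_eps, and the loss peak broadens (Cole-Cole) or becomes asymmetric (Cole-Davidson) as the shape parameter departs from the Debye value.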

  13. Depth profile measurement with lenslet images of the plenoptic camera

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. First, these images are processed directly with a refocusing technique to obtain the depth map, without the need to align and decode the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike in the traditional method, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.

  14. Shear wave mapping of skeletal muscle using shear wave wavefront reconstruction based on ultrasound color flow imaging

    NASA Astrophysics Data System (ADS)

    Yamakoshi, Yoshiki; Yamamoto, Atsushi; Kasahara, Toshihiro; Iijima, Tomohiro; Yuminaka, Yasushi

    2015-07-01

    We have proposed a quantitative shear wave imaging technique for continuous shear wave excitation. The shear wave wavefront is observed directly by color flow imaging using a general-purpose ultrasonic imaging system. In this study, the proposed method is applied in vivo, and shear wave maps, namely the shear wave phase map, which shows the shear wave propagation inside the medium, and the shear wave velocity map, are obtained for the skeletal muscle of the shoulder. To excite the shear wave inside the skeletal muscle, a hybrid ultrasonic wave transducer, which combines a small vibrator with an ultrasonic wave probe, is adopted. The shear wave velocity of the supraspinatus muscle measured by the proposed method is 4.11 ± 0.06 m/s (N = 4). This value is consistent with that obtained by the acoustic radiation force impulse method.

  15. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces unbiased estimates of the group means with correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
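    The distinction drawn above, the mean response averaged over the covariate distribution versus the response at the mean covariate, can be illustrated for a logistic model with hypothetical coefficients. This is a sketch of the general idea, not the authors' estimator or its variance formula.

    ```python
    import numpy as np

    def expit(z):
        return 1.0 / (1.0 + np.exp(-z))

    # hypothetical fitted logistic model: logit(p) = b0 + b_trt*trt + b_cov*x
    b0, b_trt, b_cov = 1.0, 0.5, 1.0
    x = np.array([-2.0, 0.0, 2.0])  # observed covariate values

    def group_mean(trt):
        """Average the fitted probabilities over every subject's covariate value."""
        return expit(b0 + b_trt * trt + b_cov * x).mean()

    def mean_covariate(trt):
        """The response evaluated at the mean covariate (what many packages report)."""
        return float(expit(b0 + b_trt * trt + b_cov * x.mean()))

    gm = group_mean(0)      # population-averaged response for the control group
    mc = mean_covariate(0)  # response at the mean covariate; differs from gm
    ```

    Because the inverse link is nonlinear, the two quantities differ (Jensen's inequality); only the first targets the group mean in the studied population.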

  16. Development of Support Service for Prevention and Recovery from Dementia and Science of Lethe

    NASA Astrophysics Data System (ADS)

    Otake, Mihoko

    The purpose of this study is to explore a service design method through the development of a support service for the prevention of and recovery from dementia, toward a science of lethe. We designed and implemented a conversation support service via the coimagination method based on the multiscale service design method, both proposed by the author. The multiscale service model consists of tool, event, human, network, style, and rule; service elements at different scales are developed according to the model. Interactive conversation supported by the coimagination method activates cognitive functions so as to prevent the progression of dementia. This paper proposes theoretical bases for the science of lethe. First, it describes the relationship between the coimagination method and three cognitive functions that decline in mild cognitive impairment: division of attention, planning, and episodic memory. Second, it presents a thought-state transition model during conversation, which describes cognitive enhancement via interactive communication. Third, a set-theoretical measure of interaction is proposed for evaluating the effectiveness of conversation for cognitive enhancement. Simulation results suggest that ideas which cannot be explored by each speaker alone are explored during interactive conversation. Finally, the coimagination method is compared with reminiscence therapy, and the possibility of combining them is discussed.

  17. An extended algebraic reconstruction technique (E-ART) for dual spectral CT.

    PubMed

    Zhao, Yunsong; Zhao, Xing; Zhang, Peng

    2015-03-01

    Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. However, it is generally difficult to reconstruct images from polychromatic projections acquired by DSCT, because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system and then extends the classic ART method to solve it. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for different X-ray spectra. Another feature is its high degree of parallelism, which makes the method suitable for acceleration on GPUs (graphics processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High-quality images are reconstructed with the proposed method from the polychromatic projections of DSCT, and the reconstructed images remain satisfactory even when there are errors in the estimated X-ray spectra.

  18. Vectorized Jiles-Atherton hysteresis model

    NASA Astrophysics Data System (ADS)

    Szymański, Grzegorz; Waszak, Michał

    2004-01-01

    This paper deals with vector hysteresis modeling. A vector model consisting of individual Jiles-Atherton components placed along the principal axes is proposed, with cross-axis coupling ensuring general vector model properties. Minor loops are obtained using a scaling method. The model is intended for efficient finite element method computations defined in terms of the magnetic vector potential; numerical efficiency is ensured by a differential susceptibility approach.

  19. Staged marginal contoured and central excision technique in the surgical management of perianal Paget's disease.

    PubMed

    Möller, Mecker G; Lugo-Baruqui, Jose Alejandro; Milikowski, Clara; Salgado, Christopher J

    2014-04-01

    Extramammary Paget's disease (EMPD) is an adenocarcinoma of the apocrine glands of unknown exact prevalence and obscure etiology. It has been divided into primary EMPD and secondary EMPD, the latter usually associated with an internal malignancy. Treatment for primary EMPD usually consists of wide lesion excision with negative margins. Multiple methods have been proposed to obtain free-margin status, including visible border lesion excision, punch biopsies, and micrographic and frozen-section surgery, with differing results but still high recurrence rates. The investigators propose a method consisting of a staged contoured marginal excision using "en face" permanent pathologic analysis, followed by central excision of the lesion and final reconstruction of the surgical defect. Advantages of this method include adequate margin control allowing final reconstruction and tissue preservation, while minimizing patient discomfort. The staged contoured marginal and central excision technique offers surgical oncologists a new alternative for the management of EMPD, in which margin control is imperative for controlling recurrence rates. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Integrated carbon and chlorine isotope modeling: applications to chlorinated aliphatic hydrocarbons dechlorination.

    PubMed

    Jin, Biao; Haderlein, Stefan B; Rolle, Massimo

    2013-02-05

    We propose a self-consistent method to predict the evolution of carbon and chlorine isotope ratios during degradation of chlorinated hydrocarbons. The method treats explicitly the cleavage of isotopically different C-Cl bonds and thus considers, simultaneously, combined carbon-chlorine isotopologues. To illustrate the proposed modeling approach we focus on the reductive dehalogenation of chlorinated ethenes. We compare our method with the currently available approach, in which carbon and chlorine isotopologues are treated separately. The new approach provides an accurate description of dual-isotope effects regardless of the extent of the isotope fractionation and physical characteristics of the experimental system. We successfully applied the new approach to published experimental results on dehalogenation of chlorinated ethenes both in well-mixed systems and in situations where mass-transfer limitations control the overall rate of biodegradation. The advantages of our self-consistent dual isotope modeling approach proved to be most evident when isotope fractionation factors of carbon and chlorine differed significantly and for systems with mass-transfer limitations, where both physical and (bio)chemical transformation processes affect the observed isotopic values.
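    For contrast with the authors' isotopologue-explicit model, the conventional approach that treats the two elements separately can be sketched with independent Rayleigh equations; the enrichment factors and initial deltas below are hypothetical.

    ```python
    import numpy as np

    def rayleigh_delta(delta0, f, eps):
        """Isotope signature (permil) of the residual substrate after a remaining
        fraction f, under Rayleigh fractionation with enrichment factor eps (permil)."""
        return (delta0 + 1000.0) * f ** (eps / 1000.0) - 1000.0

    # hypothetical enrichment factors for carbon and chlorine
    f = np.linspace(1.0, 0.1, 50)            # remaining substrate fraction
    dC = rayleigh_delta(-25.0, f, -15.0)     # delta13C of residual substrate
    dCl = rayleigh_delta(0.0, f, -3.0)       # delta37Cl of residual substrate

    # dual-isotope slope, approximately epsCl/epsC for small isotopic shifts
    slope = np.polyfit(dC - dC[0], dCl - dCl[0], 1)[0]
    ```

    The dual-isotope plot of dCl against dC is nearly linear with slope ~ epsCl/epsC; the paper's point is that this element-by-element treatment breaks down when fractionation is strong or mass transfer limits the reaction, which motivates the combined carbon-chlorine isotopologue model.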

  1. Multibody dynamic analysis using a rotation-free shell element with corotational frame

    NASA Astrophysics Data System (ADS)

    Shi, Jiabei; Liu, Zhuyong; Hong, Jiazhen

    2018-03-01

    Rotation-free shell formulation is a simple and effective method for modeling a shell with large deformation, and it is compatible with existing finite element theory. However, rotation-free shells are seldom employed in multibody systems. Using a derivative of rigid body motion, an efficient nonlinear shell model is proposed based on the rotation-free shell element and a corotational frame. The bending and membrane strains of the shell are simplified by isolating deformational displacements from the detailed description of rigid body motion, and the consistent stiffness matrix can be obtained easily in this form of shell model. To model multibody systems consisting of the presented shells, joint kinematic constraints, including translational and rotational constraints, are deduced in the context of the geometrically nonlinear rotation-free element. A simple node-to-surface contact discretization and a penalty method are adopted for contacts between shells. A series of multibody dynamics analyses is presented to validate the proposed formulation. Furthermore, the deployment of a large-scale solar array is presented to verify the comprehensive performance of the nonlinear shell model.

  2. Magnetic air capsule robotic system: proof of concept of a novel approach for painless colonoscopy.

    PubMed

    Valdastri, P; Ciuti, G; Verbeni, A; Menciassi, A; Dario, P; Arezzo, A; Morino, M

    2012-05-01

    Despite being considered the most effective method for colorectal cancer diagnosis, colonoscopy take-up as a mass-screening procedure is limited, mainly due to invasiveness, patient discomfort, fear of pain, and the need for sedation. In an effort to mitigate some of these disadvantages, this work provides a preliminary assessment of a novel endoscopic device consisting of a softly tethered capsule for painless colonoscopy under robotic magnetic steering. The proposed platform consists of the endoscopic device, a robotic unit, and a control box. In contrast to the traditional insertion method (i.e., pushing from behind), a "front-wheel" propulsion approach is proposed. A compliant tether connecting the device to an external box is used to provide insufflation, pass a flexible operative tool, enable lens cleaning, and operate the vision module. To assess the diagnostic and treatment ability of the platform, 12 users were asked to find and remove artificially implanted beads as polyp surrogates in an ex vivo model. In vivo testing consisted of a qualitative study of the platform in pigs, focusing on active locomotion, diagnostic and therapeutic capabilities, safety, and usability. The mean percentage of beads identified by each user during ex vivo trials was 85 ± 11%, and all identified beads were removed successfully using the polypectomy loop. The mean completion time for the entire procedure was 678 ± 179 s. No immediate mucosal damage, acute complications such as perforation, or delayed adverse consequences were observed following application of the proposed method in vivo. Use of the proposed platform in ex vivo and preliminary animal studies indicates that it is safe and operates effectively in a manner similar to a standard colonoscope, with the added advantages of reduced size, a front-wheel drive strategy, and robotic control over locomotion and orientation.

  3. Delay grid multiplexing: simple time-based multiplexing and readout method for silicon photomultipliers

    NASA Astrophysics Data System (ADS)

    Won, Jun Yeon; Ko, Guen Bae; Lee, Jae Sung

    2016-10-01

    In this paper, we propose a fully time-based multiplexing and readout method that uses the principle of the global positioning system. Time-based multiplexing simplifies the multiplexing circuits: the only multiplexing circuit needed is the innate traces that connect the signal pins of the silicon photomultiplier (SiPM) channels to the readout channels. Every SiPM channel is connected to a delay grid consisting of traces on a printed circuit board, and the inherent transit times from each SiPM channel to the readout channels uniquely encode the position information. Thus, the position of each SiPM can be identified using time-difference-of-arrival (TDOA) measurements. The proposed multiplexing also allows simplification of the readout circuit using a time-to-digital converter (TDC) implemented in a field-programmable gate array (FPGA), where time-over-threshold (ToT) is used to extract the energy information after multiplexing. To verify the proposed multiplexing method, we built a positron emission tomography (PET) detector consisting of an array of 4 × 4 LGSO crystals, each with dimensions of 3 × 3 × 20 mm³, and one-to-one coupled SiPM channels. We first employed a waveform sampler in an initial study, and then replaced it with an FPGA-TDC to further simplify the readout circuits. The 16 crystals were clearly resolved using only the time information obtained from the four readout channels. The coincidence resolving times (CRTs) were 382 and 406 ps FWHM when using the waveform sampler and the FPGA-TDC, respectively. The proposed simple multiplexing and readout methods can be useful for time-of-flight (TOF) PET scanners.
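    The TDOA decoding idea can be sketched with a toy delay grid; the 4×4 geometry, the per-step delay, and the Manhattan trace-length model below are assumptions for illustration, not the authors' actual board layout.

    ```python
    import numpy as np

    def decode_position(measured, signatures):
        """Return the index of the channel whose delay signature best matches
        the measured arrival-time differences (least-squares match)."""
        return int(((signatures - measured) ** 2).sum(axis=1).argmin())

    # hypothetical 4x4 delay grid: transit time from channel (r, c) to four
    # corner readout nodes, proportional to the Manhattan trace length
    step = 0.1  # ns of delay per grid step (assumed)
    corners = [(0, 0), (0, 3), (3, 0), (3, 3)]
    times = np.array([[step * (abs(r - cr) + abs(c - cc)) for cr, cc in corners]
                      for r in range(4) for c in range(4)])
    signatures = times - times[:, :1]  # TDOA relative to the first readout node
    ```

    Because every channel's set of arrival-time differences is distinct, a measured TDOA vector (even with a common time offset or small jitter) maps back to a unique SiPM position, which is what lets four readout channels resolve all 16 crystals.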

  4. Imperial Valley's proposal to develop a guide for geothermal development within its county

    NASA Technical Reports Server (NTRS)

    Pierson, D. E.

    1974-01-01

    A plan to develop the geothermal resources of the Imperial Valley of California is presented. The plan consists of development policies and includes text and graphics setting forth the objectives, principles, standards, and proposals. The plan allows developers to know the goals of the surrounding community and provides a method for decision making to be used by county representatives. A summary impact statement for the geothermal development aspects is provided.

  5. Atlas-based liver segmentation and hepatic fat-fraction assessment for clinical trials.

    PubMed

    Yan, Zhennan; Zhang, Shaoting; Tan, Chaowei; Qin, Hongxing; Belaroussi, Boubakeur; Yu, Hui Jing; Miller, Colin; Metaxas, Dimitris N

    2015-04-01

    Automated assessment of hepatic fat-fraction is clinically important. A robust and precise segmentation would enable accurate, objective and consistent measurement of hepatic fat-fraction for disease quantification, therapy monitoring and drug development. However, segmenting the liver in clinical trials is a challenging task due to the variability of liver anatomy as well as the diverse sources from which the images were acquired. In this paper, we propose an automated and robust framework for liver segmentation and assessment. It uses single statistical atlas registration to initialize a robust deformable model to obtain a fine segmentation. The fat-fraction map is computed using a chemical-shift-based method in the delineated liver region. The proposed method is validated on 14 abdominal magnetic resonance (MR) volumetric scans. The qualitative and quantitative comparisons show that our proposed method achieves better segmentation accuracy with less variance compared with two other atlas-based methods. Experimental results demonstrate the promise of our assessment framework. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Effective Heart Disease Detection Based on Quantitative Computerized Traditional Chinese Medicine Using Representation Based Classifiers.

    PubMed

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is commonly detected using blood tests, electrocardiograms, cardiac computerized tomography scans, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and is proven to be effective at heart disease detection.

  7. A Local Agreement Pattern Measure Based on Hazard Functions for Survival Outcomes

    PubMed Central

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K.

    2017-01-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this paper, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotic normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. PMID:28724196

  8. A local agreement pattern measure based on hazard functions for survival outcomes.

    PubMed

    Dai, Tian; Guo, Ying; Peng, Limin; Manatunga, Amita K

    2018-03-01

    Assessing agreement is often of interest in biomedical and clinical research when measurements are obtained on the same subjects by different raters or methods. Most classical agreement methods have focused on global summary statistics, which cannot be used to describe various local agreement patterns. The objective of this work is to study the local agreement pattern between two continuous measurements subject to censoring. In this article, we propose a new agreement measure based on bivariate hazard functions to characterize the local agreement pattern between two correlated survival outcomes. The proposed measure naturally accommodates censored observations, fully captures the dependence structure between bivariate survival times and provides detailed information on how the strength of agreement evolves over time. We develop a nonparametric estimation method for the proposed local agreement pattern measure and study theoretical properties including strong consistency and asymptotic normality. We then evaluate the performance of the estimator through simulation studies and illustrate the method using a prostate cancer data example. © 2017, The International Biometric Society.

  9. Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition

    PubMed Central

    Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen

    2018-01-01

    Underwater acoustic target recognition based on ship-radiated noise is a small-sample-size recognition problem. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) a standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve a classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than the other methods require. PMID:29570642

  10. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ϵ), where ϵ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.

  11. Color image segmentation with support vector machines: applications to road signs detection.

    PubMed

    Cyganek, Bogusław

    2008-08-01

    In this paper we propose an efficient color segmentation method based on the Support Vector Machine classifier operating in a one-class mode. The method has been developed especially for a road signs recognition system, although it can be used in other applications. The main advantage of the proposed method comes from the fact that the segmentation of characteristic colors is performed not in the original but in a higher dimensional feature space. In this space, better data encapsulation with a linear hypersphere can usually be achieved. Moreover, the classifier does not try to capture the whole distribution of the input data, which is often difficult to achieve. Instead, characteristic data samples, called support vectors, are selected which allow construction of the tightest hypersphere that encloses the majority of the input data. Classifying a test sample then simply consists of measuring its distance to the centre of the found hypersphere. The experimental results show high accuracy and speed of the proposed method.
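
    The distance-based classification step can be illustrated with a toy stand-in for the trained classifier: here the hypersphere is fit crudely (centroid and farthest point of some assumed "support vectors"), whereas the actual method obtains it from a one-class SVM in a higher-dimensional feature space. The colour values are made up for illustration.

```python
import math

def hypersphere(points):
    """Fit a crude enclosing hypersphere: centre = centroid of the (assumed)
    support vectors, radius = distance to the farthest one. A real one-class
    SVM solves a dual QP instead; this sketch only reproduces the
    classification geometry the abstract describes."""
    n = len(points)
    dims = len(points[0])
    centre = tuple(sum(p[i] for p in points) / n for i in range(dims))
    radius = max(math.dist(p, centre) for p in points)
    return centre, radius

def is_target_colour(sample, centre, radius):
    # Classification reduces to a single distance test against the centre.
    return math.dist(sample, centre) <= radius

# Toy "red sign" pixels in a 3-D colour feature space (assumed values):
reds = [(200, 30, 30), (210, 40, 35), (190, 25, 40), (205, 35, 28)]
c, r = hypersphere(reds)
print(is_target_colour((198, 33, 32), c, r))   # near the cluster -> True
print(is_target_colour((30, 200, 40), c, r))   # far away -> False
```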

  12. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaged in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), obtained by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign the link weight of every pair of Kansei adjectives as the value of the corresponding cell when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  13. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaged in design work. Accordingly, this study employs a numerical design structure matrix (NDSM), obtained by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign the link weight of every pair of Kansei adjectives as the value of the corresponding cell when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709
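
    The pipeline in the two records above (four-point-scale link weights assembled into an NDSM, then a genetic search for clusters) can be sketched as follows. The six adjectives, their link weights, and the mutation-only evolutionary scheme are illustrative assumptions; the paper's GA and fitness definition are more elaborate.

```python
import random

# Hypothetical NDSM for six Kansei adjectives: cell (i, j) holds the
# four-point-scale link weight (0-3) between adjectives i and j.
WORDS = ["sleek", "modern", "sporty", "warm", "cozy", "soft"]
NDSM = [
    [0, 3, 2, 0, 0, 1],
    [3, 0, 3, 1, 0, 0],
    [2, 3, 0, 0, 1, 0],
    [0, 1, 0, 0, 3, 2],
    [0, 0, 1, 3, 0, 3],
    [1, 0, 0, 2, 3, 0],
]

def fitness(assign, t=1.5):
    # Reward strongly linked pairs (weight > t) that share a cluster and
    # penalize weakly linked ones, so one giant cluster cannot win.
    return sum(NDSM[i][j] - t for i in range(len(assign))
               for j in range(i + 1, len(assign)) if assign[i] == assign[j])

def cluster(k=2, pop_size=30, generations=60, seed=1):
    """Toy (mu+lambda) genetic search with point mutation only; the paper's
    GA (crossover, selection scheme, etc.) is more sophisticated."""
    rng = random.Random(seed)
    pop = [[rng.randrange(k) for _ in WORDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # elitist: best genomes survive
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(len(child))] = rng.randrange(k)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = cluster()
groups = {g: [w for w, a in zip(WORDS, best) if a == g] for g in set(best)}
print(groups)
```

    With these weights the optimum partition separates the "sleek/modern/sporty" adjectives from "warm/cozy/soft"; cluster labels themselves are arbitrary.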

  14. Script-independent text line segmentation in freestyle handwritten documents.

    PubMed

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ([1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  15. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  16. Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and with measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment by a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. 
The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256

  17. Skeleton-based human action recognition using multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Ding, Wenwen; Liu, Kai; Cheng, Fei; Zhang, Jin; Li, YunSong

    2015-05-01

    Human action recognition and analysis has been an active research topic in computer vision for many years. This paper presents a method to represent human actions based on trajectories consisting of 3D joint positions. The method first decomposes an action into a sequence of meaningful atomic actions (actionlets), and then labels the actionlets with English letters according to their Davies-Bouldin index values. An action can therefore be represented as a sequence of actionlet symbols, which preserves the temporal order of occurrence of the actionlets. Finally, we classify actions by sequence comparison, using the Needleman-Wunsch string-matching algorithm. The effectiveness of the proposed method is evaluated on datasets captured by commodity depth cameras. Experiments on three challenging 3D action datasets show promising results.
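
    The classification-by-alignment step can be sketched directly, since Needleman-Wunsch is a standard dynamic program over two symbol strings. The scoring parameters and the actionlet strings below are made-up illustrations; only the algorithm itself comes from the abstract.

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment score between two actionlet strings (classic
    Needleman-Wunsch dynamic programming). Scores are illustrative."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align two symbols
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

def classify(query, templates):
    # Nearest-template classification: the highest alignment score wins.
    return max(templates, key=lambda label: needleman_wunsch(query, templates[label]))

templates = {"wave": "ABBC", "kick": "DDEF"}   # hypothetical actionlet sequences
print(classify("ABBX", templates))  # -> wave (three matching symbols)
```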

  18. Calibration-free absolute frequency response measurement of directly modulated lasers based on additional modulation.

    PubMed

    Zhang, Shangjian; Zou, Xinhai; Wang, Heng; Zhang, Yali; Lu, Rongguo; Liu, Yong

    2015-10-15

    A calibration-free electrical method is proposed for measuring the absolute frequency response of directly modulated semiconductor lasers based on additional modulation. The method achieves electrical-domain measurement of the modulation index of directly modulated lasers without the need to correct for the responsivity fluctuation in the photodetection. Moreover, it doubles the measurable frequency range by setting a specific frequency relationship between the direct and additional modulation. Both the absolute and relative frequency responses of semiconductor lasers are experimentally measured from the electrical spectrum of the twice-modulated optical signal, and the results are compared with those obtained by conventional methods to check consistency. The proposed method provides calibration-free, accurate measurement of high-speed semiconductor lasers with high-resolution electrical spectrum analysis.

  19. Fitted Fourier-pseudospectral methods for solving a delayed reaction-diffusion partial differential equation in biology

    NASA Astrophysics Data System (ADS)

    Adam, A. M. A.; Bashier, E. B. M.; Hashim, M. H. A.; Patidar, K. C.

    2017-07-01

    In this work, we design and analyze a fitted numerical method to solve a reaction-diffusion model with time delay, namely, a delayed version of a population model which is an extension of the logistic growth (LG) equation for a food-limited population proposed by Smith [F.E. Smith, Population dynamics in Daphnia magna and a new model for population growth, Ecology 44 (1963) 651-663]. Since a closed-form analytical solution is hard to obtain, we seek a robust numerical method. The method consists of a Fourier-pseudospectral semi-discretization in space and a fitted operator implicit-explicit scheme in the temporal direction. The proposed method is analyzed for convergence and found to be unconditionally stable. Illustrative numerical results will be presented at the conference.

  20. Measuring magnetic field vector by stimulated Raman transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wenli; Wei, Rong, E-mail: weirong@siom.ac.cn; Lin, Jinda

    2016-03-21

    We present a method for measuring the magnetic field vector in an atomic fountain by probing the line strength of stimulated Raman transitions. The relative line strength for a Λ-type level system in an existing magnetic field is theoretically analyzed. The magnetic field vector measured by our proposed method agrees well with that measured by the traditional bias magnetic field method, with an axial resolution of 6.1 mrad and a radial resolution of 0.16 rad. Dependences of the Raman transitions on laser polarization schemes are also analyzed. Our method offers potential advantages for magnetic field measurement: it requires no additional bias fields, is not limited by the magnetic field intensity, and extends the spatial measurement range. The proposed method can be widely used for measuring the magnetic field vector in other precision measurement fields.

  1. Impact of Positive Emotions Enhancement on Physiological Processes and Psychological Functioning in Military Pilots

    DTIC Science & Technology

    2009-10-01

    The experiment was performed during 6 to 8 weeks. The experimental procedure consisted in collecting (i) psychological data (resilience, well-being, anxiety), (ii) 12h-night urines to assess... For cardiovascular regulation, the spectral analysis of heart rate variability (HRV) is usually proposed as a method to assess vagal tone [7,2,8...

  2. Multitarget detection algorithm for automotive FMCW radar

    NASA Astrophysics Data System (ADS)

    Hyun, Eugin; Oh, Woo-Jin; Lee, Jong-Hun

    2012-06-01

    Today, 77 GHz FMCW (frequency-modulated continuous-wave) radar has strong advantages for range and velocity detection in automotive applications. However, FMCW radar produces ghost targets and missed targets in multi-target situations. In this paper, in order to resolve these limitations, we propose an effective pairing algorithm, which consists of two steps. In the proposed method, a waveform with different slopes in two periods is used. In the first pairing step, all combinations of range and velocity are obtained from each of the two wave periods. In the second pairing step, the results of the first step are used to detect the fine range and velocity. Here, we propose a range-velocity windowing technique to compensate for the non-ideal beat-frequency characteristic that arises from the non-linearity of the RF module. Experimental results show that the performance of the proposed algorithm is improved compared with that of the typical method.
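
    A minimal numeric sketch of the two-step pairing: each chirp slope yields one beat frequency per target, the first step solves every (beat, beat) combination for a candidate range and velocity, and the second step keeps only candidates inside a plausible range-velocity window, which discards the ghost combinations. The carrier, slopes, and window sizes are assumed values, not those of the paper's 77 GHz system.

```python
C = 3e8       # speed of light (m/s)
F0 = 77e9     # carrier frequency (Hz)

def beat(R, v, slope):
    # Beat frequency for range R and radial velocity v at a given chirp slope:
    # a range term plus the Doppler shift.
    return 2 * slope * R / C + 2 * v * F0 / C

def solve(f1, f2, s1, s2):
    # Invert the two linear beat equations (one per chirp slope) for (R, v).
    R = C * (f1 - f2) / (2 * (s1 - s2))
    v = (f1 - 2 * s1 * R / C) * C / (2 * F0)
    return R, v

def pair_targets(beats1, beats2, s1, s2, r_max=200.0, v_max=60.0):
    """Step 1: form every (f1, f2) combination and solve for (R, v).
    Step 2: keep only solutions inside the range-velocity window; ghost
    pairings produce physically implausible (R, v) and are rejected."""
    out = []
    for f1 in beats1:
        for f2 in beats2:
            R, v = solve(f1, f2, s1, s2)
            if 0 < R < r_max and abs(v) < v_max:
                out.append((round(R, 1), round(v, 1)))
    return out

s1, s2 = 50e12, 30e12                      # two chirp slopes (Hz/s), illustrative
targets = [(40.0, 10.0), (80.0, -5.0)]     # true (range m, velocity m/s)
beats1 = [beat(R, v, s1) for R, v in targets]
beats2 = [beat(R, v, s2) for R, v in targets]
print(pair_targets(beats1, beats2, s1, s2))  # -> [(40.0, 10.0), (80.0, -5.0)]
```

    With two targets there are four candidate pairings; the two cross-pairings solve to a negative range and an impossible velocity here, so only the true targets survive the window.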

  3. Ground-based full-sky imaging polarimeter based on liquid crystal variable retarders.

    PubMed

    Zhang, Ying; Zhao, Huijie; Song, Ping; Shi, Shaoguang; Xu, Wujian; Liang, Xiao

    2014-04-07

    A ground-based full-sky imaging polarimeter based on liquid crystal variable retarders (LCVRs) is proposed in this paper. The proposed instrument enables rapid detection of skylight polarization information over a hemispherical field of view in the visible band. The characteristics of the incidence angle of light on the LCVR are investigated based on electrically controlled birefringence. The imaging polarimeter with a hemispherical field of view is then designed. Furthermore, a polarization calibration method using field-of-view multiplexing and piecewise linear fitting is proposed, based on the rotational symmetry of the polarimeter, and the polarization calibration is implemented over the hemispherical field of view. The polarimeter is evaluated in a skylight imaging experiment. The measured distribution of the polarization angle agrees with that predicted by the Rayleigh scattering model to 90%, which confirms the effectiveness of the proposed imaging polarimeter.

  4. Ultrahigh-Dimensional Multiclass Linear Discriminant Analysis by Pairwise Sure Independence Screening

    PubMed Central

    Pan, Rui; Wang, Hansheng; Li, Runze

    2016-01-01

    This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis in an ultrahigh-dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh-dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real-life example on handwritten Chinese character recognition. PMID:28127109
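
    A toy version of pairwise screening: score each feature by its largest standardized mean difference over all class pairs and keep the top-ranked features. The scoring statistic here (pairwise mean difference over a pooled standard deviation) is a simplified stand-in for the paper's screening criterion, and the data are fabricated.

```python
import statistics

def pairwise_screen(X, y, keep=2):
    """Rank features by the maximum standardized pairwise mean difference
    across all class pairs, then keep the `keep` best-ranked features.
    A crude proxy for pairwise sure independence screening."""
    classes = sorted(set(y))
    p = len(X[0])
    scores = []
    for j in range(p):
        col = {c: [x[j] for x, lab in zip(X, y) if lab == c] for c in classes}
        s = 0.0
        for a_i, a in enumerate(classes):
            for b in classes[a_i + 1:]:
                pooled = statistics.pstdev(col[a] + col[b]) or 1.0
                diff = abs(statistics.fmean(col[a]) - statistics.fmean(col[b]))
                s = max(s, diff / pooled)
        scores.append((s, j))
    # Top `keep` scores, reported as sorted feature indices.
    return sorted(j for _, j in sorted(scores, reverse=True)[:keep])

# Feature 0 separates the three classes; features 1-2 are noise (assumed data).
X = [[0.0, 5.1, 3.3], [0.2, 5.0, 3.1],
     [2.0, 5.2, 3.2], [2.1, 4.9, 3.4],
     [4.0, 5.1, 3.0], [4.2, 5.0, 3.3]]
y = [0, 0, 1, 1, 2, 2]
print(pairwise_screen(X, y, keep=1))  # -> [0]
```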

  5. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    PubMed

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.

  6. Multiple imputation for cure rate quantile regression with censored data.

    PubMed

    Wu, Yuanshan; Yin, Guosheng

    2017-03-01

    The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.

  7. Research on the development of space target detecting system and three-dimensional reconstruction technology

    NASA Astrophysics Data System (ADS)

    Li, Dong; Wei, Zhen; Song, Dawei; Sun, Wenfeng; Fan, Xiaoyan

    2016-11-01

    With the development of space technology, the number of spacecraft and debris objects is increasing year by year. The demand for detection and identification of spacecraft, which support cataloguing, collision warning and protection of aerospace vehicles, is growing strongly. Most existing approaches to three-dimensional reconstruction rely on scattering-centre correlation based on the radar high-resolution range profile (HRRP). This paper proposes a novel method to reconstruct the three-dimensional scattering centre structure of a target from a sequence of radar ISAR images, which mainly consists of three steps. The first is azimuth scaling of consecutive ISAR images based on the fractional Fourier transform (FrFT). The second is extraction of scattering centres and matching between adjacent ISAR images using a grid method. Finally, according to the coordinate matrix of the scattering centres, the three-dimensional scattering centre structure is reconstructed using an improved factorization method. The three-dimensional structure is stable and intuitive, which provides a new way to improve the identification probability and reduce the complexity of the model matching library. A satellite model is reconstructed using the proposed method from four consecutive ISAR images. The simulation results show that the method achieves satisfactory consistency and accuracy.

  8. An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Yu, Hui; Wang, Chen-sheng

    2014-11-01

    Hyper-spectral remote sensing data are acquired by imaging the same area at multiple wavelengths, and normally consist of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and have been widely used in environment monitoring, mineral investigation and military reconnaissance. However, the large data volume makes hyper-spectral images very difficult to transmit and store, and dimension reduction techniques are needed to resolve this problem. Because the hyper-spectral bands are highly correlated and highly redundant, dimension reduction is a feasible way to compress the data volume. This paper proposes a novel band-selection-based dimension reduction method which adaptively selects the bands that contain more information and detail. The proposed method is based on principal component analysis (PCA): it computes an index for every band, and the indexes obtained are then ranked from largest to smallest. Based on a threshold, the system can adaptively and reasonably select bands. The proposed method overcomes the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. Its performance has been validated in several experiments. The experimental results show that the proposed algorithm can reduce the dimensions of a hyper-spectral image with little information loss by adaptively selecting band images.

  9. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising but challenging for three-dimensional displays. To address these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that, while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT. Optical experiments are carried out to validate the effectiveness of the proposed method.

  10. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.

    PubMed

    Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining

    2017-04-21

    Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Because of the characteristics of the pseudolite positioning system, namely that the geometry of the stationary pseudolites is invariant, the indoor signal is easily interrupted, and the first-order linear truncation error cannot be ignored, a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited to indoor pseudolite positioning. Considering the very low computational efficiency of conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a least-squares adjustment is conducted to ensure the reliability of the resolved ambiguity. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of AFM and has a more elaborate search ability than the conventional grid-search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the corrected ambiguity obtained from the proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
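    The coordinate-domain search can be illustrated with a minimal particle swarm optimizer. This is a generic textbook PSO, not the authors' improved IPSO: the inertia and acceleration constants below are common defaults, and `objective` stands in for the ambiguity-function value to be maximized over the candidate coordinate box.

```python
import numpy as np

def pso_search(objective, lower, upper, n_particles=30, n_iter=100, seed=0):
    """Minimal particle swarm search over a coordinate box.

    Maximizes `objective` (e.g. an ambiguity-function value) over the
    box [lower, upper].  Constants w, c1, c2 are textbook defaults,
    not the tuned IPSO values from the paper.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()
    w, c1, c2 = 0.72, 1.49, 1.49
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        vals = np.array([objective(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, pbest_val.max()
```

    For a toy unimodal objective peaking at a known coordinate, the swarm lands close to the peak without evaluating a grid.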

  11. Examining the impact of harmonic correlation on vibrational frequencies calculated in localized coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson-Heine, Magnus W. D., E-mail: magnus.hansonheine@nottingham.ac.uk

    Carefully choosing a set of optimized coordinates for performing vibrational frequency calculations can significantly reduce the anharmonic correlation energy from the self-consistent field treatment of molecular vibrations. However, moving away from normal coordinates also introduces an additional source of correlation energy arising from mode-coupling at the harmonic level. The impact of this new component of the vibrational energy is examined for a range of molecules, and a method is proposed for correcting the resulting self-consistent field frequencies by adding the full coupling energy from connected pairs of harmonic and pseudoharmonic modes, termed vibrational self-consistent field (harmonic correlation). This approach is found to lift the vibrational degeneracies arising from coordinate optimization and provides better agreement with experimental and benchmark frequencies than uncorrected vibrational self-consistent field theory without relying on traditional correlated methods.

  12. Efficient airport detection using region-based fully convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure and use graphics processing units (GPUs) to reduce training and testing time. Owing to the lack of labeled data, we transfer the convolutional layers of a ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrain the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time with high accuracy, which is much better than traditional methods.

  13. Efficient generation of holographic news ticker in holographic 3DTV

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Kim, Eun-Soo

    2009-08-01

    A news ticker is used to show breaking news or news headlines in conventional 2-D broadcasting systems. For breaking news, fast creation is needed because the information must be sent quickly; moreover, if holographic 3-D broadcasting begins in the future, the news ticker will remain. Several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method, but these methods either require long computation times or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate a holographic news ticker for holographic 3DTV or 3-D movies using the N-LUT method. The proposed method consists of five steps: construction of the LUT for each character, extraction of the characters in the news ticker, generation and shifting of the CGH pattern for the news ticker using the per-character LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the news ticker, and reconstruction of the holographic 3-D video with the news ticker. To verify the proposed method, a car moving in front of a castle is used as the 3-D video and the words 'HOLOGRAM CAPTION GENERATOR' are used as the news ticker. The simulation results confirm the feasibility of the proposed method for fast generation of CGH patterns for holographic captions.

  14. Prediction of paroxysmal atrial fibrillation using recurrence plot-based features of the RR-interval signal.

    PubMed

    Mohebbi, Maryam; Ghassemian, Hassan

    2011-08-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia and increases the risk of stroke. Predicting the onset of paroxysmal AF (PAF) based on noninvasive techniques is clinically important and can be invaluable in order to avoid useless therapeutic intervention and to minimize risks for the patients. In this paper, we propose an effective PAF predictor based on the analysis of the RR-interval signal. The method consists of three steps: preprocessing, feature extraction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and the RR-interval signal is extracted. In the next step, the recurrence plot (RP) of the RR-interval signal is obtained and five statistically significant features are extracted to characterize the basic patterns of the RP. These features consist of the recurrence rate, the length of the longest diagonal segment (Lmax), the average length of the diagonal lines (Lmean), the entropy, and the trapping time. Recurrence quantification analysis can reveal subtle aspects of dynamics not easily appreciated by other methods and exhibits characteristic patterns caused by the typical dynamical behavior. In the final step, a support vector machine (SVM)-based classifier is used for PAF prediction. The performance of the proposed method in predicting PAF episodes was evaluated using the Atrial Fibrillation Prediction Database (AFPDB), which consists of both 30 min ECG recordings that end just prior to the onset of PAF and segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, positive predictivity and negative predictivity were 97%, 100%, 100%, and 96%, respectively. The proposed methodology gives better results than other existing approaches.
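    The diagonal-line features named above (recurrence rate, Lmax, Lmean) can be computed from an RR-interval series as in this minimal sketch; the embedding step, the authors' threshold choices, and the entropy and trapping-time features are omitted for brevity.

```python
import numpy as np

def recurrence_features(rr, eps=0.1, lmin=2):
    """Recurrence rate and diagonal-line features of an RR-interval series.

    A simplified sketch: two samples are "recurrent" if they differ by
    less than `eps`; diagonal runs of at least `lmin` recurrent points
    give L_max and L_mean.
    """
    rr = np.asarray(rr, float)
    n = rr.size
    # recurrence matrix: points closer than eps are recurrent
    R = (np.abs(rr[:, None] - rr[None, :]) < eps).astype(int)
    rec_rate = R.sum() / (n * n)
    # lengths of diagonal line segments (excluding the main diagonal)
    lengths = []
    for k in range(1, n):
        run = 0
        for val in np.diagonal(R, offset=k):
            if val:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    lengths = [l for l in lengths if l >= lmin]
    l_max = max(lengths) if lengths else 0
    l_mean = float(np.mean(lengths)) if lengths else 0.0
    return rec_rate, l_max, l_mean
```

    A strictly periodic RR series produces long unbroken diagonals, which is the signature these features quantify.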

  15. The radioactive ion beams facility project for the legnaro laboratories

    NASA Astrophysics Data System (ADS)

    Tecchio, Luigi B.

    1999-04-01

    Within the framework of the Italian participation in the project of a high-intensity proton facility for the energy amplifier and nuclear waste transmutation, LNL is involved in the design and construction of prototypes of the injection system of the 1 GeV linac, which consists of a RFQ (5 MeV, 30 mA) followed by a 100 MeV linac. This program has already been financially supported and the work is in progress. In this context, LNL has proposed a project for the construction of a second-generation facility for the production of radioactive ion beams (RIBs) using the ISOL method. The final goal is the production of neutron-rich RIBs with masses ranging from 80 to 160, using primary beams of protons, deuterons and light ions with an energy of 100 MeV and 100 kW of power. This project is proposed to be developed over about 10 years, and intermediate milestones and experiments are foreseen and under consideration for the next INFN five-year plan (1999-2003). In this period, the construction is proposed of a proton/deuteron accelerator of 10 MeV energy and 10 mA current, consisting of a RFQ (5 MeV, 30 mA) and a linac (10 MeV, 10 mA), and of a neutron area dedicated to RIB production, BNCT applications and neutron physics. Some remarks on the production methods will be presented. The possibility of producing radioisotopes by means of neutron-induced fission will be investigated, and the methods of producing the neutrons will be discussed.

  16. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images

    PubMed Central

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-01-01

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and an infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods. PMID:26703596
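    Step (2), wavelet-domain image fusion, can be sketched with a one-level Haar transform in plain NumPy. The fusion rule below (average the approximation bands, keep the larger-magnitude detail coefficient) is an assumed stand-in for the paper's hybrid rule, which fuses different bands differently.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2] + img[1::2]) / 2.0   # rows: low-pass
    d = (img[0::2] - img[1::2]) / 2.0   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    cols = ll.shape[1] * 2
    a = np.empty((ll.shape[0], cols))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((ll.shape[0] * 2, cols))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    """Fuse two registered, even-sized images in the wavelet domain.

    Assumed rule: mean of approximation coefficients, max-magnitude
    selection for the detail coefficients.
    """
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return haar_idwt2(ll, *details)
```

    A quick sanity check is that fusing an image with itself reproduces the image, since the transform pair is exactly invertible.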

  17. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.

    PubMed

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-12-12

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and an infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods.

  18. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for the classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machine classification is applied. Then, at each iteration, the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and the classification probabilities are recomputed. An important contribution of this work is the estimation of a DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.
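    The step-wise merging loop can be sketched as follows. The dissimilarity criterion here is just the absolute difference of region means, a toy stand-in for the paper's combination of statistical, classification and geometrical features; the region and adjacency representation is also simplified.

```python
import numpy as np

def hierarchical_merge(values, neighbors, n_regions):
    """Greedily merge the two most similar neighboring regions.

    values: initial per-region feature value; neighbors: iterable of
    adjacent region-index pairs.  Merging continues until `n_regions`
    regions remain, always picking the adjacent pair with the smallest
    difference of region means (assumed DC).
    """
    regions = {i: [v] for i, v in enumerate(values)}
    adj = set(map(frozenset, neighbors))
    while len(regions) > n_regions and adj:
        # adjacent pair with the smallest dissimilarity
        pair = min(adj, key=lambda p: abs(np.mean(regions[min(p)]) -
                                          np.mean(regions[max(p)])))
        a, b = sorted(pair)
        regions[a].extend(regions.pop(b))
        # redirect b's adjacencies to a, dropping the merged pair
        new_adj = set()
        for p in adj:
            p2 = frozenset(a if r == b else r for r in p)
            if len(p2) == 2:
                new_adj.add(p2)
        adj = new_adj
    return regions
```

    On a chain of four regions with means (1.0, 1.1, 5.0, 5.2), merging down to two regions groups the similar pairs, as expected.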

  19. A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments

    NASA Astrophysics Data System (ADS)

    Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.

    2017-09-01

    We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan world assumption for indoor spaces and uses straight line segments detected in single images, together with their corresponding orthogonal vanishing points, to improve the feature matching scheme of the adopted visual SLAM system. Using the proposed method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3-D space are handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Initializing the system with single-image indoor layout features permits the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopt features that are invariant under scale, translation, and rotation. We propose a new feature matching cost function that considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes in York University campus buildings and on the publicly available RAWSEEDS dataset. The results indicate that the proposed method performs robustly, producing very limited position and orientation errors.
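    The two-term cost has a direct expression. The sketch below is only the skeleton of the cost described in the abstract; the equal weighting `w=0.5` and the mean-based aggregation are assumptions, not the authors' tuning.

```python
import numpy as np

def matching_cost(orient_a, orient_b, angles_a, angles_b, w=0.5):
    """Unary + binary matching cost for layout corner correspondences.

    Unary term: orientation differences of matched corners.
    Binary term: angle differences between directly connected corners.
    `w` balances the two terms (assumed parameter).
    """
    unary = np.abs(np.asarray(orient_a) - np.asarray(orient_b)).mean()
    binary = np.abs(np.asarray(angles_a) - np.asarray(angles_b)).mean()
    return w * unary + (1.0 - w) * binary
```

    Identical correspondences cost zero; any orientation or connection-angle mismatch raises the cost linearly.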

  20. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing a local sparse appearance model and a covariance pooling method. In the subsequent face recognition stage, with a novel template update strategy that incorporates incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  1. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem; however, the quality of the reconstructed image still needs to be improved. Although methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, the latter solved by our proposed split-Bregman-based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels; for the dataset with 32 channels, it is 2 times faster than ADMM. Copyright © 2017 Elsevier Inc. All rights reserved.
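    The Barzilai-Borwein step-size update mentioned above has a simple closed form. The sketch applies the first BB formula to plain gradient descent rather than to the full operator-splitting SPIRiT solver, which is far more involved.

```python
import numpy as np

def bb_step(x_prev, x_curr, g_prev, g_curr):
    """First Barzilai-Borwein step size.

    alpha = (s.s) / (s.y) with s = x_k - x_{k-1}, y = g_k - g_{k-1};
    falls back to 1.0 when the curvature estimate vanishes.
    """
    s = x_curr - x_prev
    y = g_curr - g_prev
    denom = s @ y
    return (s @ s) / denom if denom != 0 else 1.0

def gd_bb(grad, x0, n_iter=50, alpha0=0.1):
    """Gradient descent with BB step sizes (minimal illustration)."""
    x_prev = np.asarray(x0, float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev       # one fixed-step iteration to start
    for _ in range(n_iter):
        g = grad(x)
        alpha = bb_step(x_prev, x, g_prev, g)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```

    On an ill-conditioned quadratic, BB steps adapt to the local curvature and converge far faster than a single fixed step size would.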

  2. Prospective and retrospective high order eddy current mitigation for diffusion weighted echo planar imaging.

    PubMed

    Xu, Dan; Maier, Joseph K; King, Kevin F; Collick, Bruce D; Wu, Gaohong; Peters, Robert D; Hinks, R Scott

    2013-11-01

    The proposed method aims to reduce eddy current (EC) induced distortion in diffusion weighted echo planar imaging, without the need to perform further image coregistration between diffusion weighted and T2 images. These ECs typically have significant high order spatial components that cannot be compensated by preemphasis. High order ECs are first calibrated at the system level in a protocol-independent fashion. The resulting amplitudes and time constants of the high order ECs can then be used to calculate imaging-protocol-specific corrections. A combined prospective and retrospective approach is proposed to apply the correction during data acquisition and image reconstruction. Various phantom, brain, body, and whole body diffusion weighted images are acquired with and without the proposed method. Significantly reduced image distortion and misregistration are consistently seen in images acquired with the proposed method compared with images acquired without it. The proposed method is a powerful (e.g., effective at 48 cm field of view and 30 cm slice coverage) and flexible (e.g., compatible with other image enhancements and arbitrary scan planes) technique to correct high order EC induced distortion and misregistration for various diffusion weighted echo planar imaging applications, without the need for further image post-processing, a protocol-dependent prescan, or a sacrifice in signal-to-noise ratio. Copyright © 2013 Wiley Periodicals, Inc.

  3. Design of optical seven-segment decoder using Pockel's effect inside lithium niobate-based waveguide

    NASA Astrophysics Data System (ADS)

    Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep

    2017-01-01

    A seven-segment decoder is a device that routes digital information from many inputs to many outputs; here it is realized optically using 11 Mach-Zehnder interferometers (MZIs). The circuit layout maps the electrical decoding logic onto an optical logic circuit analyzed with the beam propagation method (BPM). The seven-segment decoder is proposed using the electro-optic (Pockels) effect inside lithium niobate-based MZIs, whose structures are able to switch an optical signal to a desired output port. A mathematical description of the proposed device is provided, and the design is verified using the BPM.
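    The decoding logic itself, which BCD digit lights which of segments a-g, is a small truth table. The sketch below shows only this logic; the paper realizes the same mapping optically with MZI switches rather than in software.

```python
# Standard seven-segment patterns (segments a-g) for BCD digits 0-9.
SEGMENTS = {
    0: "abcdef", 1: "bc",     2: "abdeg", 3: "abcdg", 4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",   8: "abcdefg", 9: "abcdfg",
}

def decode(digit):
    """Return the set of lit segments for a BCD digit (0-9)."""
    if digit not in SEGMENTS:
        raise ValueError("BCD digit must be 0-9")
    return set(SEGMENTS[digit])
```

    For instance, digit 8 lights all seven segments while digit 1 lights only the two right-hand ones; the optical decoder steers light to exactly these output ports.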

  4. Achieving bifunctional cloak via combination of passive and active schemes

    NASA Astrophysics Data System (ADS)

    Lan, Chuwen; Bi, Ke; Gao, Zehua; Li, Bo; Zhou, Ji

    2016-11-01

    In this study, a simple and elegant approach to manipulating multiple physical fields simultaneously, through a combination of passive and active schemes, is proposed. In the design, one physical field is manipulated with a passive scheme while the other is manipulated with an active scheme. As a proof of concept, a bifunctional device is designed and fabricated that behaves as an electric and a thermal invisibility cloak simultaneously. The experimental results are found to agree well with the simulated ones, confirming the feasibility of our method. Furthermore, the proposed method can be extended to other multi-physics fields, which might lead to potential applications in the thermal, electric, and acoustic areas.

  5. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    Optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the largest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The third searches the vertical and horizontal regions-of-interest separately on the basis of the blood vessel structure and the neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database achieves 81.5% accuracy, while a higher accuracy of 99% is obtained for the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. These results demonstrate a high potential for this method to be used in retinal CAD systems.
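    The fusion step can be illustrated with a simple consensus rule. This is a hedged stand-in for the paper's intelligent fusion: the `radius` parameter, the clustering rule, and the confidence fallback are all assumptions for illustration only.

```python
import numpy as np

def fuse_locations(candidates, radius=30.0):
    """Fuse optic-disk location candidates from several detectors.

    Assumed rule: if two or more candidate points agree within
    `radius` pixels, return their mean; otherwise fall back to the
    highest-confidence candidate.
    candidates: list of ((x, y), confidence) pairs.
    """
    pts = np.array([p for p, _ in candidates], float)
    conf = np.array([c for _, c in candidates], float)
    best, best_size = None, 1
    for i in range(len(pts)):
        d = np.linalg.norm(pts - pts[i], axis=1)
        cluster = d < radius
        if cluster.sum() > best_size:
            best, best_size = pts[cluster].mean(axis=0), cluster.sum()
    if best is not None:
        return tuple(best)
    return tuple(pts[np.argmax(conf)])
```

    Two agreeing detectors outvote a confident outlier; when no detectors agree, the most confident one wins.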

  6. Faint Debris Detection by Particle Based Track-Before-Detect Method

    NASA Astrophysics Data System (ADS)

    Uetsuhara, M.; Ikoma, N.

    2014-09-01

    This study proposes a particle method to detect faint debris, hardly visible in a single frame, from an image sequence, based on the concept of track-before-detect (TBD). The most widely used detection approach is detect-before-track (DBT), which first detects target signals in a single frame by distinguishing intensity differences between foreground and background, and then associates the signals of each target between frames. DBT is capable of tracking bright targets but is limited: it must account for the presence of false signals, and it has difficulty recovering from false associations. TBD methods, in contrast, track targets without explicitly detecting their signals, and then evaluate the goodness of each track to obtain detection results. TBD therefore has an advantage over DBT in detecting weak signals near the background level of a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then manually select the true one from the candidates. To remove the significant drawbacks of brute-force search and a not-fully-automated process, this study proposes a faint debris detection algorithm based on a particle TBD method consisting of sequential updates of the target state and a heuristic search for the initial state. The state consists of the position, velocity direction and magnitude, and size of the debris in the image at a single frame. The sequential update is implemented by a particle filter (PF), an optimal filtering technique that requires an initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is utilized to search for this initial distribution: the EA iteratively applies propagation and likelihood evaluation of particles over the same image sequences, and the resulting set of particles is used as the initial distribution of the PF. This paper describes the algorithm of the proposed faint debris detection method and demonstrates its performance on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which should contain a sufficient number of faint debris images. The results indicate that the proposed method is capable of tracking faint debris with moderate computational cost at an operational level.
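    The sequential-update stage can be sketched as a bootstrap particle filter over an image sequence. The constant-velocity motion model and the intensity-as-likelihood weighting below are simplifying assumptions, the size component of the state is dropped, and the EA initial-state search is omitted.

```python
import numpy as np

def track_tbd(frames, init_particles, sigma_pos=1.0, seed=0):
    """Bootstrap particle filter over an image sequence (TBD sketch).

    Particles carry (x, y, vx, vy), are propagated by a constant
    velocity model with diffusion, and are weighted by the image
    intensity at the particle position, so weak signals accumulate
    evidence across frames instead of being thresholded per frame.
    Returns the per-frame weighted-mean position estimates.
    """
    rng = np.random.default_rng(seed)
    parts = np.array(init_particles, float)   # (N, 4)
    n = len(parts)
    estimates = []
    for frame in frames:
        # propagate: constant velocity + diffusion
        parts[:, :2] += parts[:, 2:]
        parts[:, :2] += rng.normal(0, sigma_pos, size=(n, 2))
        # weight by intensity (stand-in likelihood)
        xi = np.clip(parts[:, 0].round().astype(int), 0, frame.shape[1] - 1)
        yi = np.clip(parts[:, 1].round().astype(int), 0, frame.shape[0] - 1)
        w = frame[yi, xi] + 1e-12
        w /= w.sum()
        estimates.append((parts[:, :2] * w[:, None]).sum(axis=0))
        # multinomial resampling
        parts = parts[rng.choice(n, size=n, p=w)]
    return np.array(estimates)
```

    On synthetic frames with a dim Gaussian blob drifting at constant velocity, the particle cloud follows the blob even though no per-frame detection is ever made.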

  7. Standardless quantification by parameter optimization in electron probe microanalysis

    NASA Astrophysics Data System (ADS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-11-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA on a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively.
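    The core idea, fitting an analytical model to a measured spectrum by minimizing the quadratic differences, can be illustrated with a toy single-peak fit. POEMA optimizes many more physical parameters than this; here the nonlinear parameters (peak center and width) are grid-searched while the linear ones (amplitude and background) are solved in closed form.

```python
import numpy as np

def fit_peak(energy, counts, centers, widths):
    """Fit a Gaussian peak plus constant background to a spectrum.

    Minimizes sum((model - counts)^2) over candidate (center, width)
    pairs, solving amplitude and background by linear least squares.
    Returns the best-fitting parameters as a dict.
    """
    best = None
    for c in centers:
        for w in widths:
            g = np.exp(-((energy - c) ** 2) / (2 * w ** 2))
            A = np.column_stack([g, np.ones_like(energy)])
            coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
            resid = np.sum((A @ coef - counts) ** 2)
            if best is None or resid < best[0]:
                best = (resid, c, w, coef[0], coef[1])
    _, c, w, amp, bg = best
    return {"center": c, "width": w, "amplitude": amp, "background": bg}
```

    With noiseless synthetic counts the optimizer recovers the generating parameters exactly, which is a useful correctness check before fitting real spectra.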

  8. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology that aims at removing annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method produces full-frame videos by naturally filling in missing image parts using locally aligned image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer that naturally preserves the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.

  9. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from irises showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
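    The quantization-with-fragility-thresholds step can be sketched as follows. Scoring stability by mean response magnitude and taking the majority-sign bit are assumptions for illustration, not the authors' exact procedure; a masked Hamming distance then compares only the jointly reliable bits.

```python
import numpy as np

def quantize_with_fragility(responses, threshold):
    """Quantize filter responses to bits with a fragility mask.

    responses: (n_samples, n_features) real filter responses (e.g.
    Gabor) from several enrollment captures.  The sign of the mean
    response gives each bit; bits whose mean magnitude falls below the
    fragility `threshold` are masked out as unstable.
    Returns (bits, mask).
    """
    responses = np.asarray(responses, float)
    bits = (responses.mean(axis=0) >= 0).astype(int)   # majority sign
    stability = np.abs(responses).mean(axis=0)
    mask = stability >= threshold                      # True = reliable
    return bits, mask

def masked_hamming(bits_a, mask_a, bits_b, mask_b):
    """Fractional Hamming distance over the jointly reliable bits."""
    m = mask_a & mask_b
    if not m.any():
        return 1.0
    return float(np.count_nonzero(bits_a[m] != bits_b[m])) / m.sum()
```

    A feature with near-zero response across captures is flagged fragile and excluded, so it can no longer flip the comparison.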

  10. Classification of Hyperspectral Data Based on Guided Filtering and Random Forest

    NASA Astrophysics Data System (ADS)

    Ma, H.; Feng, W.; Cao, X.; Wang, L.

    2017-09-01

    Hyperspectral images usually consist of more than one hundred spectral bands, which have the potential to provide rich spatial and spectral information. However, the application of hyperspectral data remains challenging due to "the curse of dimensionality". In this context, many techniques that aim to make full use of both spatial and spectral information have been investigated. To preserve the geometrical information while using fewer spectral bands, we propose a novel method that combines principal components analysis (PCA), guided image filtering, and the random forest classifier (RF). In detail, PCA is first employed to reduce the dimension of the spectral bands. Second, the guided image filtering technique is introduced to smooth land objects while preserving their edges. Finally, the features are fed into the RF classifier. To illustrate the effectiveness of the method, we carry out experiments on the popular Indian Pines data set, collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Comparing the proposed method with methods using only PCA or only the guided image filter, we find that the proposed method performs better.
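
    The PCA-then-RF backbone of the pipeline can be sketched with scikit-learn on synthetic pixel spectra. The guided-filtering step is omitted here because it needs the true spatial layout of the scene; the data below are a synthetic stand-in for three land-cover classes, not the Indian Pines set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bands, n_per_class = 200, 300
# Synthetic stand-in for pixel spectra of three land-cover classes.
means = rng.normal(size=(3, n_bands))
X = np.vstack([m + 0.5 * rng.normal(size=(n_per_class, n_bands)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: PCA compresses the spectral dimension (the guided-filtering step,
# which needs true spatial layout, is omitted in this pixel-wise sketch).
pca = PCA(n_components=10).fit(Xtr)

# Step 2: random forest on the reduced features.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(pca.transform(Xtr), ytr)
acc = rf.score(pca.transform(Xte), yte)
```

    In the full method, the guided filter would be applied to the principal-component images before classification, smoothing within objects while keeping their edges.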

  11. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While the scalability advantage is kept, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over EKF as well. PMID:26287194
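
    The core idea of iterating the measurement update to reduce linearization error can be shown with a plain iterated EKF update on a range-only observation. This is a didactic sketch in covariance form, not the information-form SEIF equations; the beacon geometry and noise values are invented for illustration.

```python
import numpy as np

def iekf_update(x0, P, z, beacon, R, iters=5):
    """Iterated EKF measurement update for a range-only observation.

    Relinearizing h at the running estimate (rather than only at the prior)
    is the same idea ISEIF applies to the information-filter update.
    """
    x = x0.copy()
    for _ in range(iters):
        diff = x - beacon
        rng_pred = np.linalg.norm(diff)
        H = (diff / rng_pred).reshape(1, -1)            # Jacobian of h at x
        S = H @ P @ H.T + R
        K = P @ H.T / S                                 # Kalman gain (scalar S)
        # Standard IEKF form: innovation referenced to the prior mean.
        x = x0 + (K * (z - rng_pred - H @ (x0 - x))).ravel()
    P_new = (np.eye(len(x0)) - K @ H) @ P
    return x, P_new

beacon = np.array([10.0, 0.0])
truth = np.array([2.0, 1.0])
x0 = np.array([0.0, 0.0])                               # prior mean
P = np.eye(2) * 4.0
z = np.linalg.norm(truth - beacon)                      # noise-free range
x_it, _ = iekf_update(x0, P, z, beacon, np.array([[0.01]]))
```

    After the update, the predicted range at the estimate nearly matches the measurement, which a single linearization at a poor prior would not guarantee for strongly nonlinear observations.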

  12. Optical and electrical characterization methods of plasma-induced damage in silicon nitride films

    NASA Astrophysics Data System (ADS)

    Kuyama, Tomohiro; Eriguchi, Koji

    2018-06-01

    We propose evaluation methods for plasma-induced damage (PID) in silicon nitride (SiN) films. The formation of an oxide layer upon air exposure was identified for damaged SiN films by X-ray photoelectron spectroscopy (XPS). Bruggeman's effective medium approximation was employed for an optical model consisting of damaged and undamaged layers, which is applicable to in-line monitoring by spectroscopic ellipsometry (SE). The optical thickness of the damaged layer (an oxidized layer) increased after plasma exposure, which was consistent with the results obtained by diluted hydrofluoric acid (DHF) wet etching. The change in the conduction band edge of the damaged SiN films was inferred from two electrical techniques, i.e., current-voltage (I-V) measurement and a time-dependent dielectric breakdown (TDDB) test with constant voltage stress. The proposed techniques can be used for identifying plasma-induced structural changes in SiN films, which are widely used as etch-protecting layers.

  13. A Method of Retrospective Computerized System Validation for Drug Manufacturing Software Considering Modifications

    NASA Astrophysics Data System (ADS)

    Takahashi, Masakazu; Fukue, Yoshinori

    This paper proposes a Retrospective Computerized System Validation (RCSV) method for Drug Manufacturing Software (DMSW) that takes software modification into account. Because DMSW used for quality management and facility control has a major impact on drug quality, regulatory agencies require proof of the adequacy of DMSW functions and performance, based on development documents and test results. In particular, the work of demonstrating the adequacy of previously developed DMSW based on existing documents and operational records is called RCSV. When modifying DMSW that had already undergone RCSV, it was difficult to secure consistency between the development documents and test results for the modified DMSW parts and the existing documents and operational records for the non-modified parts, which made conducting RCSV difficult. In this paper, we propose (a) a definition of the document architecture, (b) a definition of the descriptive items and levels in the documents, (c) management of design information using a database, (d) exhaustive testing, and (e) an integrated RCSV procedure. As a result, we could conduct adequate RCSV while securing consistency.

  14. A comparative study of progressive versus successive spectrophotometric resolution techniques applied for pharmaceutical ternary mixtures

    NASA Astrophysics Data System (ADS)

    Saleh, Sarah S.; Lotfy, Hayam M.; Hassan, Nagiba Y.; Salem, Hesham

    2014-11-01

    This work presents a comparative study of a novel progressive spectrophotometric resolution technique, the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques: successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR), and mean centering of ratio spectra (MCR). All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using a single divisor, where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. These methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied to the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between these methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision.

  15. A vision-based method for planar position measurement

    NASA Astrophysics Data System (ADS)

    Chen, Zong-Hao; Huang, Peisen S.

    2016-12-01

    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (XYθZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) method to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are performed to verify the feasibility of this method and to study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
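
    The phase-to-position principle can be sketched in a few lines: for a periodic pattern, the phase of the fundamental DFT bin is proportional to the translation, so subpixel shifts are read directly from the phase. The pattern, pitch, and image size below are invented for illustration; the error analysis and θZ estimation of the paper are omitted.

```python
import numpy as np

period = 16.0                                   # pattern pitch in pixels
N = 256
X, Y = np.meshgrid(np.arange(N), np.arange(N))

def pattern(dx, dy):
    """2-D periodic pattern translated by (dx, dy) pixels (can be subpixel)."""
    return (np.cos(2 * np.pi * (X - dx) / period)
            + np.cos(2 * np.pi * (Y - dy) / period))

def estimate_shift(img):
    """Read the pattern phase at the fundamental frequency bins of the 2-D DFT."""
    F = np.fft.fft2(img)
    k = int(N / period)                          # fundamental bin index
    phase_x = np.angle(F[0, k])                  # x-cosine term lives in row 0
    phase_y = np.angle(F[k, 0])                  # y-cosine term lives in column 0
    # The phase of cos(2*pi*(x - dx)/p) at bin k is -2*pi*dx/p.
    return (-phase_x * period / (2 * np.pi),
            -phase_y * period / (2 * np.pi))

dx_est, dy_est = estimate_shift(pattern(3.25, -1.5))
```

    Because the pattern frequency falls exactly on a DFT bin here, the recovered displacement is exact; in practice window leakage causes the phase estimation error the paper analyzes.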

  16. Associative memory for online learning in noisy environments using self-organizing incremental neural network.

    PubMed

    Sudo, Akihito; Sato, Akihiro; Hasegawa, Osamu

    2009-06-01

    Associative memory operating in a real environment must perform well in online incremental learning and be robust to noisy data because noisy associative patterns are presented sequentially in a real environment. We propose a novel associative memory that satisfies these requirements. Using the proposed method, new associative pairs that are presented sequentially can be learned accurately without forgetting previously learned patterns. The memory size of the proposed method increases adaptively with the learned patterns. Therefore, it suffers neither redundancy nor insufficiency of memory size, even in an environment in which the maximum number of associative pairs to be presented is unknown before learning. Noisy inputs in real environments can be classified into two types: noise-added original patterns and faultily presented random patterns. The proposed method deals with both types of noise; to our knowledge, no conventional associative memory addresses both. The proposed associative memory performs as a bidirectional one-to-many or many-to-one associative memory and deals not only with bipolar data but also with real-valued data. Results demonstrate that the proposed method's features are important for application to an intelligent robot operating in a real environment. The originality of our work consists of two points: employing a growing self-organizing network for an associative memory, and discussing which features are necessary for an associative memory for an intelligent robot and proposing an associative memory that satisfies those requirements.

  17. Iterative methods used in overlap astrometric reduction techniques do not always converge

    NASA Astrophysics Data System (ADS)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge, and we exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of Wang's method for the solution of astrometric problems using overlap techniques.
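
    The divergence phenomenon is easy to reproduce: Gauss-Seidel converges for diagonally dominant systems but can blow up otherwise. The 2x2 matrices below are textbook illustrations, not the paper's astrometric normal equations.

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=50):
    """Plain Gauss-Seidel sweeps; deliberately no convergence safeguard."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]     # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x

b = np.array([1.0, 1.0])
x0 = np.zeros(2)

# Diagonally dominant: Gauss-Seidel converges to the true solution.
A_good = np.array([[4.0, 1.0], [1.0, 3.0]])
x_good = gauss_seidel(A_good, b, x0)

# Symmetric positive matrix that is far from diagonally dominant:
# the iterates grow by roughly a factor of 4 per sweep and diverge.
A_bad = np.array([[1.0, 2.0], [2.0, 1.0]])
x_bad = gauss_seidel(A_bad, b, x0, iters=20)
```

    The second system is still solvable by direct elimination; it is only the Gauss-Seidel iteration that fails, which is precisely the trap the paper warns about for overlap normal equations.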

  18. PI Passivity-Based Control for Maximum Power Extraction of a Wind Energy System with Guaranteed Stability Properties

    NASA Astrophysics Data System (ADS)

    Cisneros, Rafael; Gao, Rui; Ortega, Romeo; Husain, Iqbal

    2016-10-01

    The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load, and a constant voltage source that forms the DC bus. We propose a linear PI controller, based on passivity, whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice as they are easier to tune and simpler than other existing model-based methods. Switching-level simulations have been performed to assess the performance of the proposed controller.

  19. Photonic crystal based 1-bit full-adder optical circuit by using ring resonators in a nonlinear structure

    NASA Astrophysics Data System (ADS)

    Alipour-Banaei, Hamed; Seif-Dargahi, Hamed

    2017-05-01

    In this paper we propose a novel design for realizing an all-optical 1-bit full-adder based on photonic crystals. The proposed structure is realized by cascading two optical 1-bit half-adders. The final structure consists of eight optical waveguides and two nonlinear resonant rings, created inside a rod-type two-dimensional photonic crystal with a square lattice. The structure has "X", "Y" and "Z" as input ports and "SUM" and "CARRY" as output ports. The performance and functionality of the proposed structure were validated by means of the finite-difference time-domain method.
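
    The cascading logic can be verified abstractly: a full adder is exactly two half adders plus an OR of their carries. This is a purely logical model of the design; the optical behavior (waveguides, nonlinear rings, FDTD) is not simulated here.

```python
def half_adder(a, b):
    """One half-adder modelled logically: returns (sum, carry)."""
    return a ^ b, a & b

def full_adder(x, y, z):
    """Cascade two half-adders, as in the proposed photonic layout."""
    s1, c1 = half_adder(x, y)       # first half-adder on inputs X, Y
    s, c2 = half_adder(s1, z)       # second half-adder folds in Z
    return s, c1 | c2               # CARRY is the OR of both carries

# Full truth table of the 1-bit full-adder.
table = [(x, y, z, *full_adder(x, y, z))
         for x in (0, 1) for y in (0, 1) for z in (0, 1)]
```

    The two carries can never both be 1 for 1-bit inputs, so the OR correctly produces the carry-out in every case.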

  20. 3-D ultrasound volume reconstruction using the direct frame interpolation method.

    PubMed

    Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin

    2010-11-01

    A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. 
The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
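
    The first-order case of direct frame interpolation is linear blending between two adjacent frames. The sketch below assumes the frames are already placed and aligned by the tracked probe poses; the higher interpolation orders and the 3-D voxel placement of the published method are omitted.

```python
import numpy as np

def interpolate_frames(f0, f1, n_intermediate):
    """Linear DFI between two aligned B-mode frames (first-order case).

    Returns `n_intermediate` evenly spaced intermediate frames; the volume
    is then filled by the original frames plus these constructed ones, so
    no separate hole-filling step is required.
    """
    out = []
    for k in range(1, n_intermediate + 1):
        t = k / (n_intermediate + 1)          # fractional position in [0, 1]
        out.append((1.0 - t) * f0 + t * f1)
    return out

# Toy frames: constant images, so the interpolants are easy to predict.
f0 = np.zeros((4, 4))
f1 = np.ones((4, 4))
mids = interpolate_frames(f0, f1, 3)          # frames at t = 0.25, 0.5, 0.75
```

    Higher-order variants would fit a polynomial through more than two neighboring frames per pixel instead of this straight-line blend.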

  1. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    PubMed

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

  2. A thermodynamically consistent discontinuous Galerkin formulation for interface separation

    DOE PAGES

    Versino, Daniele; Mourad, Hashem M.; Dávila, Carlos G.; ...

    2015-07-31

    Our paper describes the formulation of an interface damage model, based on the discontinuous Galerkin (DG) method, for the simulation of failure and crack propagation in laminated structures. The DG formulation avoids common difficulties associated with cohesive elements. Specifically, it does not introduce any artificial interfacial compliance and, in explicit dynamic analysis, it leads to a stable time increment size which is unaffected by the presence of stiff massless interfaces. The proposed method is implemented in a finite element setting. Convergence and accuracy are demonstrated in Mode I and mixed-mode delamination in both static and dynamic analyses. Significantly, numerical results obtained using the proposed interface model are found to be independent of the value of the penalty factor that characterizes the DG formulation. By contrast, numerical results obtained using a classical cohesive method are found to depend on the cohesive penalty stiffnesses. Because of this advantage, the proposed approach yields more accurate predictions of crack propagation under mixed-mode fracture. Furthermore, in explicit dynamic analysis, the stable time increment size calculated with the proposed method is found to be an order of magnitude larger than the maximum allowable value for classical cohesive elements.

  3. Automatic Detection of Galaxy Type From Datasets of Galaxies Image Based on Image Retrieval Approach.

    PubMed

    Abd El Aziz, Mohamed; Selim, I M; Xiong, Shengwu

    2017-06-30

    This paper presents a new approach for the automatic detection of galaxy morphology from datasets based on an image-retrieval approach. Several classification methods have been proposed to detect galaxy types within an image. However, in some situations the aim is not only to determine the type of galaxy within the queried image, but also to find the images most similar to the query. Therefore, this paper proposes an image-retrieval method that detects the type of galaxy within an image and returns the most similar images. The proposed method consists of two stages: in the first stage, a set of features is extracted based on shape, color and texture descriptors, and a binary sine cosine algorithm then selects the most relevant features. In the second stage, the similarity between the features of the queried galaxy image and the features of the other galaxy images is computed. Our experiments were performed using the EFIGI catalogue, which contains about 5000 galaxy images of different types (edge-on spiral, spiral, elliptical and irregular). We demonstrate that our proposed approach performs better than the particle swarm optimization (PSO) and genetic algorithm (GA) methods.
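
    The second (retrieval) stage amounts to ranking gallery feature vectors by similarity to the query. The sketch below uses cosine similarity as an assumed similarity measure and random vectors as stand-ins for the shape/color/texture descriptors after sine-cosine feature selection.

```python
import numpy as np

def top_k_similar(query_feat, db_feats, k=3):
    """Rank a gallery by cosine similarity to the query feature vector."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                               # cosine similarity per item
    order = np.argsort(-sims)[:k]               # indices of the k best matches
    return order, sims[order]

rng = np.random.default_rng(2)
db = rng.normal(size=(100, 32))                 # 100 gallery feature vectors
query = db[42] + 0.01 * rng.normal(size=32)     # near-duplicate of item 42
idx, scores = top_k_similar(query, db)
```

    In the full system, the galaxy type of the query is then read off from the labels of the top-ranked matches.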

  4. Adaptive Granulation-Based Prediction for Energy System of Steel Industry.

    PubMed

    Wang, Tianyu; Han, Zhongyang; Zhao, Jun; Wang, Wei

    2018-01-01

    The flow variation tendency of byproduct gas plays a crucial role in energy scheduling in the steel industry. An accurate prediction of its future trends is significantly beneficial for the economic profits of a steel enterprise. In this paper, a long-term prediction model for the energy system is proposed by providing an adaptive granulation-based method that considers the production semantics involved in the fluctuation tendency of the energy data and partitions the data into a series of information granules. To fully reflect the data characteristics of the formed unequal-length temporal granules, a 3-D feature space consisting of timespan, amplitude and linetype is designed as linguistic descriptors. In particular, a collaborative-conditional fuzzy clustering method is proposed to granularize the tendency-based feature descriptors and specifically measure the amplitude variation of industrial data, which plays a dominant role in the feature space. To quantify the performance of the proposed method, a series of real-world industrial data from the energy data center of a steel plant is employed in comparative experiments. The experimental results demonstrate that the proposed method satisfies the requirements of practically viable prediction.

  5. Measurement of vibration using phase only correlation technique

    NASA Astrophysics Data System (ADS)

    Balachandar, S.; Vipin, K.

    2017-08-01

    A novel method for the measurement of vibration is proposed and demonstrated. The proposed experiment is based on laser triangulation and consists of a line laser, the object under test and a high-speed camera remotely controlled by software. The experiment involves launching a line-laser probe beam perpendicular to the axis of the vibrating object. The reflected probe beam is recorded by the high-speed camera. The dynamic position of the laser line in the camera plane is governed by the magnitude and frequency of the vibrating test object. Using the phase correlation technique, the maximum distance travelled by the probe beam in the CCD plane is measured in pixels using MATLAB. The actual displacement of the object in mm is obtained by calibration. Using the displacement data over time, other vibration-related quantities such as acceleration, velocity and frequency are evaluated. Preliminary results of the proposed method are reported for accelerations from 1 g to 3 g and frequencies from 6 Hz to 26 Hz. The results closely match theoretical values. The advantage of the proposed method is that it is non-destructive, and using the phase correlation algorithm, subpixel displacement in the CCD plane can be measured with high accuracy.
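
    Phase correlation itself is compact enough to sketch: normalize the cross-power spectrum of two frames and locate the peak of its inverse transform. The random image and circular shift below are a stand-in for consecutive camera frames of the laser line; the subpixel refinement used in the experiment is omitted, so this version returns integer shifts.

```python
import numpy as np

def phase_correlation_shift(img0, img1, eps=1e-12):
    """Integer-pixel shift of img0 relative to img1 via the cross-power spectrum."""
    F0, F1 = np.fft.fft2(img0), np.fft.fft2(img1)
    cross = F0 * np.conj(F1)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak indices to signed shifts.
    if dy > img0.shape[0] // 2:
        dy -= img0.shape[0]
    if dx > img0.shape[1] // 2:
        dx -= img0.shape[1]
    return dy, dx

rng = np.random.default_rng(3)
base = rng.random((64, 64))                       # frame with the laser line at rest
shifted = np.roll(base, shift=(5, -7), axis=(0, 1))  # line moved 5 px down, 7 px left
dy, dx = phase_correlation_shift(shifted, base)
```

    Fitting a local model around the correlation peak (or using matrix-multiplication DFT refinement) recovers the subpixel part of the displacement.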

  6. Recognition of building group patterns in topographic maps based on graph partitioning and random forest

    NASA Astrophysics Data System (ADS)

    He, Xianjin; Zhang, Xinchang; Xin, Qinchuan

    2018-02-01

    Recognition of building group patterns (i.e., the arrangement and form exhibited by a collection of buildings at a given mapping scale) is important to the understanding and modeling of geographic space and is hence essential to a wide range of downstream applications such as map generalization. Most of the existing methods develop rigid rules based on the topographic relationships between building pairs to identify building group patterns and thus their applications are often limited. This study proposes a method to identify a variety of building group patterns that allow for map generalization. The method first identifies building group patterns from potential building clusters based on a machine-learning algorithm and further partitions the building clusters with no recognized patterns based on the graph partitioning method. The proposed method is applied to the datasets of three cities that are representative of the complex urban environment in Southern China. Assessment of the results based on the reference data suggests that the proposed method is able to recognize both regular (e.g., the collinear, curvilinear, and rectangular patterns) and irregular (e.g., the L-shaped, H-shaped, and high-density patterns) building group patterns well, given that the correctness values are consistently nearly 90% and the completeness values are all above 91% for three study areas. The proposed method shows promise in automated recognition of building group patterns that allows for map generalization.

  7. Medical image segmentation by combining graph cuts and oriented active appearance models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua

    2012-04-01

    In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) an overall segmentation accuracy of true positive volume fraction TPVF > 94.3% can be achieved together with a small false positive volume fraction; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.

  8. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by the k-nearest neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can obtain satisfactory results even if the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
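
    The metric-learning-plus-kNN core of the pipeline maps directly onto scikit-learn's NeighborhoodComponentsAnalysis followed by KNeighborsClassifier. The synthetic two-class features below stand in for per-pixel tumor/background feature vectors; the graph-cut optimization and morphological post-processing stages are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import (KNeighborsClassifier,
                               NeighborhoodComponentsAnalysis)
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for per-pixel feature vectors: background vs tumor.
n = 400
bg = rng.normal(0.0, 1.0, size=(n, 10))
tumor = rng.normal(0.8, 1.0, size=(n, 10))
X = np.vstack([bg, tumor])
y = np.array([0] * n + [1] * n)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# Learn a distance metric with NCA, then classify with kNN under that metric,
# mirroring the paper's learned-metric + kNN probability step.
clf = Pipeline([("nca", NeighborhoodComponentsAnalysis(random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=15))])
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

    In the full method, `predict_proba` of the kNN step would supply the foreground/background probabilities that feed the graph-cut cost function.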

  9. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed, using a time-horizon analysis, a distance-based scoring, and consideration of different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
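
    A minimal sketch of the optimization loop, under invented assumptions: leak signatures are reduced to binary sensitivity patterns, two leaks count as non-isolable when their patterns coincide on the chosen sensor columns, and a toy GA (elitism, union crossover, single-gene mutation) searches for the sensor subset. The real formulation uses residual sensitivity magnitudes and the paper's specific isolability criteria.

```python
import numpy as np

rng = np.random.default_rng(4)
n_leaks, n_nodes, n_sensors = 30, 12, 3
# Stand-in binary sensitivity signatures: which candidate node "sees" which leak.
S = rng.integers(0, 2, size=(n_leaks, n_nodes))

def non_isolable_pairs(sensors):
    """Leak pairs whose signatures coincide on the chosen sensor columns."""
    sig = S[:, sensors]
    count = 0
    for i in range(n_leaks):
        for j in range(i + 1, n_leaks):
            if np.array_equal(sig[i], sig[j]):
                count += 1
    return count

def ga_place(pop_size=40, gens=60):
    pop = [rng.choice(n_nodes, n_sensors, replace=False) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=non_isolable_pairs)          # fitness: fewer ambiguous pairs
        elite = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a = elite[rng.integers(len(elite))]
            b = elite[rng.integers(len(elite))]
            genes = np.union1d(a, b)               # crossover: mix parent genes
            child = rng.choice(genes, n_sensors, replace=False)
            if rng.random() < 0.3:                 # mutation: swap in a new node
                newg = int(rng.integers(n_nodes))
                if newg not in child:
                    child[rng.integers(n_sensors)] = newg
            children.append(child)
        pop = elite + children
    best = min(pop, key=non_isolable_pairs)
    return best, non_isolable_pairs(best)

best, cost = ga_place()
```

    With 3 binary sensors only 8 distinct signatures exist, so some of the 30 leaks are necessarily non-isolable; the GA minimizes how many such ambiguous pairs remain.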

  10. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed, using a time-horizon analysis, a distance-based scoring, and consideration of different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  11. New agreement measures based on survival processes

    PubMed Central

    Guo, Ying; Li, Ruosha; Peng, Limin; Manatunga, Amita K.

    2013-01-01

    Summary The need to assess agreement arises in many scenarios in biomedical sciences when measurements were taken by different methods on the same subjects. When the endpoints are survival outcomes, the study of agreement becomes more challenging given the special characteristics of time-to-event data. In this paper, we propose a new framework for assessing agreement based on survival processes that can be viewed as a natural representation of time-to-event outcomes. Our new agreement measure is formulated as the chance-corrected concordance between survival processes. It provides a new perspective for studying the relationship between correlated survival outcomes and offers an appealing interpretation as the agreement between survival times on the absolute distance scale. We provide a multivariate extension of the proposed agreement measure for multiple methods. Furthermore, the new framework enables a natural extension to evaluate time-dependent agreement structure. We develop nonparametric estimation of the proposed new agreement measures. Our estimators are shown to be strongly consistent and asymptotically normal. We evaluate the performance of the proposed estimators through simulation studies and then illustrate the methods using a prostate cancer data example. PMID:23844617

  12. Quantitative ultrasound molecular imaging by modeling the binding kinetics of targeted contrast agent

    NASA Astrophysics Data System (ADS)

    Turco, Simona; Tardy, Isabelle; Frinking, Peter; Wijkstra, Hessel; Mischi, Massimo

    2017-03-01

    Ultrasound molecular imaging (USMI) is an emerging technique to monitor diseases at the molecular level through the use of novel targeted ultrasound contrast agents (tUCA). These consist of microbubbles functionalized with targeting ligands with high affinity for molecular markers of specific disease processes, such as cancer-related angiogenesis. Among the molecular markers of angiogenesis, the vascular endothelial growth factor receptor 2 (VEGFR2) is recognized to play a major role. In response, the clinical-grade tUCA BR55 was recently developed, consisting of VEGFR2-targeting microbubbles which can flow through the entire circulation and accumulate where VEGFR2 is over-expressed, thus causing selective enhancement in areas of active angiogenesis. Discrimination between bound and free microbubbles is crucial to assess cancer angiogenesis. Currently, this is done non-quantitatively by looking at the late enhancement, about 10 min after injection, or by calculation of the differential targeted enhancement, which requires the application of a high-pressure ultrasound (US) burst to destroy all the microbubbles in the acoustic field and isolate the signal coming only from bound microbubbles. In this work, we propose a novel method based on mathematical modeling of the binding kinetics during the tUCA first pass, thus reducing the acquisition time and removing the need for a destructive US burst. Fitting time-intensity curves measured with USMI by the proposed model enables the assessment of cancer angiogenesis at both the vascular and molecular levels. This is achieved by estimating quantitative parameters related to the microvascular architecture and microbubble binding. The proposed method was tested in 11 prostate-tumor-bearing rats by performing USMI after injection of BR55, and showed good agreement with current USMI methods. The novel information provided by the proposed method, possibly combined with the current non-quantitative methods, may bring deeper insight into cancer angiogenesis, and thus potentially improve cancer diagnosis and management.

  13. Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario

    NASA Astrophysics Data System (ADS)

    Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.

    1997-06-01

    In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, and the signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs; however, the noise subspace estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) Noise subspace estimation is done by QR decomposition of the difference matrix, which is formed from the data covariance matrix. Thus, compared to standard eigen-decomposition based methods, which require O(N³) computations, the proposed method requires only O(N²) computations. (2) The noise subspace is updated by updating the QR decomposition. (3) The proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
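
    A minimal sketch of the difference-matrix idea, not the paper's adaptive algorithm: assuming a toy uniform linear array with two sources and identical spatially correlated noise in two covariance snapshots, the correlated noise cancels in the difference matrix, and the trailing columns of its QR factor serve as a noise-subspace estimate for a MUSIC-style spectrum. The incremental QR update is omitted here.

```python
import numpy as np

N = 8                                 # ULA sensors
deg = np.pi / 180.0
doas = np.array([10.0, 40.0])         # true DOAs in degrees (illustrative)

def steer(theta_deg):
    # half-wavelength ULA steering vector
    return np.exp(-1j * np.pi * np.sin(theta_deg * deg) * np.arange(N))

A = np.column_stack([steer(t) for t in doas])

# Spatially correlated noise, identical in both covariance snapshots
rng = np.random.default_rng(0)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Qn = B @ B.conj().T

R1 = A @ np.diag([2.0, 3.0]) @ A.conj().T + 0.5 * Qn
R2 = A @ np.diag([1.0, 1.5]) @ A.conj().T + 0.5 * Qn

D = R1 - R2          # correlated noise cancels; rank = number of sources
Q, _ = np.linalg.qr(D)
En = Q[:, len(doas):]   # columns orthogonal to the signal subspace

def music(theta_deg):
    """MUSIC-style pseudospectrum from the QR-derived noise subspace."""
    a = steer(theta_deg)
    return 1.0 / np.linalg.norm(En.conj().T @ a) ** 2

# The pseudospectrum should peak near the true DOAs
print(music(10.0), music(40.0), music(-30.0))
```

In the paper's setting the QR factorization is updated recursively as new snapshots arrive instead of being recomputed, which is where the claimed cost savings over repeated eigendecomposition come from.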

  14. An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value

    NASA Astrophysics Data System (ADS)

    Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu

    2018-03-01

    Blind image restoration algorithms usually produce ringing that is most visible at edges. The ringing phenomenon is mainly affected by noise, by the type of restoration algorithm, and by errors in blur kernel estimation during restoration. Based on the physical mechanism of ringing, a method for evaluating ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the restored image and computes weighted statistics of the regional gradient values. With weights set through multiple experiments, edge information is used to characterize edge details, determine the weights, and quantify the severity of the ringing effect, yielding an evaluation method for the ringing caused by blind restoration. The experimental results show that the method can effectively evaluate the ringing effect in images restored by different restoration algorithms and with different restoration parameters. The evaluation results are consistent with visual assessment.

  15. Traffic Sign Detection Based on Biologically Visual Mechanism

    NASA Astrophysics Data System (ADS)

    Hu, X.; Zhu, X.; Li, D.

    2012-07-01

    TSR (traffic sign recognition) is an important problem in ITS (intelligent transportation systems) and is receiving increasing attention for driver-assistance systems, unmanned vehicles, etc. TSR consists of two steps, detection and recognition, and this paper describes a new traffic sign detection method. Because the design principles of traffic signs comply with the visual attention mechanisms of humans, we propose a detection method based on visual attention. In our method, the whole scene is first analyzed by a visual attention model to find candidate areas where traffic signs might be located. These candidate areas are then analyzed according to the shape characteristics of traffic signs to detect the signs. In traffic sign detection experiments, the results show that the proposed method is more effective and robust than other existing saliency detection methods.

  16. Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.

    PubMed

    Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik

    2011-01-01

    Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.

  17. Variational method for calculating the binding energy of the base state of an impurity D- centered on a quantum dot of GaAs-Ga1-xAlxAs

    NASA Astrophysics Data System (ADS)

    Durán-Flórez, F.; Caicedo, L. C.; Gonzalez, J. E.

    2018-04-01

    In quantum mechanics it is very difficult to obtain exact solutions; therefore, it is necessary to resort to tools and methods that facilitate the calculation of approximate solutions of these systems. One such tool is the variational method, which consists of proposing a wave function that depends on several parameters, which are then adjusted to approach the exact solution. Authors in the past have performed calculations applying this method using exponential and Gaussian orbital functions with linear and quadratic correlation factors. In this paper, a Gaussian function with a linear correlation factor is proposed for the calculation of the binding energy of a D⁻ impurity centered in a quantum dot of radius r; the Gaussian function depends on the radius of the quantum dot.
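
    The variational recipe can be illustrated on a standard textbook example rather than the D⁻ quantum-dot problem treated in the paper: for a hydrogen-like atom with a Gaussian trial function ψ(r) = exp(-a r²), the expected energy in atomic units is E(a) = (3/2)a − 2√(2a/π), and a one-parameter scan recovers the known optimum a = 8/(9π), E = −4/(3π) ≈ −0.4244 (versus the exact −0.5, showing the limitation of a single Gaussian).

```python
import math

def energy(a):
    """<H> for the Gaussian trial psi(r) = exp(-a r^2), hydrogen atom, atomic units:
    kinetic term (3/2)a, Coulomb term -2*sqrt(2a/pi)."""
    return 1.5 * a - 2.0 * math.sqrt(2.0 * a / math.pi)

# Crude scan over the variational parameter a in (0, 2]
best_a = min((k * 1e-3 for k in range(1, 2001)), key=energy)
print(best_a, energy(best_a))
# analytic optimum: a = 8/(9*pi) ~ 0.283, E = -4/(3*pi) ~ -0.4244
```

The paper's calculation follows the same logic but with a Gaussian trial function including a linear correlation factor and the quantum-dot confinement, so its energy functional and parameters differ from this sketch.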

  18. Theoretical analysis of exponential transversal method of lines for the diffusion equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salazar, A.; Raydan, M.; Campo, A.

    1996-12-31

    Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method exhibits a very small truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.

  19. Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network

    NASA Astrophysics Data System (ADS)

    Nasution, T. H.; Andayani, U.

    2017-03-01

    Roasted coffee beans have characteristics that distinguish the different roast levels; however, some people cannot recognize these levels. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of collecting image data through image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The decimal-scaled feature values become the input for classification with the backpropagation neural network, which we use to recognize the coffee bean roast levels. The results showed that the proposed method is able to identify the coffee bean roast level with an accuracy of 97.5%.
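
    A minimal sketch of the GLCM-plus-decimal-scaling part of the pipeline, with a hand-rolled co-occurrence matrix and three common GLCM features (contrast, energy, homogeneity). The image, offset, and feature choices here are illustrative assumptions, not the paper's exact setup, and the neural network classifier is omitted.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset (toy version)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def features(g):
    """Three common GLCM texture features."""
    i, j = np.indices(g.shape)
    contrast = np.sum((i - j) ** 2 * g)
    energy = np.sum(g ** 2)
    homogeneity = np.sum(g / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def decimal_scaling(v):
    """Divide by the smallest power of 10 that brings every magnitude below 1."""
    j = int(np.ceil(np.log10(np.abs(v).max() + 1e-12)))
    return v / 10.0 ** max(j, 0)

rng = np.random.default_rng(1)
img = rng.integers(0, 8, size=(32, 32))   # stand-in for a quantized bean image
f = decimal_scaling(features(glcm(img)))
print(f)
```

In practice one would quantize real grayscale bean images to a small number of levels, average the GLCM over several offsets/angles, and feed the scaled feature vector to the classifier.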

  20. All-dielectric perforated metamaterials with toroidal dipolar response (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Stenishchev, Ivan; Basharin, Alexey A.

    2017-05-01

    We present metamaterials based on a dielectric slab perforated with identical cylindrical clusters of holes, which support a toroidal dipolar response due to Mie resonances in each hole. Note that the proposed metamaterial is technologically simple to fabricate in the optical frequency range. The metamaterial can be fabricated by several methods: for instance, molecular beam epitaxy may be applied for the deposition of Si or GaAs layers, which have a permittivity close to 16, after which nanometer/micrometer holes are perforated by focused ion beam milling or laser cutting. The fundamental difference of the proposed metamaterial is its technological fabrication process. Classical all-dielectric optical metamaterials consist of nano-spheres or nano-discs, which are complicated to fabricate, while our suggested metamaterials are a promising prototype for various optical/THz all-dielectric devices such as sensors and nano-antenna elements for nanophotonics.

  1. Improving the complementary methods to estimate evapotranspiration under diverse climatic and physical conditions

    NASA Astrophysics Data System (ADS)

    Anayah, F. M.; Kaluarachchi, J. J.

    2014-06-01

    Reliable estimation of evapotranspiration (ET) is important for the purpose of water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical in estimating regional ET using meteorological data only. However, prior studies have found limitations in these methods, especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET in contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than those of recent studies using data-intensive, classical methods. The average root mean square error (RMSE), mean absolute bias (BIAS) and R² (coefficient of determination) across the 34 global sites were 20.57 mm month⁻¹, 10.55 mm month⁻¹ and 0.64, respectively. The proposed model represents a step toward predicting ET in large river basins with limited data and no need for calibration.
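
    The reported error metrics are standard and can be computed as follows; the monthly ET values below are hypothetical, for illustration only.

```python
import numpy as np

def scores(obs, pred):
    """RMSE, mean absolute bias, and coefficient of determination (R^2),
    using the residual-sum-of-squares convention for R^2."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    bias = np.mean(np.abs(pred - obs))
    ss_res = np.sum((obs - pred) ** 2)
    r2 = 1.0 - ss_res / np.sum((obs - obs.mean()) ** 2)
    return rmse, bias, r2

# Hypothetical monthly ET values (mm/month), e.g. EC-measured vs. modeled
obs = [50.0, 60.0, 80.0, 100.0]
pred = [55.0, 58.0, 85.0, 95.0]
print(scores(obs, pred))
```

Note that some studies instead report R² as the squared correlation between observed and predicted values; the two conventions agree only for an unbiased linear fit, so the choice is worth stating when comparing against the paper's 0.64.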

  2. Method for Analysis of Dyadic Communication in Novels.

    ERIC Educational Resources Information Center

    DeHart, Florence E.

    A systematic approach for analysis of dyadic communication in literary works is proposed which is based on a work by Watzlawick, Beavin, and Jackson. This interdisciplinary methodology using behavioral science approaches to analyze literature consists primarily in studying relationship aspects of dyadic communication, as differentiated from…

  3. Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models

    PubMed Central

    Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua

    2017-01-01

    In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy and segment the organs slice by slice via the multi-object OAAM method. For the segmentation part, a 3D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also on the MICCAI 2007 grand challenge liver segmentation training dataset. The results show the following: (a) An overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3% and false positive volume fraction (FPVF) < 0.2% can be achieved. (b) The initialization performance can be improved by combining the AAM and LW. (c) The multi-object strategy greatly facilitates initialization. (d) Compared to the traditional 3D AAM method, the pseudo-3D OAAM method achieves comparable performance while running 12 times faster. (e) The performance of the proposed method is comparable to that of the state-of-the-art liver segmentation algorithm. The executable version of the 3D shape-constrained GC with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/. PMID:22311862

  4. Finding consistent patterns: A nonparametric approach for identifying differential expression in RNA-Seq data

    PubMed Central

    Li, Jun; Tibshirani, Robert

    2015-01-01

    We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
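
    One simple resampling device for unequal sequencing depths, given here as an illustrative sketch rather than the authors' exact procedure, is binomial thinning: down-sample each sample's counts to the shallowest depth so that columns become comparable without assuming a Poisson or negative binomial model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy count matrix: 6 genes x 4 samples, with deliberately unequal depths
counts = rng.poisson(lam=np.outer(np.arange(5, 11), [1.0, 2.0, 0.5, 1.5]))

depths = counts.sum(axis=0)          # per-sample sequencing depth
target = depths.min()

# Binomial thinning: keep each read independently with probability
# target/depth, so every column is down-sampled to a common expected depth.
thinned = rng.binomial(counts, target / depths)
print(depths, thinned.sum(axis=0))
```

In a nonparametric analysis this thinning would be repeated many times, with a rank-based statistic computed on each resampled matrix and the results averaged, so that no single random down-sampling drives the conclusions.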

  5. Efficient 3D multi-region prostate MRI segmentation using dual optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    Efficient and accurate extraction of the prostate, and in particular its clinically meaningful sub-regions, from 3D MR images is of great interest in image-guided prostate interventions and the diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach for simultaneously locating the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes prior knowledge of spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulting challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the investigated convex relaxed optimization problem with the region consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great advantages in numerics and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, assessed for inter- and intra-operator variability, demonstrate the promising performance of the proposed approach.

  6. Filter Bank Regularized Common Spatial Pattern Ensemble for Small Sample Motor Imagery Classification.

    PubMed

    Park, Sang-Hoon; Lee, David; Lee, Sang-Goog

    2018-02-01

    For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in the small-sample setting (SSS), because CSP depends on sample-based covariance. Because the active frequency band differs for each subject, it is also inconvenient to set the frequency range anew every time. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, we select features according to mutual information based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, we classify using an ensemble based on the features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP, respectively. Compared with the filter bank R-CSP ( , ), which is a parameter selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
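
    The CSP step at the core of the pipeline can be sketched as follows. This is a toy illustration, assuming synthetic 4-channel "trials" and plain CSP via whitening plus diagonalization; the filter bank, regularization, mutual-information feature selection, and ensemble stages of the proposed method are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def cov(trial):
    """Trace-normalized spatial covariance of one trial (channels x samples)."""
    c = trial @ trial.T
    return c / np.trace(c)

def csp_filters(trials_a, trials_b):
    """Plain CSP: whiten the composite covariance, then diagonalize class A.
    Rows of W are spatial filters sorted by class-A eigenvalue (ascending)."""
    Ca = np.mean([cov(t) for t in trials_a], axis=0)
    Cb = np.mean([cov(t) for t in trials_b], axis=0)
    d, U = np.linalg.eigh(Ca + Cb)
    P = U @ np.diag(d ** -0.5) @ U.T      # whitening transform
    _, V = np.linalg.eigh(P @ Ca @ P.T)
    return V.T @ P

def log_var_features(W, trial, k=1):
    """Log normalized variance on the k most discriminative filters per end."""
    Z = np.vstack([W[:k], W[-k:]]) @ trial
    v = Z.var(axis=1)
    return np.log(v / v.sum())

# Synthetic trials: class A is strong on channel 0, class B on channel 3
def make_trials(strong_ch, n=20, ch=4, samples=100):
    out = []
    for _ in range(n):
        x = rng.standard_normal((ch, samples))
        x[strong_ch] *= 4.0
        out.append(x)
    return out

A, B = make_trials(0), make_trials(3)
W = csp_filters(A, B)
fa = log_var_features(W, A[0])   # feature vector for a class-A trial
fb = log_var_features(W, B[0])   # feature vector for a class-B trial
print(fa, fb)
```

In the filter-bank variant, this extraction runs once per sub-band and the resulting per-band features are pooled before selection and classification; regularized CSP additionally shrinks the covariance estimates, which is what mitigates the small-sample degradation discussed above.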

  7. First-pass myocardial perfusion MRI with reduced subendocardial dark-rim artifact using optimized Cartesian sampling.

    PubMed

    Zhou, Zhengwei; Bi, Xiaoming; Wei, Janet; Yang, Hsin-Jung; Dharmakumar, Rohan; Arsanjani, Reza; Bairey Merz, C Noel; Li, Debiao; Sharif, Behzad

    2017-02-01

    The presence of subendocardial dark-rim artifact (DRA) remains an ongoing challenge in first-pass perfusion (FPP) cardiac magnetic resonance imaging (MRI). We propose a free-breathing FPP imaging scheme with Cartesian sampling that is optimized to minimize the DRA and readily enables near-instantaneous image reconstruction. The proposed FPP method suppresses Gibbs ringing effects, a major underlying factor for the DRA, by "shaping" the underlying point spread function through a two-step process: 1) an undersampled Cartesian sampling scheme that widens the k-space coverage compared to the conventional scheme; and 2) a modified parallel-imaging scheme that incorporates optimized apodization (k-space data filtering) to suppress Gibbs ringing effects. Healthy volunteer studies (n = 10) were performed to compare the proposed method against the conventional Cartesian technique, both using a saturation-recovery gradient-echo sequence at 3T. Furthermore, FPP imaging studies using the proposed method were performed in infarcted canines (n = 3) and in two symptomatic patients with suspected coronary microvascular dysfunction for assessment of myocardial hypoperfusion. The width of the DRA and the number of DRA-affected myocardial segments were significantly reduced with the proposed method compared to the conventional approach (width: 1.3 vs. 2.9 mm, P < 0.001; number of segments: 2.6 vs. 8.7; P < 0.0001). The number of slices with severe DRA was markedly lower for the proposed method (by 10-fold). The reader-assigned image quality scores were similar (P = 0.2), although the quantified myocardial signal-to-noise ratio was lower for the proposed method (P < 0.05). Animal studies showed that the proposed method can detect subendocardial perfusion defects, and patient results were consistent with the gold-standard invasive test. The proposed free-breathing Cartesian FPP imaging method significantly reduces the prevalence of severe DRAs compared to the conventional approach while maintaining similar resolution and image quality. J. Magn. Reson. Imaging 2017;45:542-555. © 2016 International Society for Magnetic Resonance in Medicine.

  8. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †.

    PubMed

    Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek

    2017-08-24

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of sonar images such as an unstable acoustic source, heavy speckle noise, low resolution, a single channel, and so on. However, using consecutive sonar images, if the status (i.e., the existence and identity, or name) of an object is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probabilistic methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark with increased detectability by an imaging sonar, which exploits the characteristics of acoustic waves, such as their instability and reflection depending on the roughness of the reflector surface. The proposed method is verified in basin experiments, and the results are presented.

  9. Quantile Regression for Recurrent Gap Time Data

    PubMed Central

    Luo, Xianghua; Huang, Chiung-Yu; Wang, Lan

    2014-01-01

    Summary Evaluating covariate effects on gap times between successive recurrent events is of interest in many medical and public health studies. While most existing methods for recurrent gap time analysis focus on modeling the hazard function of gap times, a direct interpretation of the covariate effects on the gap times is not available through these methods. In this article, we consider quantile regression that can provide direct assessment of covariate effects on the quantiles of the gap time distribution. Following the spirit of the weighted risk-set method by Luo and Huang (2011, Statistics in Medicine 30, 301–311), we extend the martingale-based estimating equation method considered by Peng and Huang (2008, Journal of the American Statistical Association 103, 637–649) for univariate survival data to analyze recurrent gap time data. The proposed estimation procedure can be easily implemented in existing software for univariate censored quantile regression. Uniform consistency and weak convergence of the proposed estimators are established. Monte Carlo studies demonstrate the effectiveness of the proposed method. An application to data from the Danish Psychiatric Central Register is presented to illustrate the methods developed in this article. PMID:23489055

  10. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E

    2018-03-01

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.

  11. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
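
    The Dice similarity coefficient used for evaluation above is straightforward to compute; the two shifted square masks below are made up for illustration.

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Toy "automatic" and "manual" segmentations: 4x4 squares offset by one row
auto = np.zeros((8, 8), int)
auto[2:6, 2:6] = 1        # 16 voxels
manual = np.zeros((8, 8), int)
manual[3:7, 2:6] = 1      # 16 voxels, shifted down by one row

print(dice(auto, manual))  # overlap is 12 voxels -> 2*12/(16+16) = 0.75
```

For multi-organ CTA segmentation as in this study, the coefficient is computed per organ label against the manual reference, which is why eliminating overlap and gap voxels in the label fusion step matters for the score.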

  12. Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry.

    PubMed

    Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen

    2010-11-01

    In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with resolution of 10 000-15 000 full width at half-maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with expert's visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological condition.

  13. Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry

    PubMed Central

    Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen

    2011-01-01

    In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with resolution of 10 000–15 000 full width at half-maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with expert’s visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological condition. PMID:21544266

  14. The friction cost method: a comment.

    PubMed

    Johannesson, M; Karlsson, G

    1997-04-01

    The friction cost method has been proposed as an alternative to the human-capital approach for estimating indirect costs. We argue that the friction cost method is based on implausible assumptions not supported by neoclassical economic theory. Furthermore, consistently applying the friction cost method would mean applying it to the estimation of direct costs as well, which would substantially decrease the estimated costs of health care programmes. We conclude that the friction cost method does not seem to be a useful alternative to the human-capital approach for estimating indirect costs.

  15. Towards a rational theory for CFD global stability

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.

    1989-01-01

    The fundamental notion of the consistent stability of semidiscrete analogues of evolution PDEs is explored. Lyapunov's direct method is used to develop CFD semidiscrete algorithms which yield the TVD constraint as a special case. A general formula for supplying dissipation parameters for arbitrary multidimensional conservation law systems is proposed. The reliability of the method is demonstrated by the results of two numerical tests for representative Euler shocked flows.

  16. Decentralized adaptive control

    NASA Technical Reports Server (NTRS)

    Oh, B. J.; Jamshidi, M.; Seraji, H.

    1988-01-01

    A decentralized adaptive control is proposed to stabilize and track the nonlinear, interconnected subsystems with unknown parameters. The adaptation of the controller gain is derived by using model reference adaptive control theory based on Lyapunov's direct method. The adaptive gains consist of sigma, proportional, and integral combination of the measured and reference values of the corresponding subsystem. The proposed control is applied to the joint control of a two-link robot manipulator, and the performance in computer simulation corresponds with what is expected in theoretical development.

  17. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  18. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, a method of image distortion correction is proposed. The image data it requires come from stereo images of a calibration sample; the geometric features of the image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images, and linear and polynomial fitting are applied to correct them. Second, the shape deformation features of the disparity distribution are discussed, a method of disparity distortion correction is proposed, and polynomial fitting is applied to correct the disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: an initial vision model, obtained by analyzing the direct mapping relationship between object and image points, and a residual compensation model, derived from the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. A comparison with the traditional pinhole camera model shows that the two models reconstruct X coordinates with similar precision, but the pinhole camera model reconstructs Y and Z coordinates with lower precision than ours. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave, and the PA signals of complicated biological tissue can be considered a combination of individual N-shape waves. However, the N-shape wave basis not only complicates subsequent processing but also causes aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, consisting of deconvolution and empirical mode decomposition (EMD). In the deconvolution step, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints: positive polarity and spectral consistency. With the proposed method, the resulting PA images yield more detailed structural information, and micro-structures are clearly separated and revealed. To validate the effectiveness of the method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging, as it can help distinguish micro-structures in the optimized images and even measure object sizes from the deconvolved signals.
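
    The deconvolution step can be illustrated with a simple frequency-domain Wiener deconvolution of a raw signal against a pre-measured PSF. This is only a sketch of the general idea, not the authors' implementation; the function name and the regularization constant `noise_reg` are illustrative assumptions:

```python
import numpy as np

def wiener_deconvolve(signal, psf, noise_reg=1e-2):
    """Wiener deconvolution in the frequency domain: divide out the PSF's
    spectrum, with a small regularizer to avoid amplifying noise where the
    PSF response is weak."""
    n = len(signal)
    S = np.fft.rfft(signal, n)
    H = np.fft.rfft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_reg)  # regularized inverse filter
    return np.fft.irfft(S * G, n)
```

With a spike convolved by a known PSF, the deconvolved signal recovers the spike's location, which is the behaviour that lets deconvolution separate adjacent micro-structures.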

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite-moderated gas-cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and the depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach, in which the amount of fissile material in a set configuration is slowly altered until criticality is attained, to estimate the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions, then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that the methods within the CRPE tool that predicted the critical position consistently well also agreed with one another in their predictions of power densities and of uranium and plutonium isotopics. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.
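
    The linear-fit methodology, for instance, reduces to fitting k_eff against rod position and solving for the position where k_eff = 1. In this sketch the positions and k_eff values are synthetic stand-ins for MCNP criticality results, and the function name is illustrative:

```python
import numpy as np

def estimate_critical_position(positions, k_effs):
    """Fit k_eff as a linear function of rod position and solve for the
    position at which k_eff = 1 (criticality)."""
    slope, intercept = np.polyfit(positions, k_effs, 1)
    return (1.0 - intercept) / slope

positions = np.array([0.0, 25.0, 50.0, 75.0])    # rod insertion (cm), assumed values
k_effs = np.array([1.020, 1.010, 1.000, 0.990])  # synthetic k_eff results

critical = estimate_critical_position(positions, k_effs)  # ~50.0 cm
```

The second-order method described above is the same idea with a degree-2 fit and a root solve in place of the linear inversion.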

  1. A Weighted Closed-Form Solution for RGB-D Data Registration

    NASA Astrophysics Data System (ADS)

    Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.

    2016-06-01

    Existing 3D indoor mapping methods for RGB-D data are predominantly point-based and feature-based, and in most cases the iterative closest point (ICP) algorithm and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on their theoretical random errors, and dual-number quaternions are used to represent the 3D rigid-body motion. Dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step: it does not need good initial estimates and greatly decreases the demand for computing resources in contrast to iterative methods. Our method first exploits the RGB information. We employ the scale-invariant feature transform (SIFT) for extracting, detecting, and matching features; it detects and describes local features that are invariant to scaling and rotation. To detect and filter outliers, we use the random sample consensus (RANSAC) algorithm together with a statistical dispersion measure, the interquartile range (IQR). Next, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors; loop closure consists of recognizing when the sensor revisits a region. Finally, a globally consistent map is created by minimizing the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment with an absolute accuracy of around 1.5% of the trajectory length.
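
    The paper's closed-form solver uses dual-number quaternions; as a stand-in, the weighted Kabsch (SVD-based) solve below illustrates the same idea of a one-step, weighted, closed-form rigid-body fit between matched point sets. All names are illustrative:

```python
import numpy as np

def weighted_rigid_fit(P, Q, w):
    """Closed-form weighted rigid-body fit mapping points P (n x 3) onto
    Q (n x 3) with per-point weights w, via SVD of the weighted
    cross-covariance matrix (Kabsch algorithm)."""
    w = w / w.sum()
    pc = (w[:, None] * P).sum(axis=0)        # weighted centroids
    qc = (w[:, None] * Q).sum(axis=0)
    H = (P - pc).T @ (w[:, None] * (Q - qc))  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t
```

As in the abstract, no initial guess is needed: the rotation and translation fall out of a single SVD.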

  2. Issues associated with Galilean invariance on a moving solid boundary in the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping

    2017-01-01

    In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usanov, D. A., E-mail: UsanovDA@info.sgu.ru; Nikitov, S. A.; Skripal, A. V.

    A method is proposed for measuring the electrophysical characteristics of semiconductor structures: the electrical conductivity of the n layer, which plays the role of substrate for the structure, and the thickness and electrical conductivity of the heavily doped epitaxial n⁺ layer. The method is based on a one-dimensional microwave photonic crystal with a violation of periodicity containing the semiconductor structure under investigation. The characteristics of epitaxial gallium-arsenide structures, consisting of an epitaxial layer and a semi-insulating substrate, measured by this method are presented.

  4. [A computer tomography assisted method for the automatic detection of region of interest in dynamic kidney images].

    PubMed

    Jing, Xueping; Zheng, Xiujuan; Song, Shaoli; Liu, Kai

    2017-12-01

    Glomerular filtration rate (GFR), which can be estimated by the Gates method with dynamic kidney single photon emission computed tomography (SPECT) imaging, is a key indicator of renal function. In this paper, an automatic computed tomography (CT)-assisted detection method for the kidney region of interest (ROI) is proposed to achieve objective and accurate GFR calculation. In this method, the CT coronal projection image and the enhanced SPECT synthetic image are first generated and registered together. Then, the kidney ROIs are delineated using a modified level set algorithm, and the background ROIs are obtained from the kidney ROIs. Finally, the GFR value is calculated via the Gates method. The GFR values estimated by the proposed method were consistent with the clinical reports. This automatic method can improve the accuracy and stability of kidney ROI detection for GFR calculation, especially when kidney function has been severely damaged.

  5. Vortex mass in a superfluid

    NASA Astrophysics Data System (ADS)

    Simula, Tapio

    2018-02-01

    We consider the inertial mass of a vortex in a superfluid. We obtain a vortex mass that is well defined and is determined microscopically and self-consistently by the elementary excitation energy of the kelvon quasiparticle localized within the vortex core. The obtained result for the vortex mass is found to be consistent with experimental observations on superfluid quantum gases and vortex rings in water. We propose a method to measure the inertial rest mass and Berry phase of a vortex in superfluid Bose and Fermi gases.

  6. Enhancement of flow measurements using fluid-dynamic constraints

    NASA Astrophysics Data System (ADS)

    Egger, H.; Seitz, T.; Tropea, C.

    2017-09-01

    Novel experimental modalities acquire spatially resolved velocity measurements for steady-state and transient flows of interest in engineering and biological applications. One drawback of such high-resolution velocity data is their susceptibility to measurement errors. In this paper, we propose a novel filtering strategy that enhances the noisy measurements to reconstruct smooth, divergence-free velocity and corresponding pressure fields that together approximately comply with a prescribed flow model. The main step in our approach is the appropriate use of the velocity measurements in the design of a linearized flow model, which can be shown to be well-posed and consistent with the true velocity and pressure fields up to measurement and modeling errors. The reconstruction procedure is then formulated as an optimal control problem for this linearized flow model, and the resulting filter has analyzable smoothing and approximation properties. We briefly discuss the discretization of the approach by finite element methods and comment on efficient solution by iterative methods. The capability of the proposed filter to significantly reduce data noise is demonstrated by numerical tests, including an application to experimental data, and we compare it with other methods such as smoothing and solenoidal filtering.

  7. An analytical fuzzy-based approach to L2-gain optimal control of input-affine nonlinear systems using Newton-type algorithm

    NASA Astrophysics Data System (ADS)

    Milic, Vladimir; Kasac, Josip; Novakovic, Branko

    2015-10-01

    This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm combines a recursive chain rule for first- and second-order derivatives, Newton's method, a multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.

  8. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
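
    The adaptively soft-thresholded SVD idea can be sketched as follows: weights that decrease as the singular value grows penalize small (noise) singular values more heavily, so a single SVD plus thresholding yields the global solution. The specific weight form and the parameters `lam` and `gamma` below are illustrative, not the paper's:

```python
import numpy as np

def adaptive_svt(X, lam, gamma=2.0):
    """Adaptively soft-threshold the singular values of X: the weight
    1/s^gamma shrinks small singular values to zero while leaving large
    ones nearly untouched, producing a low-rank approximation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    weights = 1.0 / (s + 1e-12) ** gamma      # larger s -> smaller penalty
    s_new = np.maximum(s - lam * weights, 0.0)
    return U @ np.diag(s_new) @ Vt
```

On a low-rank signal plus small noise, the noise singular values are annihilated while the dominant one survives almost unchanged.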

  9. Reconstructed phase spaces of intrinsic mode functions. Application to postural stability analysis.

    PubMed

    Snoussi, Hichem; Amoud, Hassan; Doussot, Michel; Hewson, David; Duchêne, Jacques

    2006-01-01

    In this contribution, we propose an efficient nonlinear analysis method characterizing postural steadiness. The analyzed signal is the displacement of the centre of pressure (COP) collected from a force plate used for measuring postural sway. The proposed method consists of analyzing the nonlinear dynamics of the intrinsic mode functions (IMFs) of the COP signal. The nonlinear properties are assessed through the reconstructed phase spaces of the different IMFs. This study shows some specific geometries of the attractors of some intrinsic modes. Moreover, the volume spanned by the geometric attractors in the reconstructed phase space is an efficient indicator of the subject's postural stability. Experimental results corroborate the effectiveness of the method in blindly discriminating young subjects, elderly subjects and subjects at risk of falling.
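
    Phase-space reconstruction of a scalar signal such as an IMF is done by standard time-delay embedding; the embedding dimension and delay below are illustrative choices, not the paper's:

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Time-delay embedding: each row is (x[k], x[k+tau], ..., x[k+(dim-1)*tau]),
    reconstructing a phase space from a scalar time series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

The volume spanned by the resulting point cloud is the kind of attractor-based indicator the abstract describes.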

  10. A novel method for repeatedly generating speckle patterns used in digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Juan; Sweedy, Ahmed; Gitzhofer, François; Baroud, Gamal

    2018-01-01

    Speckle patterns play a key role in Digital Image Correlation (DIC) measurement, and generating an optimal speckle pattern has been the goal for decades now. The usual method of generating a speckle pattern is by manually spraying the paint on the specimen. However, this makes it difficult to reproduce the optimal pattern for maintaining identical testing conditions and achieving consistent DIC results. This study proposed and evaluated a novel method using an atomization system to repeatedly generate speckle patterns. To verify the repeatability of the speckle patterns generated by this system, simulation and experimental studies were systematically performed. The results from both studies showed that the speckle patterns and, accordingly, the DIC measurements become highly accurate and repeatable using the proposed atomization system.

  11. Predicting the Effective Elastic Properties of Polymer Bonded Explosives based on Micromechanical Methods

    NASA Astrophysics Data System (ADS)

    Wang, Jingcheng; Luo, Jingrun

    2018-04-01

    Due to the extremely high particle volume fraction (greater than 85%) and the damage features of polymer bonded explosives (PBXs), conventional micromechanical methods yield inaccurate estimates of their effective elastic properties. Based on their manufacturing characteristics, a multistep approach built on micromechanical methods is proposed: PBXs are treated as pseudo-polycrystalline materials consisting of equivalent composite particles (explosive crystals with binder coating), rather than two-phase composites of explosive particles and binder matrix. The moduli of the composite spheres are first obtained by the generalized self-consistent method, and the self-consistent method is then modified to calculate the effective moduli of the PBX; defects and the particle size distribution are accounted for by the Mori-Tanaka method. Results show that when the multistep approach is applied to PBX 9501, the estimates are far more accurate than conventional micromechanical results: the bulk modulus is 5.75% higher and the shear modulus 5.78% lower than the experimental values. Further analysis shows that while the particle volume fraction and the binder's properties strongly influence the effective moduli of the PBX, the particle moduli have only a minor influence. Investigation of another particle size distribution indicates that using more fine particles will enhance the effective moduli of the PBX.

  12. Efficient generation of 3D hologram for American Sign Language using look-up table

    NASA Astrophysics Data System (ADS)

    Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo

    2010-02-01

    American Sign Language (ASL) is one of the languages that most helps hearing-impaired people communicate. Current 2-D broadcasting and 2-D movies use ASL to convey information, help viewers understand the scene, and translate foreign languages, and ASL will not disappear from future three-dimensional (3-D) broadcasting or 3-D movies because of its usefulness. Several approaches for generating CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method, but these methods either need much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method largely consists of five steps: construction of the LUT for each ASL image, extraction of characters from scripts or the situation, retrieval of the fringe patterns for those characters from the ASL LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL, and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.

  13. All-Systolic Non-ECG-gated Myocardial Perfusion MRI: Feasibility of Multi-Slice Continuous First-Pass Imaging

    PubMed Central

    Sharif, Behzad; Arsanjani, Reza; Dharmakumar, Rohan; Bairey Merz, C. Noel; Berman, Daniel S.; Li, Debiao

    2015-01-01

    Purpose To develop and test the feasibility of a new method for non-ECG-gated first-pass perfusion (FPP) cardiac MR capable of imaging multiple short-axis slices at the same systolic cardiac phase. Methods A magnetization-driven pulse sequence was developed for non-ECG-gated FPP imaging without saturation-recovery preparation using continuous slice-interleaved radial sampling. The image reconstruction method, dubbed TRACE, employed self-gating based on reconstruction of a real-time image-based navigator combined with reference-constrained compressed sensing. Data from ischemic animal studies (n=5) was used in a simulation framework to evaluate temporal fidelity. Healthy subjects (n=5) were studied using both the proposed and conventional method to compare the myocardial contrast-to-noise ratio (CNR). Patients (n=2) underwent adenosine stress studies using the proposed method. Results Temporal fidelity of the developed method was shown to be sufficient at high heart rates. The healthy volunteer studies demonstrated normal perfusion and no artifacts. Compared to the conventional scheme, myocardial CNR for the proposed method was slightly higher (8.6±0.6 vs. 8.0±0.7). Patient studies showed stress-induced perfusion defects consistent with invasive angiography. Conclusions The presented methods and results demonstrate feasibility of the proposed approach for high-resolution non-ECG-gated FPP imaging and indicate its potential for achieving desirable image quality (high CNR, no dark-rim artifacts) with a 3-slice spatial coverage, all imaged at the same systolic phase. PMID:26052843

  14. Integrative Structure Determination of Protein Assemblies by Satisfaction of Spatial Restraints

    NASA Astrophysics Data System (ADS)

    Alber, Frank; Chait, Brian T.; Rout, Michael P.; Sali, Andrej

    To understand the cell, we need to determine the structures of macromolecular assemblies, many of which consist of tens to hundreds of components. A great variety of experimental data can be used to characterize the assemblies at several levels of resolution, from atomic structures to component configurations. To maximize completeness, resolution, accuracy, precision and efficiency of the structure determination, a computational approach is needed that can use spatial information from a variety of experimental methods. We propose such an approach, defined by its three main components: a hierarchical representation of the assembly, a scoring function consisting of spatial restraints derived from experimental data, and an optimization method that generates structures consistent with the data. We illustrate the approach by determining the configuration of the 456 proteins in the nuclear pore complex from baker's yeast.

  15. On the equivalence of LIST and DIIS methods for convergence acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Alejandro J.; Scuseria, Gustavo E.

    2015-04-28

    Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. Here, we demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay's DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
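
    For reference, Pulay's DIIS extrapolation solves a small bordered linear system for the mixing coefficients, which minimize the norm of the combined error vector subject to the coefficients summing to one. This is a textbook sketch of that step (not the LIST formulation):

```python
import numpy as np

def diis_coefficients(error_vectors):
    """Solve the DIIS system: minimize || sum_i c_i e_i ||^2 subject to
    sum_i c_i = 1, via the standard bordered B-matrix with a Lagrange
    multiplier in the last row/column."""
    m = len(error_vectors)
    B = np.zeros((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = np.dot(error_vectors[i], error_vectors[j])
    B[-1, :m] = -1.0          # constraint row
    B[:m, -1] = -1.0          # constraint column
    rhs = np.zeros(m + 1)
    rhs[-1] = -1.0
    c = np.linalg.solve(B, rhs)
    return c[:m]              # drop the Lagrange multiplier
```

The extrapolated density (or Fock matrix) is then the same linear combination of the stored iterates.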

  16. Hierarchical Bayesian Models of Subtask Learning

    ERIC Educational Resources Information Center

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  17. 48 CFR 215.203-70 - Requests for proposals-tiered evaluation of offers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY... shall be consistent with FAR part 19. (b) Consideration shall be given to the tiers of small businesses (e.g., 8(a), HUBZone small business, service-disabled veteran-owned small business, small business...

  18. Using Photo-Interviewing as Tool for Research and Evaluation.

    ERIC Educational Resources Information Center

    Dempsey, John V.; Tucker, Susan A.

    Arguing that photo-interviewing yields richer data than that usually obtained from verbal interviewing procedures alone, it is proposed that this method of data collection be added to "standard" methodologies in instructional development research and evaluation. The process, as described in this paper, consists of using photographs of…

  19. Structure of the Autism Symptom Phenotype: A Proposed Multidimensional Model

    ERIC Educational Resources Information Center

    Georgiades, Stelios; Szatmari, Peter; Zwaigenbaum, Lonnie; Duku, Eric; Bryson, Susan; Roberts, Wendy; Goldberg, Jeremy; Mahoney, William

    2007-01-01

    Background: The main objective of this study was to develop a comprehensive, empirical model that would allow the reorganization of the structure of the pervasive developmental disorder symptom phenotype through factor analysis into more homogeneous dimensions. Method: The sample consisted of 209 children with pervasive developmental disorder…

  20. PEOR--Engaging Students in Demonstrations

    ERIC Educational Resources Information Center

    Bonello, Charles; Scaife, Jon

    2009-01-01

    Demonstrations are a core part of science teaching. In 1980 a three-part assessment method using demonstrating was proposed. Known as DOE this consisted of demonstration, observation and explanation. DOE quickly evolved into POE: predict, observe, explain. In the light of experiences with POE and insights from constructivist theory we set out in…

  1. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of a lens's limited depth of field, digital cameras cannot acquire an all-in-focus image of objects at varying distances in a scene; multi-focus image fusion can effectively solve this problem. In block-based multi-focus image fusion methods, blocking artifacts often occur. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it exploits characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is then used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. A multi-focus image fusion experiment is also carried out to verify the proposed fusion method by visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves the undistorted edge details in the focused regions of the source images.
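
    For context, a naive fixed-block fusion rule, keeping the block with the higher variance as a simple focus measure, looks as follows; the paper instead adapts the block size via PSO under LUE-SSIM. All names here are illustrative:

```python
import numpy as np

def block_fuse(img_a, img_b, block=8):
    """Naive block-based multi-focus fusion: for each block, keep the
    version with higher variance (a crude sharpness/focus measure)."""
    fused = np.empty_like(img_a)
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y+block, x:x+block]
            b = img_b[y:y+block, x:x+block]
            fused[y:y+block, x:x+block] = a if a.var() >= b.var() else b
    return fused
```

The blocking artifacts the abstract mentions arise exactly from such hard per-block decisions with a fixed block size.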

  2. A multi-product green supply chain under government supervision with price and demand uncertainty

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Ashkan; Zamani, Soma

    2018-05-01

    In this paper, a bi-level game-theoretic model is proposed to investigate the effects of governmental financial intervention on a green supply chain. The problem is formulated as a bi-level program for a green supply chain that produces various products with different environmental pollution levels, and it also accounts for uncertainties in market demand and in the sale prices of raw materials and products. The model is then transformed into a single-level nonlinear programming problem by replacing the lower-level optimization problem with its Karush-Kuhn-Tucker optimality conditions, and a genetic algorithm is applied to solve the resulting nonlinear program. Finally, to validate the proposed method, the computational results obtained with the genetic algorithm are compared with the global optimal solutions attained by an enumerative method. Analytical results indicate that the proposed GA offers better solutions for large problem instances. We also conclude that government financial intervention, consisting of green taxation and subsidization, is an effective method for stabilizing the performance of green supply chain members.
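
    The GA solution step can be sketched generically; this toy real-coded GA (tournament selection, arithmetic crossover, Gaussian mutation) minimizes a one-dimensional test function and is in no way the paper's supply-chain formulation:

```python
import random

def genetic_minimize(f, lo, hi, pop_size=30, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation.  A toy stand-in for the
    GA the paper applies to its single-level nonlinear program."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            # tournament selection: best of three random individuals
            p1 = min(rng.sample(pop, 3), key=f)
            p2 = min(rng.sample(pop, 3), key=f)
            # arithmetic crossover plus Gaussian mutation
            w = rng.random()
            child = w * p1 + (1 - w) * p2 + rng.gauss(0, 0.05 * (hi - lo))
            new.append(min(max(child, lo), hi))  # clip to the box
        pop = new
    return min(pop, key=f)

best = genetic_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

    Real formulations would encode one decision variable per product and penalize violations of the KKT-derived constraints in the fitness function.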

  3. A novel joint timing/frequency synchronization scheme based on Radon-Wigner transform of LFM signals in CO-OFDM systems

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Wei, Ying; Zeng, Xiangye; Lu, Jia; Zhang, Shuangxi; Wang, Mengjun

    2018-03-01

    A joint timing and frequency synchronization method is proposed for coherent optical orthogonal frequency-division multiplexing (CO-OFDM) systems. The timing offset (TO), the integer frequency offset (FO) and the fractional FO can all be estimated from a single training symbol, which consists of two linear frequency modulation (LFM) signals with opposite chirp rates. By detecting the peaks of the LFM signals after the Radon-Wigner transform (RWT), the TO and the integer FO can be estimated simultaneously; the fractional FO is then acquired through the self-correlation of the same training symbol. Simulation results show that the proposed method gives a more accurate TO estimate than existing methods, especially under poor OSNR conditions. For FO estimation, both the fractional and the integer FO can be estimated from the proposed training symbol with no extra overhead, yielding a more accurate estimate over a large FO estimation range of [-5 GHz, 5 GHz].

  4. Study on Finite Element Model Updating in Highway Bridge Static Loading Test Using Spatially-Distributed Optical Fiber Sensors

    PubMed Central

    Wu, Bitao; Lu, Huaxi; Chen, Bo; Gao, Zhicheng

    2017-01-01

    A finite element model updating method that combines dynamic-static long-gauge strain responses is proposed for highway bridge static loading tests. For this method, an objective function consisting of the static long-gauge strains and the first-order modal macro-strain parameter (frequency) is established, wherein the local bending stiffness, density and boundary conditions of the structure are selected as the design variables. The relationship between the macro-strain and the local element stiffness was studied first. It is revealed that the macro-strain is inversely proportional to the local stiffness covered by the long-gauge strain sensor. This relation is important for modifying the local stiffness based on the macro-strain, and the local and global parameters can be updated simultaneously. Then, a series of numerical simulations and experiments were conducted to verify the effectiveness of the proposed method. The results show that the static deformation, macro-strains and macro-strain modes can be predicted well using the proposed updated model. PMID:28753912

  5. International journal of computational fluid dynamics real-time prediction of unsteady flow based on POD reduced-order model and particle filter

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru

    2016-04-01

    An integrated method consisting of a proper orthogonal decomposition (POD)-based reduced-order model (ROM) and a particle filter (PF) is proposed for real-time prediction of an unsteady flow field. The proposed method is validated using identical twin experiments on an unsteady flow field around a circular cylinder at Reynolds numbers of 100 and 1000. In this study, a PF is coupled with the ROM (ROM-PF) to modify the temporal coefficients of the ROM based on observation data, because the prediction capability of the ROM alone is limited by stability issues. The proposed method reproduces the unsteady flow field several orders of magnitude faster than a reference numerical simulation based on the Navier-Stokes equations. Furthermore, the effects of observation- and simulation-related parameters on the prediction accuracy are studied. Most of the energy modes of the unsteady flow field are captured, and it is possible to stably predict the long-term evolution with ROM-PF.
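
    The POD step of such a ROM can be sketched as an SVD of the snapshot matrix (a generic sketch under the usual snapshot-POD convention; `pod_basis` is an assumed helper name, and the particle-filter correction of the coefficients is not shown):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition of a snapshot matrix whose
    columns are flattened flow fields at successive times.  Returns
    the leading r spatial modes and their temporal coefficients."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :r]                    # spatial POD modes
    coeffs = s[:r, None] * Vt[:r, :]    # temporal coefficients
    return modes, coeffs

# usage: a rank-1 "flow" is reconstructed exactly by one mode
X = np.outer(np.sin(np.linspace(0, np.pi, 50)), np.ones(20))
modes, coeffs = pod_basis(X, r=1)
X_rom = modes @ coeffs
```

    The ROM then evolves only the low-dimensional coefficients in time, which is what makes the approach orders of magnitude cheaper than a full simulation.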

  6. Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data

    PubMed Central

    Hu, Jianhua; Wang, Peng; Qu, Annie

    2014-01-01

    Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433

  7. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can help improve the performance of fingerprint recognition systems. The representative features for assessing the quality of fingerprint images are known to vary across capture sensor types. In this paper, an effective quality estimation system that can be adapted to different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality and consistency. The proposed system extracts basic features and generates next-level features applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. Meanwhile, the proposed method is able to eliminate residue images from optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632

  8. Study on Finite Element Model Updating in Highway Bridge Static Loading Test Using Spatially-Distributed Optical Fiber Sensors.

    PubMed

    Wu, Bitao; Lu, Huaxi; Chen, Bo; Gao, Zhicheng

    2017-07-19

    A finite element model updating method that combines dynamic-static long-gauge strain responses is proposed for highway bridge static loading tests. For this method, an objective function consisting of the static long-gauge strains and the first-order modal macro-strain parameter (frequency) is established, wherein the local bending stiffness, density and boundary conditions of the structure are selected as the design variables. The relationship between the macro-strain and the local element stiffness was studied first. It is revealed that the macro-strain is inversely proportional to the local stiffness covered by the long-gauge strain sensor. This relation is important for modifying the local stiffness based on the macro-strain, and the local and global parameters can be updated simultaneously. Then, a series of numerical simulations and experiments were conducted to verify the effectiveness of the proposed method. The results show that the static deformation, macro-strains and macro-strain modes can be predicted well using the proposed updated model.

  9. Brain tissue segmentation in 4D CT using voxel classification

    NASA Astrophysics Data System (ADS)

    van den Boom, R.; Oei, M. T. H.; Lafebre, S.; Oostveen, L. J.; Meijer, F. J. A.; Steens, S. C. A.; Prokop, M.; van Ginneken, B.; Manniesing, R.

    2012-02-01

    A method is proposed to segment anatomical regions of the brain from 4D computed tomography (CT) patient data. The method consists of a three-step voxel classification scheme, with each step focusing on structures that are increasingly difficult to segment. The first step classifies air and bone, the second step classifies vessels, and the third step classifies white matter, gray matter and cerebrospinal fluid. The time-averaged intensity value and the temporal intensity change value were used as features. In each step, a k-Nearest-Neighbor classifier was used to classify the voxels. Training data were obtained by placing regions of interest in reconstructed 3D image data. The method has been applied to ten 4D CT cerebral patient datasets. A leave-one-out experiment showed consistent and accurate segmentation results.
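
    One classification step can be sketched as a plain kNN vote on the two features; the feature values below are hypothetical toy numbers, and the real pipeline trains a separate classifier for each of the three steps:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query_feats, k=3):
    """k-Nearest-Neighbor classifier on two features per voxel
    (time-averaged intensity, temporal intensity change).  A sketch
    of a single generic step, not the full three-step
    air/bone -> vessel -> tissue scheme."""
    out = []
    for q in query_feats:
        d = np.linalg.norm(train_feats - q, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        # majority vote among the k nearest training voxels
        vals, counts = np.unique(nearest, return_counts=True)
        out.append(vals[np.argmax(counts)])
    return np.array(out)

# hypothetical training voxels: bone (bright, static) vs vessel (enhancing)
train = np.array([[1000., 5.], [900., 3.], [950., 4.],
                  [200., 80.], [210., 90.], [190., 85.]])
labels = np.array([0, 0, 0, 1, 1, 1])   # 0 = bone, 1 = vessel (toy labels)
pred = knn_classify(train, labels, np.array([[980., 4.], [205., 88.]]))
```

    In practice the intensity features would be normalized first, since the two feature scales differ by orders of magnitude.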

  10. High resolution imaging of a subsonic projectile using automated mirrors with large aperture

    NASA Astrophysics Data System (ADS)

    Tateno, Y.; Ishii, M.; Oku, H.

    2017-02-01

    Visual tracking of high-speed projectiles is required for studying the aerodynamics around such objects. One solution to this problem is a tracking method based on the 1 ms Auto Pan-Tilt (1ms-APT) system that we proposed in previous work, which consists of rotational mirrors and a high-speed image processing system. However, the images obtained with that system did not have high enough resolution for detailed measurement of the projectiles, because of the size of the mirrors. In this study, we propose a new system with enlarged mirrors for tracking high-speed projectiles to achieve higher-resolution imaging, and we confirmed the effectiveness of the system in an experiment in which a projectile flying at subsonic speed was tracked.

  11. Multi-Domain Transfer Learning for Early Diagnosis of Alzheimer's Disease.

    PubMed

    Cheng, Bo; Liu, Mingxia; Shen, Dinggang; Li, Zuoyong; Zhang, Daoqiang

    2017-04-01

    Recently, transfer learning has been successfully applied to the early diagnosis of Alzheimer's Disease (AD) based on multi-domain data. However, most existing methods only use data from a single auxiliary domain, and thus cannot utilize the intrinsic useful correlation information from multiple domains. Accordingly, in this paper, we consider the joint learning of tasks in multiple auxiliary domains and the target domain, and propose a novel Multi-Domain Transfer Learning (MDTL) framework for early diagnosis of AD. Specifically, the proposed MDTL framework consists of two key components: 1) a multi-domain transfer feature selection (MDTFS) model that selects the most informative feature subset from multi-domain data, and 2) a multi-domain transfer classification (MDTC) model that can identify disease status for early AD detection. We evaluate our method on 807 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline magnetic resonance imaging (MRI) data. The experimental results show that the proposed MDTL method can effectively utilize multi-auxiliary-domain data to improve learning performance in the target domain, compared with several state-of-the-art methods.

  12. Thermodynamically consistent data-driven computational mechanics

    NASA Astrophysics Data System (ADS)

    González, David; Chinesta, Francisco; Cueto, Elías

    2018-05-01

    In the paradigm of data-intensive science, automated, unsupervised discovering of governing equations for a given physical phenomenon has attracted a lot of attention in several branches of applied sciences. In this work, we propose a method able to avoid the identification of the constitutive equations of complex systems and rather work in a purely numerical manner by employing experimental data. In sharp contrast to most existing techniques, this method does not rely on the assumption on any particular form for the model (other than some fundamental restrictions placed by classical physics such as the second law of thermodynamics, for instance) nor forces the algorithm to find among a predefined set of operators those whose predictions fit best to the available data. Instead, the method is able to identify both the Hamiltonian (conservative) and dissipative parts of the dynamics while satisfying fundamental laws such as energy conservation or positive production of entropy, for instance. The proposed method is tested against some examples of discrete as well as continuum mechanics, whose accurate results demonstrate the validity of the proposed approach.

  13. Equivalent orthotropic elastic moduli identification method for laminated electrical steel sheets

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Nishikawa, Yasunari; Yamasaki, Shintaro; Fujita, Kikuo; Kawamoto, Atsushi; Kuroishi, Masakatsu; Nakai, Hideo

    2016-05-01

    In this paper, a combined numerical-experimental methodology for the identification of elastic moduli of orthotropic media is presented. Special attention is given to the laminated electrical steel sheets, which are modeled as orthotropic media with nine independent engineering elastic moduli. The elastic moduli are determined specifically for use with finite element vibration analyses. We propose a three-step methodology based on a conventional nonlinear least squares fit between measured and computed natural frequencies. The methodology consists of: (1) successive augmentations of the objective function by increasing the number of modes, (2) initial condition updates, and (3) appropriate selection of the natural frequencies based on their sensitivities on the elastic moduli. Using the results of numerical experiments, it is shown that the proposed method achieves more accurate converged solution than a conventional approach. Finally, the proposed method is applied to measured natural frequencies and mode shapes of the laminated electrical steel sheets. It is shown that the method can successfully identify the orthotropic elastic moduli that can reproduce the measured natural frequencies and frequency response functions by using finite element analyses with a reasonable accuracy.

  14. Inferring drug-disease associations based on known protein complexes.

    PubMed

    Yu, Liang; Huang, Jianbin; Ma, Zhixin; Zhang, Jing; Zou, Yapeng; Gao, Lin

    2015-01-01

    Inferring drug-disease associations is critical for unveiling disease mechanisms, as well as for discovering novel functions of available drugs, i.e., drug repositioning. Previous work is primarily based on the drug-gene-disease relationship, which discards much important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, in which weights are assigned to drug-disease associations using probabilities. Then, from this tripartite network, we obtain the indirect weighted relationships between drugs and diseases; the larger the weight, the more reliable the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results are directly reinforced by existing biomedical literature, suggesting that the proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html.
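
    The indirect weighting through shared complexes can be sketched as a product of the two bipartite adjacency matrices (toy 0/1 memberships for clarity; the paper assigns probabilistic weights rather than binary ones):

```python
import numpy as np

# Hypothetical toy memberships: rows = drugs, cols = protein complexes,
# then rows = complexes, cols = diseases.
drug_complex = np.array([[1, 1, 0],
                         [0, 1, 1]])
complex_disease = np.array([[1, 0],
                            [1, 1],
                            [0, 1]])

# Indirect drug-disease weight: (weighted) number of paths through
# shared protein complexes in the tripartite network.
drug_disease = drug_complex @ complex_disease
```

    With probabilistic edge weights in place of 0/1 entries, the same matrix product accumulates the path weights, so a larger entry means more, or stronger, shared-complex evidence for that drug-disease pair.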

  15. Toward unbiased estimations of the statefinder parameters

    NASA Astrophysics Data System (ADS)

    Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando

    2017-09-01

    With the use of simulated supernova catalogs, we show that the statefinder parameters are estimated poorly, and with bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics for cosmologies near the concordance model, demonstrating that these are very large, which makes standard cosmography unsuitable for future, wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of the usual direct Taylor expansions of the luminosity distance. Moreover, to speed up the numerical computations, we estimate the coefficients of our expansions hierarchically, with the order of the expansion depending on the redshift of each data point. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here outperform the standard cosmographic approach in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable cosmographic methods.
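
    For reference, the direct low-redshift cosmographic expansion that the proposed method replaces reads, in a flat universe and up to second order (a standard textbook form, with $q_0$ the deceleration and $j_0$ the jerk parameter):

```latex
d_L(z) = \frac{c\,z}{H_0}\left[\,1
  + \frac{1}{2}\left(1 - q_0\right)z
  - \frac{1}{6}\left(1 - q_0 - 3q_0^2 + j_0\right)z^2
  + \mathcal{O}(z^3)\right]
```

    The statefinders are built from these same derivatives of the expansion history, which is why a poorly convergent Taylor series for $d_L(z)$ propagates directly into biased statefinder estimates.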

  16. A new method for distortion magnetic field compensation of a geomagnetic vector measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Zhongyan; Pan, Mengchun; Tang, Ying; Zhang, Qi; Geng, Yunling; Wan, Chengbiao; Chen, Dixiang; Tian, Wugang

    2016-12-01

    The geomagnetic vector measurement system mainly consists of a three-axis magnetometer and an inertial navigation system (INS), both of which carry many ferromagnetic parts. The magnetometer is distorted by these ferromagnetic parts and by other electrical equipment within the system, such as the INS and the power circuit module, which can lead to geomagnetic vector measurement errors of thousands of nT. The system therefore has to be compensated in order to guarantee measurement accuracy. In this paper, a new distortion magnetic field compensation method is proposed, in which a permanent magnet placed at different relative positions is used to change the ambient magnetic field and thereby construct the equations of the error-model parameters, which can then be accurately estimated by solving linear equations. Experiments conducted to verify the effectiveness of the proposed method demonstrate that, after compensation, the component errors of the measured geomagnetic field are reduced significantly, showing that the proposed method can effectively improve the accuracy of the geomagnetic vector measurement system.
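
    The parameter-estimation step reduces to solving a linear system; a minimal least-squares sketch (the matrix rows here are hypothetical stand-ins for the equations built from readings at different magnet positions, not the paper's actual error model):

```python
import numpy as np

def estimate_params(A, b):
    """Solve the linear error-model equations A x = b for the
    compensation parameters in the least-squares sense.  In the
    paper, each row of A would come from a magnetometer reading
    taken with the permanent magnet at a different relative position."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# hypothetical over-determined system with known solution (2, -1)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
b = A @ np.array([2.0, -1.0])
params = estimate_params(A, b)
```

    Taking more magnet positions than unknown parameters over-determines the system, which is what makes the estimate robust to measurement noise.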

  17. Multi-Domain Transfer Learning for Early Diagnosis of Alzheimer’s Disease

    PubMed Central

    Cheng, Bo; Liu, Mingxia; Li, Zuoyong

    2017-01-01

    Recently, transfer learning has been successfully applied to the early diagnosis of Alzheimer’s Disease (AD) based on multi-domain data. However, most existing methods only use data from a single auxiliary domain, and thus cannot utilize the intrinsic useful correlation information from multiple domains. Accordingly, in this paper, we consider the joint learning of tasks in multiple auxiliary domains and the target domain, and propose a novel Multi-Domain Transfer Learning (MDTL) framework for early diagnosis of AD. Specifically, the proposed MDTL framework consists of two key components: 1) a multi-domain transfer feature selection (MDTFS) model that selects the most informative feature subset from multi-domain data, and 2) a multi-domain transfer classification (MDTC) model that can identify disease status for early AD detection. We evaluate our method on 807 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline magnetic resonance imaging (MRI) data. The experimental results show that the proposed MDTL method can effectively utilize multi-auxiliary-domain data to improve learning performance in the target domain, compared with several state-of-the-art methods. PMID:27928657

  18. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  19. Inferring drug-disease associations based on known protein complexes

    PubMed Central

    2015-01-01

    Inferring drug-disease associations is critical for unveiling disease mechanisms, as well as for discovering novel functions of available drugs, i.e., drug repositioning. Previous work is primarily based on the drug-gene-disease relationship, which discards much important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, in which weights are assigned to drug-disease associations using probabilities. Then, from this tripartite network, we obtain the indirect weighted relationships between drugs and diseases; the larger the weight, the more reliable the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results are directly reinforced by existing biomedical literature, suggesting that the proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html. PMID:26044949

  20. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    PubMed

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals and might therefore affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the MAHNOB-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5%, with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the subjects' emotional states were ignored. When the emotional states were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The method needs further evaluation with other classifiers and with variation in the ECG signals, e.g., normal ECGs vs. ECGs with arrhythmias, ECGs from subjects of various ages, and ECGs from other affective databases.

  1. Online frequency estimation with applications to engine and generator sets

    NASA Astrophysics Data System (ADS)

    Manngård, Mikael; Böling, Jari M.

    2017-07-01

    Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper presents computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is presented, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results both in simulation studies and in a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
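
    The Goertzel filter at the core of the first bank evaluates a single DFT bin recursively; a minimal sketch of the classic algorithm (not the paper's exact sliding-window formulation):

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Goertzel algorithm: power of one DFT bin without a full FFT,
    which is what makes banks of such filters cheap when only a few
    frequencies of interest must be tracked online."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin index
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2       # second-order recursion
        s_prev2, s_prev = s_prev, s
    # squared magnitude of the k-th DFT bin
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# usage: a 50 Hz tone sampled at 1 kHz concentrates in the 50 Hz bin
sig = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]
```

    A bank of such filters, one per frequency of interest, yields per-bin power estimates far more cheaply than a full FFT when only a handful of bins matter.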

  2. Evaluation of fiber reinforced polymers using active infrared thermography system with thermoelectric cooling modules

    NASA Astrophysics Data System (ADS)

    Chady, Tomasz; Gorący, Krzysztof

    2018-04-01

    Active infrared thermography is increasingly used for nondestructive testing of various materials, and its properties make it uniquely suited to the inspection of composites. In active thermography, an external energy source is usually used to induce a thermal contrast inside the tested objects, conventionally by heating (e.g., with halogen or flash lamps). In this study, we propose to use a cooling unit instead. The proposed system consists of a thermal imaging infrared camera, which observes the surface of the inspected specimen, and a specially designed cooling unit with thermoelectric (Peltier) modules.

  3. An optimization model for infrared image enhancement method based on p-q norm constrained by saliency value

    NASA Astrophysics Data System (ADS)

    Fan, Fan; Ma, Yong; Dai, Xiaobing; Mei, Xiaoguang

    2018-04-01

    Infrared image enhancement is an important and necessary task in infrared imaging systems. In this paper, by defining contrast in terms of the areas between adjacent non-zero histogram bins, a novel analytical model is proposed to enlarge these areas so that the contrast is increased. In addition, the analytical model is regularized by a penalty term based on saliency values, so that salient regions are enhanced as well. Thus, both the whole image and the salient regions can be enhanced, and rank consistency is preserved. Comparisons on 8-bit images show that the proposed method can enhance infrared images with more detail.
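
    A crude illustration of enlarging the gaps between occupied grey levels (the paper instead solves a regularized optimization with a saliency penalty; `spread_nonzero_levels` is a hypothetical simplification, not the paper's model):

```python
import numpy as np

def spread_nonzero_levels(img, out_max=255):
    """Spread the occupied grey levels uniformly over the output
    range, enlarging the gaps between adjacent non-zero histogram
    bins while preserving rank order of the levels."""
    levels = np.unique(img)              # the occupied (non-zero) bins
    if len(levels) == 1:
        return np.zeros_like(img)
    targets = np.linspace(0, out_max, len(levels))
    # rank-preserving lookup: each level maps to its spread target
    return targets[np.searchsorted(levels, img)]
```

    Because the mapping is monotone in the grey level, the rank consistency the paper emphasizes is preserved by construction.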

  4. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    PubMed

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  5. Constrained tracking control for nonlinear systems.

    PubMed

    Khani, Fatemeh; Haeri, Mohammad

    2017-09-01

    This paper proposes a tracking control strategy for nonlinear systems that does not require prior knowledge of the reference trajectory. The proposed method consists of a set of local controllers with appropriate overlaps in their stability regions, and an on-line switching strategy that implements these controllers and uses augmented intermediate controllers to steer the system states to the desired set points, without redesigning the controller for each set-point change. The proposed approach provides smooth transient responses despite switching among the local controllers. The stability regions of the proposed controllers can be estimated off-line for a range of set-point changes. The efficiency of the proposed algorithm is illustrated via two simulation examples. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    NASA Astrophysics Data System (ADS)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function from biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When local-approach methods are used to solve this problem, a method for processing the preliminary results is usually needed. This paper proposes a novel preliminary-results processing method called the nodes interaction method, which revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. In its first phase, the method exploits label dependency and considers the hierarchical interaction between nodes when making decisions based on a Bayesian network; in the second phase, it further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also improves HMC performance on the gene function prediction problem based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and is therefore more difficult to handle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated with the GO.
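
    The second-phase hierarchy-constraint adjustment can be sketched as capping each node's score by its ancestors' scores (one common enforcement policy, shown here as a minimal sketch; the function names are illustrative and the paper's first, Bayesian-network phase is not shown):

```python
def _depth(node, parents):
    """Depth of a node in the DAG, so parents are processed first."""
    ps = parents.get(node, [])
    return 0 if not ps else 1 + max(_depth(p, parents) for p in ps)

def enforce_hierarchy(scores, parents):
    """Cap each node's score by its parents' (already adjusted)
    scores, so a child term is never predicted more confidently
    than any of its ancestors: the hierarchy constraint."""
    order = sorted(scores, key=lambda n: _depth(n, parents))
    out = dict(scores)
    for node in order:
        for p in parents.get(node, []):
            out[node] = min(out[node], out[p])
    return out

# usage: a DAG where c has two parents, one of which caps its score
parents = {'b': ['a'], 'c': ['a', 'b']}
fixed = enforce_hierarchy({'a': 0.6, 'b': 0.9, 'c': 0.8}, parents)
```

    Processing nodes in depth order guarantees that every cap has already been propagated from the roots down when a child is visited.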

  7. A structural topological optimization method for multi-displacement constraints and any initial topology configuration

    NASA Astrophysics Data System (ADS)

    Rong, J. H.; Yi, J. H.

    2010-10-01

    In density-based topological design, one expects the final result to consist of elements that are either black (solid material) or white (void), without any grey areas. Moreover, one also expects that the optimal topology can be obtained from any initial topology configuration. An improved structural topological optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase-transferring step. Firstly, an optimization model is built to deal with the varied displacement limits, design space adjustments, and reasonable relations between an element's stiffness matrix, its mass, and its topology variable. Secondly, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, starting with a small design space and advancing to a larger design space. The design space adjustments are automatic whenever the design domain needs expansion, without affecting the convergence of the proposed method. The topology obtained by the proposed procedure in the first optimization phase can approach the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve efficiency and make the designed structural topology black/white in both the phase-transferring step and the second optimization adjustment phase, from which the optimum topology is finally obtained. Two examples show that the topologies obtained by the proposed method have a very good black/white (0/1) distribution, and that computational efficiency is enhanced by reducing the number of elements in the structural finite element model during the two optimization adjustment phases. The examples also show that this method is robust and practicable.

  8. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of Lasso under long-range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution, and then show its asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed; here d is the memory parameter of the stationary error sequence. The performance of Lasso in the present setup is also analysed with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimensional regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions.
We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical estimation error that hold with asymptotic probability 1, thereby establishing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study investigating the finite sample accuracy of the proposed estimator is also included in this chapter.
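    A minimal numpy sketch of Lasso via cyclic coordinate descent, applied to a toy regression whose errors form a (short) moving-average process; the chapter's long-memory setting and theoretical bounds are of course beyond a toy, and all numeric values below are illustrative assumptions:

```python
# Lasso by cyclic coordinate descent with soft-thresholding, on a toy model
# y = 3*x0 - 2*x1 + MA(2) errors; the true support is {0, 1}.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize 0.5*||y - X b||^2 + lam*n*||b||_1 by coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]       # partial residual
            beta[j] = soft_threshold(X[:, j] @ r, lam * n) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
eps = rng.standard_normal(n + 3)
ma_err = eps[3:] + 0.5 * eps[2:-1] + 0.25 * eps[1:-2]  # MA(2) error process
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + ma_err
beta = lasso_cd(X, y, lam=0.15)
```

With a strong signal, the estimated support matches the true one while the penalty shrinks the active coefficients slightly toward zero.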

  9. Preprocessing method to correct illumination pattern in sinusoidal-based structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Shabani, H.; Doblas, A.; Saavedra, G.; Preza, C.

    2018-02-01

    The restored images in structured illumination microscopy (SIM) can be affected by residual fringes due to a mismatch between the illumination pattern and the sinusoidal model assumed by the restoration method. When a Fresnel biprism is used to generate a structured pattern, the pattern cannot be described by a pure sinusoidal function, since it is distorted by an envelope caused by the biprism's edge. In this contribution, we investigate the effect of the envelope on the restored SIM images and propose a computational method to address it. The proposed approach consists of two parts. First, the envelope of the structured pattern, determined from calibration data, is removed from the raw SIM data in a preprocessing step. Second, a notch filter is applied to the images, restored using the well-known generalized Wiener filter, to remove any residual undesired fringes. The performance of our approach has been evaluated numerically by simulating the effect of the envelope on synthetic forward images of a 6-μm spherical bead generated using the real pattern; the images were then restored, before and after the proposed correction, using the SIM approach based on an ideal pure sinusoidal function. The simulation results show a 74% reduction in the contrast of the residual pattern when the proposed method is applied. Experimental results from a pollen grain sample also validate the proposed approach.
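    The two-step correction can be illustrated in one dimension (not the authors' code; envelope and fringe frequency below are assumed): divide out a calibrated envelope, then notch-filter the residual fringe in the Fourier domain:

```python
# 1-D sketch: step 1 removes an assumed calibration envelope, step 2 notches
# the known fringe frequency so only the underlying signal (here, a constant)
# survives.
import numpy as np

n = 512
x = np.arange(n)
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * x / n)        # assumed calibration
fringe_freq = 32                                         # cycles per record
raw = envelope * (1.0 + np.cos(2 * np.pi * fringe_freq * x / n))

flattened = raw / envelope          # step 1: envelope removal (preprocessing)
spec = np.fft.fft(flattened)
spec[fringe_freq] = 0.0             # step 2: notch the residual fringe
spec[n - fringe_freq] = 0.0         # ...and its conjugate bin
clean = np.fft.ifft(spec).real
```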

  10. 3D reconstruction from non-uniform point clouds via local hierarchical clustering

    NASA Astrophysics Data System (ADS)

    Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo

    2017-07-01

    Raw scanned 3D point clouds are usually irregularly distributed due to essential shortcomings of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity, and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
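    A simplified stand-in for the two-step pipeline: decompose space with a uniform voxel grid (in place of the paper's adaptive octree) and collapse each cell's points to a centroid (in place of hierarchical clustering), evening out the point density:

```python
# Collapse an irregular point cloud onto per-cell centroids. An over-sampled
# region contributes one representative point per cell, like a sparse region.
import numpy as np

def uniformize(points, cell=0.5):
    keys = np.floor(points / cell).astype(int)      # voxel index per point
    cells = {}
    for k, p in zip(map(tuple, keys), points):
        cells.setdefault(k, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])

rng = np.random.default_rng(1)
dense = rng.uniform(0, 0.5, size=(100, 3))   # 100 points in one cell
sparse = np.array([[2.1, 2.1, 2.1]])         # a lone distant point
out = uniformize(np.vstack([dense, sparse]))
```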

  11. a New Model for Fuzzy Personalized Route Planning Using Fuzzy Linguistic Preference Relation

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Houshyaripour, A. H.

    2017-09-01

    This paper proposes a new model for personalized route planning under uncertain conditions. Personalized routing involves different sources of uncertainty, which can arise from users' ambiguity about their preferences, imprecise criteria values, and the modelling process. The proposed model uses Fuzzy Linguistic Preference Relation Analytical Hierarchical Process (FLPRAHP) to analyse users' preferences under uncertainty. Routing is a multi-criteria task, especially in transportation networks, where users wish to optimize their routes based on different criteria. However, due to the lack of knowledge about the preferences of different users and the uncertainties in the criteria values, we propose a new personalized fuzzy routing method based on fuzzy ranking using the center of gravity. The model employs the FLPRAHP method to aggregate uncertain criteria values with respect to uncertain user preferences while improving consistency with the fewest possible comparisons. An illustrative example demonstrates the effectiveness and capability of the proposed model to calculate the best personalized route under fuzziness and uncertainty.
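    The final ranking step mentioned above can be sketched with triangular fuzzy numbers, whose center of gravity (COG) is (a + b + c) / 3; the FLPRAHP aggregation itself is not reproduced, and the route costs below are hypothetical:

```python
# Rank alternative routes by the COG of their triangular fuzzy costs (a, b, c);
# the route with the lowest defuzzified cost wins.
def cog(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

routes = {"R1": (4, 6, 8), "R2": (3, 5, 10), "R3": (2, 4, 6)}
best = min(routes, key=lambda r: cog(routes[r]))
```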

  12. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte-Carlo simulation (MCS) method is considered the most reliable method for estimating reliability probabilities. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) with a wavelet kernel function, and is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
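    A minimal Monte-Carlo simulation sketch of the reliability step: estimate a failure probability P[g(X) < 0] for a toy limit-state function. In the paper, it is this expensive repeated evaluation that the WWLS-SVM metamodel replaces; the distributions below are assumptions for illustration:

```python
# Toy MCS reliability estimate: capacity R ~ N(10, 1), demand S ~ N(6, 1.5),
# limit state g = R - S, failure when g < 0 (true probability is about 1.3%).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
resistance = rng.normal(10.0, 1.0, n)   # assumed capacity distribution
load = rng.normal(6.0, 1.5, n)          # assumed demand distribution
g = resistance - load                   # limit-state function
p_fail = np.mean(g < 0.0)               # Monte-Carlo failure probability
```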

  13. Accumulating pyramid spatial-spectral collaborative coding divergence for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Zou, Huanxin; Zhou, Shilin

    2016-03-01

    Detection of anomalous targets of various sizes in hyperspectral data has received a lot of attention in reconnaissance and surveillance applications, and many anomaly detectors have been proposed in the literature. However, current methods are susceptible to anomalies within the processing window and often make critical assumptions about the distribution of the background data. Motivated by the fact that anomalous pixels are often distinctive from their local background, in this letter we propose a novel hyperspectral anomaly detection framework for real-time remote sensing applications. The proposed framework consists of four major components: sparse feature learning, pyramid grid window selection, joint spatial-spectral collaborative coding, and multi-level divergence fusion. It exploits the collaborative representation difference in the feature space to locate potential anomalies and is fully unsupervised, without any prior assumptions. Experimental results on airborne hyperspectral data demonstrate that the proposed method adapts to anomalies over a large range of sizes and is well suited for parallel processing.
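    The collaborative-representation idea can be sketched as follows (the paper's pyramid windows, learned sparse features, and divergence fusion are omitted): regress each pixel, with ridge regularization, on a local background dictionary and use the reconstruction residual as the anomaly score:

```python
# Collaborative-representation anomaly score: pixels well explained by the
# background dictionary get a small residual; spectrally distinct pixels do not.
import numpy as np

def cr_score(pixel, background, lam=0.1):
    """pixel: (bands,); background: (bands, n_samples) dictionary."""
    B = background
    alpha = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ pixel)
    return np.linalg.norm(pixel - B @ alpha)

rng = np.random.default_rng(0)
bg = rng.normal(0.5, 0.05, size=(50, 30))      # 50 bands, 30 background pixels
normal_px = bg.mean(axis=1)                    # lies in the background span
anomaly_px = normal_px + rng.standard_normal(50)
score_normal = cr_score(normal_px, bg)
score_anomaly = cr_score(anomaly_px, bg)
```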

  14. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    The fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace, and other civilian areas. In some applications, a FOG experiences environmental conditions such as vacuum, radiation, and vibration, and the scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, as a turntable cannot operate under such conditions. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the physical effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system is constituted by an external operational amplifier circuit and a FOG in which the modulation signal and the Y waveguide are disconnected. The external operational amplifier circuit superimposes the externally generated sawtooth voltage signal on the modulation signal of the FOG and applies the superimposed signal to the Y waveguide. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal during the scale-factor performance test. In this paper, the system model of a FOG superimposed with an externally generated sawtooth signal is analyzed, and it is concluded that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as the input angular velocity produced by the turntable. 
The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error. A comparative experiment between the method proposed in this paper and turntable calibration was conducted, and the scale-factor performance test results of the same FOG using the two methods were consistent. With the proposed method, the input angular velocity is the equivalent effect produced by a sawtooth voltage signal and no turntable is needed to produce mechanical rotation, so the method can be used to test the performance of a FOG under environmental conditions in which a turntable cannot operate.

  15. Disease gene classification with metagraph representations.

    PubMed

    Kircali Ata, Sezin; Fang, Yuan; Wu, Min; Li, Xiao-Li; Xiao, Xiaokui

    2017-12-01

    Protein-protein interaction (PPI) networks play an important role in studying the functional roles of proteins, including their association with diseases. However, protein interaction networks are not sufficient without the support of additional biological knowledge about proteins, such as their molecular functions and biological processes. To complement and enrich PPI networks, we propose to exploit the biological properties of individual proteins. More specifically, we integrate keywords describing protein properties into the PPI network and construct a novel PPI-Keywords (PPIK) network consisting of both proteins and keywords as two different types of nodes. As disease proteins tend to have similar topological characteristics in the PPIK network, we further propose to represent proteins with metagraphs. Unlike a traditional network motif or subgraph, a metagraph can capture a particular topological arrangement involving the interactions/associations between both proteins and keywords. Based on the novel metagraph representations for proteins, we build classifiers for disease protein classification through supervised learning. Our experiments on three different PPI databases demonstrate that the proposed method consistently improves disease protein prediction across various classifiers, by 15.3% in AUC on average. It outperforms baselines including diffusion-based methods (e.g., RWR) and module-based methods by 13.8-32.9% for overall disease protein prediction. For predicting breast cancer genes, it outperforms RWR, PRINCE and the module-based baselines by 6.6-14.2%. Finally, our predictions also turn out to have better correlations with literature findings from PubMed. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Three-dimensional cascaded lattice Boltzmann method: Improved implementation and consistent forcing scheme

    NASA Astrophysics Data System (ADS)

    Fei, Linlin; Luo, Kai H.; Li, Qing

    2018-05-01

    The cascaded or central-moment-based lattice Boltzmann method (CLBM) proposed in [Phys. Rev. E 73, 066705 (2006), 10.1103/PhysRevE.73.066705] possesses very good numerical stability. However, two constraints exist in three-dimensional (3D) CLBM simulations. First, the conventional implementation of 3D CLBM involves cumbersome operations and requires much higher computational cost than the single-relaxation-time (SRT) LBM. Second, it is a challenge to accurately incorporate a general force field into 3D CLBM. In this paper, we present an improved method to implement CLBM in 3D. The main strategy is to adopt a simplified central moment set and carry out the central-moment-based collision operator within a general multi-relaxation-time (GMRT) framework. Next, the recently proposed consistent forcing scheme for CLBM [Fei and Luo, Phys. Rev. E 96, 053307 (2017), 10.1103/PhysRevE.96.053307] is extended to incorporate a general force field into 3D CLBM. Compared with the recently developed nonorthogonal CLBM [Rosis, Phys. Rev. E 95, 013310 (2017), 10.1103/PhysRevE.95.013310], our implementation is shown to reduce the computational cost significantly. The inconsistency of adopting the discrete equilibrium distribution functions in the nonorthogonal CLBM is analyzed and validated. The 3D CLBM developed here, in conjunction with the consistent forcing scheme, is verified through numerical simulations of several canonical force-driven flows, highlighting very good properties in terms of accuracy, convergence, and consistency with the no-slip rule. Finally, the techniques developed here for 3D CLBM can be applied to make the implementation and execution of 3D MRT-LBM more efficient.

  17. Technical Note: A Feasibility Study of Using the Flat Panel Detector on Linac for the kV X-ray Generator Test.

    PubMed

    Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua

    2018-04-28

    To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance. An in-house designed aluminum (Al) array phantom with six 9×9 cm² square regions of various thicknesses was proposed and used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics, including tube voltage accuracy, exposure accuracy, and exposure linearity, was assessed through image quality metrics including ROI mean intensity, ROI standard deviation (SD), and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. Exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. Exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We propose a novel approach that uses the kV flat panel detector available on a linac for X-ray generator testing. This approach eliminates the inefficiencies and variability associated with using third-party QA detectors while enabling an automated process. This article is protected by copyright. All rights reserved.
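    The exposure-linearity metric can be sketched directly: normalize the ROI mean signal by the mAs setting and report the coefficient of variation across settings, against the abstract's CV tolerance of 0.1. The ROI intensities below are hypothetical:

```python
# Exposure linearity check: signal per mAs should be constant for a linear
# generator, so its coefficient of variation (CV) measures nonlinearity.
import numpy as np

mAs = np.array([1.0, 2.0, 4.0])
roi_mean = np.array([101.0, 198.0, 405.0])   # hypothetical ROI mean intensities
signal_per_mAs = roi_mean / mAs
cv = signal_per_mAs.std() / signal_per_mAs.mean()
linearity_ok = cv <= 0.1                      # tolerance from the abstract
```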

  18. Influence of physicochemical properties of rice flour on oil uptake of tempura frying batter.

    PubMed

    Nakamura, Sumiko; Ohtsubo, Ken'ichi

    2010-01-01

    The physicochemical properties of rice flour and wheat flour influenced the oil uptake of tempura frying batter. Rice flour was better than wheat flour in the overall quality and crispness of the fried tempura batter. Rice flour resisted oil absorption more than wheat flour, and a higher apparent starch amylose content and a higher consistency/breakdown ratio of the pasting properties led to a lower oil uptake of the batter. Super-hard EM10 rice showed the highest apparent amylose content and a higher consistency/breakdown ratio than the other flour samples, and the batter from EM10 revealed the lowest oil content after frying among all the batters examined. The apparent amylose content, consistency/breakdown ratio, and oil absorption index are proposed as useful guides to oil absorption during frying from among the physicochemical properties that influence the oil content of fried batter. Our proposed "oil absorption index" could be a simple, although not perfect, method for estimating the oil content of batter flour.

  19. Self-consistent theory of nanodomain formation on non-polar surfaces of ferroelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozovska, Anna N.; Obukhovskii, Vyacheslav; Fomichov, Evhen

    2016-04-28

    We propose a self-consistent theoretical approach capable of describing the features of the anisotropic nanodomain formation induced by a strongly inhomogeneous electric field of a charged scanning probe microscopy tip on nonpolar cuts of ferroelectrics. We obtained that a threshold field, previously regarded as an isotropic parameter, is an anisotropic function that is specified from the polar properties and lattice pinning anisotropy of a given ferroelectric in a self-consistent way. The proposed method for the calculation of the anisotropic threshold field is not material specific, thus the field should be anisotropic in all ferroelectrics with spontaneous polarization anisotropy along the main crystallographic directions. The most evident examples are uniaxial ferroelectrics, layered ferroelectric perovskites, and low-symmetry incommensurate ferroelectrics. The obtained results quantitatively describe the severalfold differences in nanodomain length experimentally observed on X and Y cuts of LiNbO3 and can give insight into the anisotropic dynamics of nanoscale polarization reversal in strongly inhomogeneous electric fields.

  20. Discontinuous Finite Element Quasidiffusion Methods

    DOE PAGES

    Anistratov, Dmitriy Yurievich; Warsa, James S.

    2018-05-21

    Here in this paper, two-level methods for solving transport problems in one-dimensional slab geometry based on the quasi-diffusion (QD) method are developed. A linear discontinuous finite element method (LDFEM) is derived for the spatial discretization of the low-order QD (LOQD) equations. It involves special interface conditions at the cell edges based on the idea of QD boundary conditions (BCs). We consider different kinds of QD BCs to formulate the necessary cell-interface conditions. We develop two-level methods with independent discretization of the high-order transport equation and LOQD equations, where the transport equation is discretized using the method of characteristics and the LDFEM is applied to the LOQD equations. We also formulate closures that lead to a discretization consistent with an LDFEM discretization of the transport equation. The proposed methods are studied by means of test problems formulated with the method of manufactured solutions. Numerical experiments are presented demonstrating the performance of the proposed methods. Lastly, we also show that the method with independent discretization has the asymptotic diffusion limit.

  2. Model-based estimation and control for off-axis parabolic mirror alignment

    NASA Astrophysics Data System (ADS)

    Fang, Joyce; Savransky, Dmitry

    2018-02-01

    This paper proposes a model-based estimation and control method for off-axis parabolic mirror (OAP) alignment. Current automated optical alignment systems typically require additional wavefront sensors. We propose a self-aligning method using only focal plane images captured by the existing camera. Image processing methods and Karhunen-Loève (K-L) decomposition are used to extract measurements for the observer in the closed-loop control system. Our system has linear dynamics in the state transition and a nonlinear mapping from the state to the measurement. An iterative extended Kalman filter (IEKF) is shown to accurately predict the unknown states, and nonlinear observability is discussed. A linear-quadratic regulator (LQR) is applied to correct the misalignments. The method is validated experimentally on an optical bench with a commercial OAP. We conducted 100 tests in the experiment to demonstrate consistency between runs.

  3. Bias correction of nutritional status estimates when reported age is used for calculating WHO indicators in children under five years of age.

    PubMed

    Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia

    2016-06-01

    To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
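    The correction itself is simple to sketch: under the symmetry assumption, a child reported at m completed months has an expected exact age at the midpoint of that month, so m + 0.5 months serves as the stand-in age for computing WHO indicators. The days-per-month constant below is the usual 365.25/12 convention, assumed for illustration:

```python
# Midpoint-of-month age correction: reported completed months -> expected
# exact age in days, under the within-month symmetry assumption.
DAYS_PER_MONTH = 365.25 / 12  # = 30.4375, a common convention (assumption)

def corrected_age_days(reported_months):
    """Expected exact age, in days, for a child reported at m completed months."""
    return (reported_months + 0.5) * DAYS_PER_MONTH

age = corrected_age_days(24)   # child reported as 24 completed months
```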

  4. Automatic extraction of blocks from 3D point clouds of fractured rock

    NASA Astrophysics Data System (ADS)

    Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen

    2017-12-01

    This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved Ransac Shape Detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved Floodfill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
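    Step 2 of the pipeline (computing discontinuity intersections) reduces to elementary plane geometry: three planes n_i · x = d_i meet in a single corner point when their normals are linearly independent, which is a 3×3 linear solve. A minimal sketch with hypothetical plane parameters:

```python
# Corner of three discontinuity planes: solve N x = d, where the rows of N
# are the plane normals and d holds the plane offsets.
import numpy as np

def plane_corner(normals, offsets):
    """normals: (3, 3) rows n_i; offsets: (3,) d_i; returns the corner point."""
    return np.linalg.solve(np.asarray(normals, float), np.asarray(offsets, float))

# Hypothetical discontinuity planes x = 1, y = 2, z = 3:
corner = plane_corner([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 2, 3])
```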

  5. A method to eliminate the influence of incident light variations in spectral analysis

    NASA Astrophysics Data System (ADS)

    Luo, Yongshun; Li, Gang; Fu, Zhigang; Guan, Yang; Zhang, Shengzhao; Lin, Ling

    2018-06-01

    The intensity of the light source and consistency of the spectrum are the most important factors influencing the accuracy in quantitative spectrometric analysis. An efficient "measuring in layer" method was proposed in this paper to limit the influence of inconsistencies in the intensity and spectrum of the light source. In order to verify the effectiveness of this method, a light source with a variable intensity and spectrum was designed according to Planck's law and Wien's displacement law. Intra-lipid samples with 12 different concentrations were prepared and divided into modeling sets and prediction sets according to different incident lights and solution concentrations. The spectra of each sample were measured with five different light intensities. The experimental results showed that the proposed method was effective in eliminating the influence caused by incident light changes and was more effective than normalized processing.

  6. Exact stochastic unraveling of an optical coherence dynamics by cumulant expansion

    NASA Astrophysics Data System (ADS)

    Olšina, Jan; Kramer, Tobias; Kreisbeck, Christoph; Mančal, Tomáš

    2014-10-01

    A numerically exact Monte Carlo scheme for calculation of open quantum system dynamics is proposed and implemented. The method consists of a Monte Carlo summation of a perturbation expansion in terms of trajectories in Liouville phase-space with respect to the coupling between the excited states of the molecule. The trajectories are weighted by a complex decoherence factor based on the second-order cumulant expansion of the environmental evolution. The method can be used with an arbitrary environment characterized by a general correlation function and arbitrary coupling strength. It is formally exact for harmonic environments, and it can be used with arbitrary temperature. Time evolution of an optically excited Frenkel exciton dimer representing a molecular exciton interacting with a charge transfer state is calculated by the proposed method. We calculate the evolution of the optical coherence elements of the density matrix and linear absorption spectrum, and compare them with the predictions of standard simulation methods.

  7. Application of 2D graphic representation of protein sequence based on Huffman tree method.

    PubMed

    Qi, Zhao-Hui; Feng, Jun; Qi, Xiao-Qin; Li, Ling

    2012-05-01

    Based on the Huffman tree method, we propose a new 2D graphic representation of protein sequences. This representation completely avoids loss of information in the transfer of data from a protein sequence to its graphic representation. The method consists of two parts. The first assigns 0-1 codes to the 20 amino acids via a Huffman tree built from amino acid frequencies, where the frequency of an amino acid is its statistical count in the analyzed protein sequences. The second constructs the 2D graphic representation of a protein sequence from these 0-1 codes. Applications of the method to ten ND5 genes and seven Escherichia coli strains are then presented in detail. The results show that the proposed model may provide new insights for understanding the evolution patterns determined from protein sequences and complete genomes. Copyright © 2012 Elsevier Ltd. All rights reserved.
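    The first part of the method can be sketched with the standard Huffman construction (a toy 4-letter amino acid alphabet and made-up frequencies are used here in place of all 20):

```python
# Build Huffman 0-1 codes from symbol frequencies, then encode a sequence.
# Frequent symbols receive short codes; the code is prefix-free.
import heapq
from itertools import count

def huffman_codes(freqs):
    tick = count()  # tie-breaker keeps heap tuples comparable
    heap = [(f, next(tick), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse 0/1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"      # leaf: record its code
    walk(heap[0][2], "")
    return codes

# Toy alphabet with assumed frequencies (the paper uses all 20 amino acids):
codes = huffman_codes({"A": 0.4, "L": 0.3, "G": 0.2, "W": 0.1})
encoded = "".join(codes[aa] for aa in "ALGA")
```

The 0-1 string produced this way is what the second part of the method maps to a 2D walk.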

  8. Automatic Train Operation Using Autonomic Prediction of Train Runs

    NASA Astrophysics Data System (ADS)

    Asuka, Masashi; Kataoka, Kenji; Komaya, Kiyotoshi; Nishida, Syogo

    In this paper, we present an automatic train control method adaptable to disturbed train traffic conditions. The proposed method presumes that the detected time of home track clearance is transmitted to trains approaching the station via Digital ATC (Automatic Train Control) equipment. Using this information, each train controls its acceleration by a method consisting of two approaches. First, by setting a designated restricted speed, the train controls its running time so as to arrive at the next station in accordance with the predicted delay. Second, the train predicts the time at which it will reach the current braking pattern generated by Digital ATC, along with the time at which that braking pattern will move ahead. By comparing the two, the train chooses the coasting drive mode in advance so as to avoid deceleration caused by the current braking pattern. We evaluated the effectiveness of the proposed method by simulation, in terms of driving conditions, energy consumption, and delay reduction.
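The second approach boils down to comparing two predicted times. A deliberately simplified sketch of that decision (the rule and names are our illustration, not the paper's algorithm):

```python
# Illustrative sketch: choose the drive mode by comparing the predicted
# time to reach the current braking pattern with the predicted time at
# which the pattern will move ahead. Times are hypothetical.
def choose_drive_mode(t_reach_pattern, t_pattern_advance):
    """If the train would reach the braking pattern before the pattern
    advances, coast early so the pattern never forces a deceleration."""
    if t_reach_pattern <= t_pattern_advance:
        return "coasting"
    return "powering"

mode = choose_drive_mode(42.0, 55.0)  # pattern still in place when reached
```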

  9. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify requirement errors at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose the automatic verification of consistency for a series of UML diagrams derived from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Our method can therefore be used to check the practicability (feasibility) of software architecture models.

  10. A Compact Immunoassay Platform Based on a Multicapillary Glass Plate

    PubMed Central

    Xue, Shuhua; Zeng, Hulie; Yang, Jianmin; Nakajima, Hizuru; Uchiyama, Katsumi

    2014-01-01

    A highly sensitive, rapid immunoassay performed in the multiple channels of a micro-well array consisting of a multicapillary glass plate (MCP) and a polydimethylsiloxane (PDMS) slide is described. The micro-dimensions and large surface area of the MCP allowed the diffusion distance to be decreased and the reaction efficiency to be increased. As a proof of concept, human immunoglobulin A (h-IgA) was measured using both the proposed immunoassay system and the traditional 96-well plate method. The proposed method reduced the immunoassay time to one-fifth and the reagent consumption to 1/56 of that of the plate method, with a limit of detection (LOD) of 0.05 ng/mL for IgA. The method was also applied to saliva samples obtained from healthy volunteers, and the results correlated well with those obtained by the 96-well plate method. The method has potential for use in disease diagnostics and on-site immunoassays. PMID:24859022

  11. The Determination of Sugars in Dairy Products: Development of a New Standard Method for the International Dairy Federation and the International Organization for Standardization.

    PubMed

    Sanders, Peter; Ernste-Nota, Veronica; Visser, Klaas; van Soest, Jeroen; Brunt, Kommer

    2017-09-01

    A method using high-performance anion-exchange chromatography (HPAEC) with a pulsed amperometric detector (PAD) for the determination of mono- and disaccharides is described. The method was accepted by the International Dairy Federation and the International Organization for Standardization as a new work item for the determination of sugars in dairy matrixes, and the Milk and Milk Products technical committee of ISO/TC 34/SC 5 accepted the topic "Milk and milk products - Determination of the sugar contents - High-performance anion-exchange chromatographic method (HPAEC-PAD)" as a new work item. The proposed method consists of an aqueous ethanol extraction of the sugars in the dairy sample, followed by clarification with Carrez I and II reagents. The clarified filtrate is diluted and then introduced directly into the HPAEC-PAD system for quantification of the sugars. A single-laboratory validation of the proposed method has been scheduled for spring 2017.

  12. A combined experimental-modelling method for the detection and analysis of pollution in coastal zones

    NASA Astrophysics Data System (ADS)

    Limić, Nedzad; Valković, Vladivoj

    1996-04-01

    Pollution of coastal seas with toxic substances can be efficiently detected by examining toxic materials in sediment samples. These samples carry information on the overall pollution from surrounding sources such as yacht anchorages, nearby industries, and sewage systems. An effective analysis of pollution must determine the contribution from each individual source. In this work it is demonstrated that a modelling method can be utilized to solve this latter problem. The modelling method is based on a unique interpretation of concentrations in sediments from all sampling stations. The proposed method combines PIXE, as an efficient technique for determining pollutant concentrations, with the code ANCOPOL (N. Limic and R. Benis, The computer code ANCOPOL, SimTel/msdos/geology, 1994 [1]) for calculating the contributions of the main polluters. The efficiency and limits of the proposed method are demonstrated by discussing trace element concentrations in sediments of Punat Bay on the island of Krk in Croatia.

  13. Lung lobe segmentation based on statistical atlas and graph cuts

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a novel method that extracts lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes, and precise segmentation and recognition of the lobes are indispensable tasks in computer aided diagnosis and computer aided surgery systems. Many methods for lung lobe segmentation have been proposed; however, they target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To handle such cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six cases of chest CT images, including COPD cases, and obtained a Jaccard index of 79.1%.

  14. A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.

    PubMed

    Ravi, Daniele; Wong, Charence; Lo, Benny; Yang, Guang-Zhong

    2017-01-01

    The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis, and deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology that combines features learned from inertial sensor data with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed to the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real-world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
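The spectral-domain preprocessing step can be sketched as reducing each window of inertial samples to a few DFT magnitude features before classification; the window length and number of coefficients below are assumptions, not the paper's settings:

```python
# Minimal sketch of spectral preprocessing for on-node activity
# classification: one sensor window -> magnitudes of its first DFT bins.
import cmath
import math

def spectral_features(window, n_coeffs=8):
    """Magnitudes of the first n_coeffs DFT bins of one sensor window."""
    n = len(window)
    feats = []
    for k in range(n_coeffs):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(window))
        feats.append(abs(s) / n)
    return feats

# A sinusoid with 2 cycles per window concentrates its energy in bin 2.
window = [math.sin(2 * math.pi * 2 * i / 64) for i in range(64)]
feats = spectral_features(window)
```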

  15. A method for improved accuracy in three dimensions for determining wheel/rail contact points

    NASA Astrophysics Data System (ADS)

    Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang

    2015-11-01

    Searching for the contact points between wheels and rails is important because these are the points at which contact forces are exerted. In order to obtain accurate contact points and an in-depth description of wheel/rail contact behaviour on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and contact patches between the wheel and the rail, considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, which requires no curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve computational efficiency and reduce the calculation required. The method is applied to analyze the contact between CHN 75 kg/m rails and the wearing-type wheel treads of China's freight cars. The results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, with a maximum deviation in the wheel/rail contact patch area between the two methods of approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, including both non-worn and worn wheel and rail profiles.

  16. A comparative study of progressive versus successive spectrophotometric resolution techniques applied for pharmaceutical ternary mixtures.

    PubMed

    Saleh, Sarah S; Lotfy, Hayam M; Hassan, Nagiba Y; Salem, Hesham

    2014-11-11

    This work presents a comparative study of a novel progressive spectrophotometric resolution technique, the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques: successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR), and mean centering of ratio spectra (MCR). All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using a single divisor, where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. These methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied to the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines, and a comparative study was conducted between them regarding simplicity, limitations, and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Automatic Microaneurysms Detection Based on Multifeature Fusion Dictionary Learning

    PubMed Central

    Wang, Zhenzhu; Du, Wenyou

    2017-01-01

    Recently, microaneurysm (MA) detection has attracted a lot of attention in the medical image processing community. Since MAs can be seen as the earliest lesions in diabetic retinopathy, their detection plays a critical role in diabetic retinopathy diagnosis. In this paper, we propose a novel MA detection approach named multifeature fusion dictionary learning (MFFDL). The proposed method consists of four steps: preprocessing, candidate extraction, multifeature dictionary learning, and classification. The novelty of our proposed approach lies in incorporating the semantic relationships among multifeatures and dictionary learning into a unified framework for automatic detection of MAs. We evaluate the proposed algorithm by comparing it with the state-of-the-art approaches and the experimental results validate the effectiveness of our algorithm. PMID:28421125

  18. Automatic Microaneurysms Detection Based on Multifeature Fusion Dictionary Learning.

    PubMed

    Zhou, Wei; Wu, Chengdong; Chen, Dali; Wang, Zhenzhu; Yi, Yugen; Du, Wenyou

    2017-01-01

    Recently, microaneurysm (MA) detection has attracted a lot of attention in the medical image processing community. Since MAs can be seen as the earliest lesions in diabetic retinopathy, their detection plays a critical role in diabetic retinopathy diagnosis. In this paper, we propose a novel MA detection approach named multifeature fusion dictionary learning (MFFDL). The proposed method consists of four steps: preprocessing, candidate extraction, multifeature dictionary learning, and classification. The novelty of our proposed approach lies in incorporating the semantic relationships among multifeatures and dictionary learning into a unified framework for automatic detection of MAs. We evaluate the proposed algorithm by comparing it with the state-of-the-art approaches and the experimental results validate the effectiveness of our algorithm.

  19. A modified priority list-based MILP method for solving large-scale unit commitment problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Xinda; Lu, Ning; Wu, Di

    This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. In the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. In the second step, for the generators whose on/off status is predetermined, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up the UC problem with only a minor compromise in optimality when appropriate scale factors are selected.
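The first step can be sketched as follows; the threshold rule, the margin, and all numbers are hypothetical illustrations of the idea, not the authors' exact scale factors:

```python
# Illustrative sketch: fix the on/off status of generators whose commitment
# is obvious from the demand range and the priority (merit) order, so their
# binary variables can be dropped from the MILP. gens are (name, capacity_MW)
# in priority order, cheapest first. All values are made up.
def predetermine_status(gens, peak_demand, min_demand, margin=0.1):
    """Return {name: True (always on) / False (always off) / None (solver)}."""
    status, cum_cap = {}, 0.0
    for name, cap in gens:
        if cum_cap + cap <= min_demand * (1 - margin):
            status[name] = True    # needed even at minimum load
        elif cum_cap >= peak_demand * (1 + margin):
            status[name] = False   # never needed, even at peak
        else:
            status[name] = None    # left to the MILP solver
        cum_cap += cap
    return status

gens = [("G1", 400), ("G2", 300), ("G3", 200), ("G4", 150), ("G5", 100)]
status = predetermine_status(gens, peak_demand=800, min_demand=500)
```

Only the `None` generators keep binary variables in the MILP; tightening or loosening the margin plays the role of the paper's scale factors, trading solution speed against optimality.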

  20. Prediction of Heterodimeric Protein Complexes from Weighted Protein-Protein Interaction Networks Using Novel Features and Kernel Functions

    PubMed Central

    Ruan, Peiying; Hayashida, Morihiro; Maruyama, Osamu; Akutsu, Tatsuya

    2013-01-01

    Since many proteins express their functional activity by interacting with other proteins and forming protein complexes, it is very useful to identify the sets of proteins that form complexes. For that purpose, many methods for predicting protein complexes from protein-protein interactions have been developed, such as MCL, MCODE, RNSC, PCP, RRW, and NWE. These methods have dealt only with complexes of size greater than three, because they are often based on some density measure of subgraphs. However, heterodimeric protein complexes, which consist of two distinct proteins, occupy a large fraction of the entries in several comprehensive databases of known complexes. In this paper, we propose several feature space mappings from protein-protein interaction data, in which each interaction is weighted based on its reliability. Furthermore, we make use of prior knowledge on protein domains to develop a domain composition kernel and its combination kernel with our proposed features. Ten-fold cross-validation experiments suggest that our proposed kernel considerably outperforms the naive Bayes-based method, which is the best existing method for predicting heterodimeric protein complexes. PMID:23776458

  1. A new measurement method of actual focal spot position of an x-ray tube using a high-precision carbon-interspaced grid

    NASA Astrophysics Data System (ADS)

    Lee, H. W.; Lim, H. W.; Jeon, D. H.; Park, C. K.; Cho, H. S.; Seo, C. W.; Lee, D. Y.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Woo, T. H.; Oh, J. E.

    2018-06-01

    This study investigated the effectiveness of a new method for measuring the actual focal spot position of a diagnostic x-ray tube using a high-precision antiscatter grid and a digital x-ray detector, in which the grid magnification, which is directly related to the focal spot position, was determined from the Fourier spectrum of the acquired x-ray image of the grid. A systematic experiment was performed to demonstrate the viability of the proposed measurement method. The hardware system used in the experiment consisted of an x-ray tube run at 50 kVp and 1 mA, a flat-panel detector with a pixel size of 49.5 µm, and a high-precision carbon-interspaced grid with a strip density of 200 lines/inch. The results indicated that the focal spot of the x-ray tube (Jupiter 5000, Oxford Instruments) used in the experiment was located approximately 31.10 mm inside the exit flange, in good agreement with the nominal value of 31.05 mm, demonstrating the viability of the proposed measurement method. The proposed method can thus be utilized for system performance optimization in many x-ray imaging applications.
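The core of the measurement, recovering the grid magnification from the dominant peak of the grid image's Fourier spectrum, can be sketched in one dimension. The pixel pitch and strip density come from the text; the sinusoidal grid model, the simulated magnification, and the brute-force DFT search are our simplifications:

```python
# Hedged 1-D sketch: estimate grid magnification from the dominant
# spatial frequency of a simulated detector profile.
import math

pixel_mm = 0.0495            # detector pixel pitch (49.5 um, from the text)
grid_pitch_mm = 25.4 / 200   # 200 lines/inch carbon-interspaced grid
true_mag = 1.05              # simulated magnification (assumption)

# The grid casts a periodic shadow whose pitch on the detector is
# grid_pitch_mm * true_mag.
n = 512
signal = [math.cos(2 * math.pi * i * pixel_mm / (grid_pitch_mm * true_mag))
          for i in range(n)]

def dominant_bin(sig):
    """Index of the largest-magnitude DFT bin (brute force, DC excluded)."""
    m = len(sig)
    best_k, best = 1, -1.0
    for k in range(1, m // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / m) for i, x in enumerate(sig))
        im = sum(x * math.sin(2 * math.pi * k * i / m) for i, x in enumerate(sig))
        if re * re + im * im > best:
            best_k, best = k, re * re + im * im
    return best_k

k = dominant_bin(signal)
freq_per_mm = k / (n * pixel_mm)                 # grid frequency on the detector
est_mag = 1.0 / (freq_per_mm * grid_pitch_mm)    # image pitch / true grid pitch
```

In the actual method, the focal spot position then follows from the measured magnification via similar triangles on the source-grid-detector geometry.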

  2. Event time analysis of longitudinal neuroimage data.

    PubMed

    Sabuncu, Mert R; Bernal-Rusiel, Jorge L; Reuter, Martin; Greve, Douglas N; Fischl, Bruce

    2014-08-15

    This paper presents a method for the statistical analysis of the associations between longitudinal neuroimaging measurements, e.g., of cortical thickness, and the timing of a clinical event of interest, e.g., disease onset. The proposed approach consists of two steps, the first of which employs a linear mixed effects (LME) model to capture temporal variation in serial imaging data. The second step utilizes the extended Cox regression model to examine the relationship between time-dependent imaging measurements and the timing of the event of interest. We demonstrate the proposed method both for the univariate analysis of image-derived biomarkers, e.g., the volume of a structure of interest, and the exploratory mass-univariate analysis of measurements contained in maps, such as cortical thickness and gray matter density. The mass-univariate method employs a recently developed spatial extension of the LME model. We applied our method to analyze structural measurements computed using FreeSurfer, a widely used brain Magnetic Resonance Image (MRI) analysis software package. We provide a quantitative and objective empirical evaluation of the statistical performance of the proposed method on longitudinal data from subjects suffering from Mild Cognitive Impairment (MCI) at baseline. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. The Development of a Noncontact Letter Input Interface “Fingual” Using Magnetic Dataset

    NASA Astrophysics Data System (ADS)

    Fukushima, Taishi; Miyazaki, Fumio; Nishikawa, Atsushi

    We have developed a new noncontact letter input interface called “Fingual”. Fingual uses a glove fitted with small, inexpensive magnetic sensors. Using the glove, users can input letters by forming finger alphabets, a kind of sign language. The proposed method uses a dataset consisting of magnetic field measurements and the corresponding letter information. In this paper, we present two recognition methods using this dataset: the first uses the Euclidean norm, and the second additionally uses a Gaussian function as a weighting function. We conducted verification experiments on the recognition rate of each method in two situations: one in which subjects used their own dataset, and one in which they used another person's dataset. The proposed method recognized letters at a high rate in both situations, although using one's own dataset performs better than using another person's. Although Fingual requires collecting a magnetic dataset for each letter in advance, it can recognize letters without complicated calculations such as solving inverse problems. This paper presents the results of the recognition experiments and demonstrates the utility of the proposed system “Fingual”.
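The two recognition schemes can be sketched as nearest-template matching; the three-component magnetic vectors below are invented stand-ins for the glove's sensor readings:

```python
# Minimal sketch of the two recognition rules. The dataset (letter ->
# stored magnetic sample vectors) is entirely made up for illustration.
import math

dataset = {
    "A": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.2]],
    "B": [[0.2, 0.8, 0.1], [0.3, 0.7, 0.2]],
    "C": [[0.1, 0.2, 0.9], [0.2, 0.1, 0.8]],
}

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recognize_euclidean(x):
    """Method 1: letter of the nearest stored sample (Euclidean norm)."""
    return min(dataset, key=lambda L: min(dist(x, s) for s in dataset[L]))

def recognize_gaussian(x, sigma=0.5):
    """Method 2: letter with the largest sum of Gaussian weights
    over all of its stored samples."""
    return max(dataset,
               key=lambda L: sum(math.exp(-dist(x, s) ** 2 / (2 * sigma ** 2))
                                 for s in dataset[L]))

reading = [0.85, 0.15, 0.25]   # a noisy measurement near "A"
```

With a single stored sample per letter the two rules coincide; the Gaussian weighting only changes the outcome when several samples per letter are accumulated, as here.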

  4. Electrohydrodynamic assisted droplet alignment for lens fabrication by droplet evaporation

    NASA Astrophysics Data System (ADS)

    Wang, Guangxu; Deng, Jia; Guo, Xing

    2018-04-01

    Lens fabrication by droplet evaporation has attracted considerable attention since the approach is simple and moldless. Droplet position accuracy is a critical parameter in this approach, so it is of great importance to use accurate methods to align the droplet position. In this paper, we propose an electrohydrodynamic (EHD) assisted droplet alignment method. An electrostatic force is induced at the interface between materials to overcome the surface tension and gravity; the deviation of the droplet position from the center region is thereby eliminated and alignment is successfully realized. We demonstrated the capability of the proposed method theoretically and experimentally. First, we built a simulation model coupling the three-phase flow formulations with the EHD equations to study the three-phase flow process in an electric field. The results show that it is the uneven electric field distribution that leads to the relative movement of the droplet. We then conducted experiments to verify the method, and the experimental results are consistent with the numerical simulations. Moreover, we successfully fabricated a crater lens after applying the proposed method. A light-emitting diode module packaged with the fabricated crater lens shows a significant light intensity distribution adjustment compared with a spherical cap lens.

  5. Double inverse-weighted estimation of cumulative treatment effects under nonproportional hazards and dependent censoring.

    PubMed

    Schaubel, Douglas E; Wei, Guanghui

    2011-03-01

    In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty of verifying the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative, as opposed to instantaneous, treatment effect. In many applications, censoring time is not independent of event time. We therefore propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed: the ratio of cumulative hazards, the relative risk, and the difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal, and we study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.

  6. An ensemble approach for large-scale identification of protein-protein interactions using the alignments of multiple sequences

    PubMed Central

    Wang, Lei; You, Zhu-Hong; Chen, Xing; Li, Jian-Qiang; Yan, Xin; Zhang, Wei; Huang, Yu-An

    2017-01-01

    Protein-protein interactions (PPIs) are not only critical components of various biological processes in cells, but also key to understanding the mechanisms leading to healthy and diseased states in organisms. However, it is time-consuming and cost-intensive to identify interactions among proteins using biological experiments. Hence, how to develop more efficient computational methods rapidly became an attractive topic in the post-genomic era. In this paper, we propose a novel method for the inference of protein-protein interactions from protein amino acid sequences alone. Specifically, each amino acid sequence is first transformed into a Position-Specific Scoring Matrix (PSSM) generated by multiple sequence alignment; the Pseudo-PSSM is then used to extract feature descriptors. Finally, an ensemble Rotation Forest (RF) learning system is trained to predict and recognize PPIs based solely on protein sequence features. When the proposed method was applied to three benchmark data sets (Yeast, H. pylori, and an independent dataset) for predicting PPIs, it achieved good average accuracies of 98.38%, 89.75%, and 96.25%, respectively. To further evaluate the prediction performance, we also compared the proposed method with other methods on the same benchmark data sets. The experimental results demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Our method is therefore effective and robust and can be taken as a useful tool for exploring and discovering new relationships between proteins. A web server is made publicly available at the URL http://202.119.201.126:8888/PsePSSM/ for academic use. PMID:28029645

  7. Automatic Synthesis of Panoramic Radiographs from Dental Cone Beam Computed Tomography Data.

    PubMed

    Luo, Ting; Shi, Changrong; Zhao, Xing; Zhao, Yunsong; Xu, Jinqiu

    2016-01-01

    In this paper, we propose an automatic method of synthesizing panoramic radiographs from dental cone beam computed tomography (CBCT) data for directly observing the whole dentition without the superimposition of other structures. This method consists of three major steps. First, the dental arch curve is generated from the maximum intensity projection (MIP) of 3D CBCT data. Then, based on this curve, the long axial curves of the upper and lower teeth are extracted to create a 3D panoramic curved surface describing the whole dentition. Finally, the panoramic radiograph is synthesized by developing this 3D surface. Both open-bite shaped and closed-bite shaped dental CBCT datasets were applied in this study, and the resulting images were analyzed to evaluate the effectiveness of this method. With the proposed method, a single-slice panoramic radiograph can clearly and completely show the whole dentition without the blur and superimposition of other dental structures. Moreover, thickened panoramic radiographs can also be synthesized with increased slice thickness to show more features, such as the mandibular nerve canal. One feature of the proposed method is that it is automatically performed without human intervention. Another feature of the proposed method is that it requires thinner panoramic radiographs to show the whole dentition than those produced by other existing methods, which contributes to the clarity of the anatomical structures, including the enamel, dentine and pulp. In addition, this method can rapidly process common dental CBCT data. The speed and image quality of this method make it an attractive option for observing the whole dentition in a clinical setting.
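The MIP used in the first step simply keeps the brightest voxel along the projection axis. A toy sketch on a nested-list volume (real CBCT data would be a large 3D array):

```python
# Minimal sketch of a maximum intensity projection (MIP) along axis 0.
# The 2x3x4 toy volume is invented; voxel values stand in for intensities.
def mip(volume, axis=0):
    """Maximum intensity projection of a nested-list volume along `axis`."""
    if axis == 0:
        return [[max(volume[z][y][x] for z in range(len(volume)))
                 for x in range(len(volume[0][0]))]
                for y in range(len(volume[0]))]
    raise NotImplementedError("axis 0 only in this sketch")

volume = [
    [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]],
    [[11, 10, 9, 8], [7, 6, 5, 4], [3, 2, 1, 0]],
]
projection = mip(volume)
# Each output pixel is the elementwise max of the two slices.
```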

  8. Hypnosis control based on the minimum concentration of anesthetic drug for maintaining appropriate hypnosis.

    PubMed

    Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro

    2013-01-01

    This paper proposes a novel hypnosis control method using the Auditory Evoked Potential Index (aepEX) as a hypnosis index. In order to avoid the side effects of anesthetic drugs, it is desirable to reduce the amount of anesthetic drug administered during surgery, and many studies of hypnosis control systems have been conducted for this purpose. Most of them use the Bispectral Index (BIS), another hypnosis index, but BIS suffers from dependence on the particular anesthetic drug and from nonsmooth changes near certain values. aepEX, on the other hand, distinguishes clearly between patient consciousness and unconsciousness and is independent of the anesthetic drug. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis, and adjusting the infusion rate of the anesthetic drug propofol using model predictive control. The minimum effect-site concentration is estimated utilizing the properties of aepEX pharmacodynamics, and the infusion rate of propofol is adjusted so that the effect-site concentration of propofol is kept near, and always above, this minimum. Simulation results show that the minimum concentration can be estimated appropriately and that the proposed control method can maintain hypnosis adequately while reducing the total infusion amount of propofol.

  9. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) and Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
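A toy instance of the prediction-correction idea, tracking the minimizer of f(x; t) = 0.5 (x - a(t))^2 with a finite-difference prediction in the spirit of the approximate (AGT-style) variant; the step size, target trajectory, and horizon are our choices, not the paper's setup:

```python
# Prediction-correction tracking on a scalar time-varying quadratic.
# The optimal trajectory is x*(t) = a(t); we compare prediction+correction
# against correction-only gradient tracking.
import math

h = 0.1                        # sampling interval (our choice)
a = lambda t: math.sin(t)      # drifting target (our choice)

def track(steps=200, predict=True, gamma=0.5):
    """Return the worst tracking error |x_k - a(t_k)| over the horizon."""
    t = 0.0
    x = a(0.0)
    prev_a = a(-h)             # one past sample for the drift estimate
    max_err = 0.0
    for _ in range(steps):
        if predict:
            # prediction: extrapolate the optimizer's drift from past samples
            x += a(t) - prev_a
        prev_a, t = a(t), t + h
        x -= gamma * (x - a(t))        # correction: one gradient step at t_{k+1}
        max_err = max(max_err, abs(x - a(t)))
    return max_err

err_pc = track(predict=True)    # prediction + correction
err_c = track(predict=False)    # correction only
```

On this example the prediction step shrinks the tracking error by roughly an order of magnitude, consistent with the O(h^2)-versus-O(h) comparison in the abstract.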

  10. Implementation of time-efficient adaptive sampling function design for improved undersampled MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Choi, Jinhyeok; Kim, Hyeonjin

    2016-12-01

    To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions is then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled-PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of undersampled MRI.
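    The design pipeline described above can be sketched in 1-D as follows. The blend weight, polynomial variable-density model, and the energy criterion (probability mass captured by a candidate mask) are simplifying assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_pdf(e_map, decay=2.0, weight=0.5):
    """Blend an image-energy map (E-map) with a modeled variable-density PDF."""
    n = len(e_map)
    k = np.abs(np.arange(n) - n // 2)          # distance from k-space centre
    modeled = (1.0 - k / k.max()) ** decay     # polynomial variable density
    pdf = weight * e_map / e_map.sum() + (1 - weight) * modeled / modeled.sum()
    return pdf / pdf.sum()

def best_sampling_mask(pdf, ratio=0.3, candidates=20):
    """Draw candidate sampling masks from the PDF; keep the highest-energy one."""
    n, m = len(pdf), int(ratio * len(pdf))
    best, best_energy = None, -1.0
    for _ in range(candidates):
        idx = rng.choice(n, size=m, replace=False, p=pdf)
        energy = pdf[idx].sum()                # proxy: PDF mass captured
        if energy > best_energy:
            best, best_energy = idx, energy
    mask = np.zeros(n, bool)
    mask[best] = True
    return mask

e_map = np.exp(-np.abs(np.arange(64) - 32) / 8.0)   # toy low-pass energy map
mask = best_sampling_mask(adaptive_pdf(e_map))
```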

  11. A signal-based fault detection and classification method for heavy haul wagons

    NASA Astrophysics Data System (ADS)

    Li, Chunsheng; Luo, Shihui; Cole, Colin; Spiryagin, Maksym; Sun, Yanquan

    2017-12-01

    This paper proposes a signal-based fault detection and isolation (FDI) system for heavy haul wagons, considering the special requirements of low cost and robustness. The sensor network of the proposed system consists of just two accelerometers, mounted on the front left and rear right of the carbody. Seven fault indicators (FIs) are proposed based on cross-correlation analyses of the sensor-collected acceleration signals. This paper focuses on bolster spring fault conditions, including two severity levels (small and moderate faults) and two locations (faults in the left and right bolster springs of the first bogie). A fully detailed dynamic model of a typical 40 t axle-load heavy haul wagon is developed to evaluate the deterioration of dynamic behaviour under the proposed fault conditions and to demonstrate the detectability of the proposed FDI method. Even though the fault conditions considered in this paper did not deteriorate the wagon dynamic behaviour dramatically, the proposed FIs show great sensitivity to the bolster spring faults. The most effective and efficient FIs are chosen for fault detection and classification. Analysis results indicate that it is possible to detect changes in bolster stiffness of ±25% and identify the fault location.
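    A minimal sketch of one cross-correlation-based fault indicator on synthetic carbody accelerations. The signal model (a shared bounce mode, plus a hypothetical amplitude drop and phase lag for the faulty corner) is an illustrative assumption, not the paper's wagon model:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)
bounce = np.sin(2 * np.pi * 1.5 * t)          # shared carbody bounce mode

def fault_indicator(front_left, rear_right):
    """One FI in the spirit of the paper: zero-lag cross-correlation
    (Pearson coefficient) between the two carbody acceleration signals."""
    return float(np.corrcoef(front_left, rear_right)[0, 1])

healthy = fault_indicator(bounce + 0.1 * rng.standard_normal(t.size),
                          bounce + 0.1 * rng.standard_normal(t.size))
# A degraded bolster spring alters one corner's response (hypothetical
# amplitude drop and phase lag), decorrelating the two measurements.
faulty = fault_indicator(bounce + 0.1 * rng.standard_normal(t.size),
                         0.6 * np.sin(2 * np.pi * 1.5 * t + 0.8)
                         + 0.1 * rng.standard_normal(t.size))
```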

  12. A visual model for object detection based on active contours and level-set method.

    PubMed

    Satoh, Shunji

    2006-09-01

    A visual model for object detection is proposed. To make its detection ability comparable with existing technical methods for object detection, an evolution equation for the model's neurons is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from this evolution equation. A known drawback of active contours, sensitivity to initial values, is alleviated by introducing and formulating convexity, a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and is tolerant of noise. A visual attention model is then incorporated into the proposed model. Further simulations show that the visual properties of the model are consistent with the results of psychological experiments on the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, a characteristic observed in human visual perception.

  13. Exact analytical approach for six-degree-of-freedom measurement using image-orientation-change method.

    PubMed

    Tsai, Chung-Yu

    2012-04-01

    An exact analytical approach is proposed for measuring the six-degree-of-freedom (6-DOF) motion of an object using the image-orientation-change (IOC) method. The proposed measurement system comprises two reflector systems, each consisting of two reflectors and one position sensing detector (PSD). The IOCs of the object in the two reflector systems are described using merit functions determined from the respective PSD readings before and after the motion occurs. The three rotation variables are then determined analytically from the eigenvectors of the corresponding merit functions. Once the three rotation variables are known, the translation equations reduce to linear form, so the solution for the three translation variables can also be determined analytically. As a result, the motion transformation matrix describing the 6-DOF motion of the object is fully determined. The validity of the proposed approach is demonstrated by means of an illustrative example.

  14. Signal transmission in a human body medium-based body sensor network using a Mach-Zehnder electro-optical sensor.

    PubMed

    Song, Yong; Hao, Qun; Zhang, Kai; Wang, Jingwen; Jin, Xuefeng; Sun, He

    2012-11-30

    Signal transmission technology based on the human body medium offers significant advantages in Body Sensor Networks (BSNs) used in healthcare and other related fields. In previous works we proposed a novel signal transmission method based on the human body medium using a Mach-Zehnder electro-optical (EO) sensor. In this paper, we present a signal transmission system based on the proposed method, which consists of a transmitter, a Mach-Zehnder EO sensor, and a corresponding receiving circuit. To verify the frequency response properties and determine suitable parameters for the developed system, in vivo measurements were carried out under different carrier frequencies, baseband frequencies, and signal transmission paths. The results indicate that the proposed system will help achieve reliable, high-speed signal transmission in BSNs based on the human body medium.

  15. Signal Transmission in a Human Body Medium-Based Body Sensor Network Using a Mach-Zehnder Electro-Optical Sensor

    PubMed Central

    Song, Yong; Hao, Qun; Zhang, Kai; Wang, Jingwen; Jin, Xuefeng; Sun, He

    2012-01-01

    Signal transmission technology based on the human body medium offers significant advantages in Body Sensor Networks (BSNs) used in healthcare and other related fields. In previous works we proposed a novel signal transmission method based on the human body medium using a Mach-Zehnder electro-optical (EO) sensor. In this paper, we present a signal transmission system based on the proposed method, which consists of a transmitter, a Mach-Zehnder EO sensor, and a corresponding receiving circuit. To verify the frequency response properties and determine suitable parameters for the developed system, in vivo measurements were carried out under different carrier frequencies, baseband frequencies, and signal transmission paths. The results indicate that the proposed system will help achieve reliable, high-speed signal transmission in BSNs based on the human body medium. PMID:23443393

  16. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems. PMID:28208622
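    The path-smoothing step can be illustrated in 1-D: smoothing a jittery camera path y by solving min_x 0.5*||x - y||^2 + lam*TV(x) with a projected dual gradient method. This is a simplified, hypothetical stand-in for the paper's l1-optimized camera-path objective:

```python
import numpy as np

def D(x):
    return np.diff(x)                                   # forward differences

def Dt(p):
    return np.insert(p, 0, 0.0) - np.append(p, 0.0)     # adjoint of D

def tv_smooth(y, lam, iters=2000, step=0.25):
    """Projected dual gradient method for the 1-D TV proximal problem
    min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|."""
    p = np.zeros(len(y) - 1)                            # dual variable
    for _ in range(iters):
        x = y - Dt(p)                                   # primal from dual
        p = np.clip(p + step * D(x), -lam, lam)         # ascend, project
    return y - Dt(p)

rng = np.random.default_rng(2)
path = np.repeat([0.0, 5.0, 2.0], 50)                   # intended camera moves
jittery = path + 0.4 * rng.standard_normal(path.size)   # hand-shake jitter
smooth = tv_smooth(jittery, lam=2.0)

tv_before = float(np.abs(np.diff(jittery)).sum())
tv_after = float(np.abs(np.diff(smooth)).sum())
```

    The TV penalty favors piecewise-constant paths, so jitter is removed while intentional camera moves survive as sharp transitions; the adjoint structure also preserves the path mean exactly.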

  17. A fusion algorithm for infrared and visible based on guided filtering and phase congruency in NSST domain

    NASA Astrophysics Data System (ADS)

    Liu, Zhanwen; Feng, Yan; Chen, Hang; Jiao, Licheng

    2017-10-01

    A novel and effective image fusion method is proposed for creating a highly informative and smooth fused image by merging visible and infrared images. First, a two-scale non-subsampled shearlet transform (NSST) is employed to decompose the visible and infrared images into detail layers and one base layer. Then, phase congruency is adopted to extract saliency maps from the detail layers, and guided filtering is applied to compute the filtering output of the base layer and saliency maps. Next, a novel weighted-average technique is used to make full use of scene consistency for fusion and to obtain the coefficient map. Finally, the fused image is obtained by taking the inverse NSST of the fused coefficient map. Experiments show that the proposed approach achieves better performance than other methods in terms of both subjective visual effect and objective assessment.
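    A minimal 1-D version of the guided filter (He et al.'s algorithm) of the kind used for the base-layer/saliency-map filtering step; the box radius and regularization eps are illustrative choices:

```python
import numpy as np

def box(x, r):
    """Mean filter of radius r (1-D), via cumulative sums."""
    n = len(x)
    c = np.cumsum(np.concatenate(([0.0], x)))
    lo = np.maximum(np.arange(n) - r, 0)
    hi = np.minimum(np.arange(n) + r + 1, n)
    return (c[hi] - c[lo]) / (hi - lo)

def guided_filter(I, p, r=4, eps=1e-4):
    """Edge-preserving smoothing of p under guidance image I: locally fit
    q = a*I + b, with eps regularizing the linear coefficient a."""
    mI, mp = box(I, r), box(p, r)
    cov = box(I * p, r) - mI * mp
    var = box(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

rng = np.random.default_rng(3)
guide = np.ones(256)                        # flat guide: pure smoothing
noisy = 1.0 + 0.3 * rng.standard_normal(256)
smoothed = guided_filter(guide, noisy, r=8, eps=1e-4)
```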

  18. A Policy Representation Using Weighted Multiple Normal Distribution

    NASA Astrophysics Data System (ADS)

    Kimura, Hajime; Aramaki, Takeshi; Kobayashi, Shigenobu

    In this paper, we tackle a reinforcement learning problem for a 5-linked ring robot in real time, so that the real robot can withstand the required trial and error. On this robot, incomplete perception problems arise from noisy sensors and cheap position-control motor systems. This incomplete perception also causes the optimum actions to vary as learning progresses. To cope with this problem, we adopt an actor-critic method and propose a new hierarchical policy representation scheme that consists of discrete action selection on the top level and continuous action selection on the low level of the hierarchy. The proposed hierarchical scheme accelerates learning in continuous action spaces, and it can pursue the optimum actions that vary with the progress of learning in our robotics problem. This paper compares and discusses several learning algorithms through simulations and demonstrates the proposed method in an application on the real robot.

  19. Enriching plausible new hypothesis generation in PubMed.

    PubMed

    Baek, Seung Han; Lee, Dahee; Kim, Minjoo; Lee, Jong Ho; Song, Min

    2017-01-01

    Most earlier studies in the field of literature-based discovery have adopted Swanson's ABC model, which links pieces of knowledge entailed in disjoint literatures. However, the issue of their practicability remains unsolved, since most did not deal with the context surrounding the discovered associations and were usually not accompanied by clinical confirmation. In this study, we propose a method that expands and elaborates an existing hypothesis using advanced text mining techniques for capturing context. We extend the ABC model to allow for multiple B terms with various biological types. Using the proposed method, we were able to concretize a specific, metabolite-related hypothesis with abundant contextual information. Starting from the relationship between lactosylceramide and arterial stiffness, the hypothesis was extended to suggest a potential pathway consisting of lactosylceramide, nitric oxide, malondialdehyde, and arterial stiffness. Evaluation by domain experts showed that it is clinically valid. The proposed method is designed to provide plausible candidates for the concretized hypothesis, based on extracted heterogeneous entities and detailed relation information, along with a reliable ranking criterion. Statistical tests conducted collaboratively with biomedical experts establish the validity and practical usefulness of the method, unlike previous studies. Applied to other cases, the proposed method would help biologists support existing hypotheses and readily anticipate the logical process within them.
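    The core ABC linking step can be sketched with plain set intersection; the term sets below are purely hypothetical stand-ins for mined co-occurrences:

```python
def abc_candidates(a_links, c_links):
    """Swanson-style ABC linking: B terms co-occurring with the A concept in
    one literature and with the C concept in a disjoint literature are
    candidate bridges; the extension above allows several such B terms."""
    return sorted(a_links & c_links)

# Hypothetical co-occurrence sets mined from two disjoint literatures.
a_links = {"nitric oxide", "malondialdehyde", "calcium"}    # linked to A
c_links = {"nitric oxide", "malondialdehyde", "collagen"}   # linked to C
bridges = abc_candidates(a_links, c_links)
```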

  20. Automatic QRS complex detection using two-level convolutional neural network.

    PubMed

    Xiang, Yande; Lin, Zhitao; Meng, Jianyi

    2018-01-29

    The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted features and parameters, which may introduce significant computational complexity, especially in transform domains. In addition, fixed features and parameters are not suitable for detecting the various kinds of QRS complexes that arise under different circumstances. In this study, an accurate method for QRS complex detection based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features of different granularity. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique, consisting only of a difference operation in the temporal domain, is adopted. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. In addition, performance is evaluated under different signal-to-noise ratio (SNR) values. In summary, an automatic QRS detection method using a two-level 1-D CNN and a simple signal preprocessing technique is proposed. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
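    The difference-based preprocessing is simple to reproduce. In the sketch below, a threshold on the differenced signal serves as a stand-in detector for the paper's two-level CNN + MLP, on a toy ECG with QRS-like upstrokes at known samples:

```python
import numpy as np

def preprocess(ecg):
    """The paper's only preprocessing step: a temporal difference operation."""
    return np.diff(ecg, prepend=ecg[:1])

# Toy ECG: slow baseline wander plus sharp QRS-like upstrokes at known samples.
true_qrs = [100, 350, 600, 850]
t = np.arange(1000)
ecg = 0.2 * np.sin(2 * np.pi * t / 400.0)
for k in true_qrs:
    ecg[k] += 1.0

# Stand-in detector: threshold the differenced signal. Differencing suppresses
# the slow baseline wander while leaving the sharp upstrokes prominent.
detected = np.flatnonzero(preprocess(ecg) > 0.5)
```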

  1. Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.

    PubMed

    Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming

    2016-09-01

    People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges, but the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance in this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting it to a 2.5-D face for localizing facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed: facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods achieve satisfactory results. Possible real-world applications using our algorithms are also discussed.

  2. Dictionary-based fiber orientation estimation with improved spatial consistency.

    PubMed

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs into the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs can be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that FORNI+ produces FOs with better quality compared with competing methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Combining multi-atlas segmentation with brain surface estimation

    NASA Astrophysics Data System (ADS)

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M.; Pham, Dzung L.; Prince, Jerry L.; Landman, Bennett A.

    2016-03-01

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects, are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  4. Combining Multi-atlas Segmentation with Brain Surface Estimation.

    PubMed

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A

    2016-02-27

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  5. Critical appraisal of fundamental items in approved clinical trial research proposals in Mashhad University of Medical Sciences

    PubMed Central

    Shakeri, Mohammad-Taghi; Taghipour, Ali; Sadeghi, Masoumeh; Nezami, Hossein; Amirabadizadeh, Ali-Reza; Bonakchi, Hossein

    2017-01-01

    Background: Writing, designing, and conducting a clinical trial research proposal play an important role in achieving valid and reliable findings. Thus, this study aimed at critically appraising fundamental information in approved clinical trial research proposals at Mashhad University of Medical Sciences (MUMS) from 2008 to 2014. Methods: This cross-sectional study was conducted on all 935 approved clinical trial research proposals at MUMS from 2008 to 2014. The research tool was a valid, reliable, comprehensive, simple, and usable checklist of 11 main items, developed in sessions with biostatisticians and methodologists. The agreement rate between the reviewers of the proposals, who were responsible for data collection, was assessed during 3 sessions, and the Kappa statistic calculated at the last session was 97%. Results: More than 60% of the research proposals had a methodologist consultant; moreover, the type of study or study design was specified in almost all of them (98%). Appropriateness of study aims to hypotheses was not observed in a significant number of research proposals (585 proposals, 62.6%). The required sample size for 66.8% of the approved proposals was based on a sample size formula; however, in 25% of the proposals, the sample size formula was not in accordance with the study design. The data collection tool was not selected appropriately in 55.2% of the approved research proposals. The type and method of randomization were unknown in 21% of the proposals, and dealing with missing data had not been described in most of them (98%). Inclusion and exclusion criteria were fully and adequately explained in 92% of the proposals. Moreover, 44% and 31% of the research proposals ranked moderate and weak, respectively, with respect to the correctness of the statistical analysis methods. Conclusion: The findings of the present study revealed that a large portion of the approved proposals were highly biased or ambiguous with respect to randomization, blinding, dealing with missing data, data collection tools, sampling methods, and statistical analysis. Thus, it is essential to consult and collaborate with a methodologist in all parts of a proposal to control the possible and specific biases in clinical trials. PMID:29445703

  6. Critical appraisal of fundamental items in approved clinical trial research proposals in Mashhad University of Medical Sciences.

    PubMed

    Shakeri, Mohammad-Taghi; Taghipour, Ali; Sadeghi, Masoumeh; Nezami, Hossein; Amirabadizadeh, Ali-Reza; Bonakchi, Hossein

    2017-01-01

    Background: Writing, designing, and conducting a clinical trial research proposal play an important role in achieving valid and reliable findings. Thus, this study aimed at critically appraising fundamental information in approved clinical trial research proposals at Mashhad University of Medical Sciences (MUMS) from 2008 to 2014. Methods: This cross-sectional study was conducted on all 935 approved clinical trial research proposals at MUMS from 2008 to 2014. The research tool was a valid, reliable, comprehensive, simple, and usable checklist of 11 main items, developed in sessions with biostatisticians and methodologists. The agreement rate between the reviewers of the proposals, who were responsible for data collection, was assessed during 3 sessions, and the Kappa statistic calculated at the last session was 97%. Results: More than 60% of the research proposals had a methodologist consultant; moreover, the type of study or study design was specified in almost all of them (98%). Appropriateness of study aims to hypotheses was not observed in a significant number of research proposals (585 proposals, 62.6%). The required sample size for 66.8% of the approved proposals was based on a sample size formula; however, in 25% of the proposals, the sample size formula was not in accordance with the study design. The data collection tool was not selected appropriately in 55.2% of the approved research proposals. The type and method of randomization were unknown in 21% of the proposals, and dealing with missing data had not been described in most of them (98%). Inclusion and exclusion criteria were fully and adequately explained in 92% of the proposals. Moreover, 44% and 31% of the research proposals ranked moderate and weak, respectively, with respect to the correctness of the statistical analysis methods. Conclusion: The findings of the present study revealed that a large portion of the approved proposals were highly biased or ambiguous with respect to randomization, blinding, dealing with missing data, data collection tools, sampling methods, and statistical analysis. Thus, it is essential to consult and collaborate with a methodologist in all parts of a proposal to control the possible and specific biases in clinical trials.

  7. A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.

    PubMed

    Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar

    2017-03-01

    The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule computer-aided detection (CAD). We describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1-norm regularizer for heterogeneous feature fusion and selection at the feature-subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for handling the ℓ2,1 norm of the kernel weights and uses an accelerated scheme based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1-norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of the geometric mean (G-mean) and the area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature-subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1-norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
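    A FISTA-style proximal-gradient solver for an ℓ2,1-regularized problem repeatedly applies the proximal operator of the ℓ2,1 norm, which is group (row-wise) soft-thresholding. A minimal sketch, with rows standing in for kernel-weight groups:

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * sum over rows g of ||W[g]||_2:
    row-wise (group) soft-thresholding. This is the core update inside
    proximal-gradient/FISTA solvers for l2,1-regularized objectives."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

# Row 0 has norm 5 -> shrunk by factor 0.8; row 1 has norm < tau -> zeroed,
# i.e. that feature subset (kernel) is pruned entirely.
W = np.array([[3.0, 4.0], [0.1, 0.1]])
P = prox_l21(W, tau=1.0)
```

    Zeroing whole rows is precisely what makes the ℓ2,1 penalty prune irrelevant feature subsets while keeping the informative ones.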

  8. A Model for Effective Teaching and Learning in Research Methods.

    ERIC Educational Resources Information Center

    Poindexter, Paula M.

    1998-01-01

    Proposes a teaching model for making research relevant. Presents a case study of the model as used in advertising and public relations research classes. Notes that the model consists of a knowledge base, team process, a realistic goal-oriented experience, self-management, expert consultation, and evaluation and synthesis. Discusses resulting…

  9. Flight-test data on the static fore-and-aft stability of various German airplanes

    NASA Technical Reports Server (NTRS)

    Hubner, Walter

    1933-01-01

    The static longitudinal stability of an airplane with locked elevator is usually determined by analysis and model tests. The present report proposes to supply the results of such measurements. The method consisted of recording the dynamic pressure versus elevator displacement at different center-of-gravity positions in unaccelerated flight.

  10. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

    ERIC Educational Resources Information Center

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-01-01

    Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate for analyzing such data, because the data usually consist of nonnegative, skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…

  11. Comparing Three Patterns of Strengths and Weaknesses Models for the Identification of Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Miller, Daniel C.; Maricle, Denise E.; Jones, Alicia M.

    2016-01-01

    Processing Strengths and Weaknesses (PSW) models have been proposed as a method for identifying specific learning disabilities. Three PSW models were examined for their ability to predict expert identified specific learning disabilities cases. The Dual Discrepancy/Consistency Model (DD/C; Flanagan, Ortiz, & Alfonso, 2013) as operationalized by…

  12. Morphological rational multi-scale algorithm for color contrast enhancement

    NASA Astrophysics Data System (ADS)

    Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.

    2010-01-01

    The main goal of contrast enhancement is to improve the visual appearance of an image, but it is also used to provide a transformed image for segmentation. In mathematical morphology, several works have been derived from the contrast-enhancement framework proposed by Meyer and Serra. However, when working with images with a wide range of scene brightness, for example when strong highlights and deep shadows appear in the same image, the existing morphological methods do not achieve the enhancement. In this work, a rational multi-scale method is proposed that uses a class of morphological connected filters called filters by reconstruction. Granulometry is used to find the most informative scales for the filters and to avoid the use of other, less significant scales. The CIE u'v'Y' space was used to present our results, since it takes Weber's law into account and, by avoiding the creation of new colors, permits modifying the luminance values without affecting the hue. The luminance component (Y') is enhanced separately using the proposed method and is then used to enhance the chromatic components (u', v') by means of the center-of-gravity law of color mixing.
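    Filters by reconstruction, the connected filters this method builds on, can be illustrated in one dimension. The sketch below is a generic grayscale reconstruction by dilation, not the paper's multi-scale algorithm; the 3-sample flat structuring element is an assumption.

```python
def reconstruct_by_dilation(marker, mask):
    """Grayscale morphological reconstruction by dilation (1-D sketch).

    Iteratively dilates `marker` (with a 3-sample flat structuring
    element) while clipping it under `mask`, until stabilization.
    An opening by reconstruction uses an eroded or opened image as
    the marker, so only marked structures survive.
    """
    cur = list(marker)
    while True:
        dil = [max(cur[max(i - 1, 0):i + 2]) for i in range(len(cur))]
        nxt = [min(d, m) for d, m in zip(dil, mask)]
        if nxt == cur:
            return cur
        cur = nxt
```

    Starting from a marker that tags only one peak of the mask, reconstruction restores that peak exactly while suppressing the unmarked ones, which is the "connected" behavior that preserves contours.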

  13. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and in medical imaging in particular, owing to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation-time problem, and a new method to compute the filter coefficients is also proposed; we focus on the implementation and on enhancing the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared-memory accesses. The proposed method was tested on the BrainWeb database at different noise levels. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
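    For intuition, the non-local means principle can be sketched in one dimension (the paper works on 3-D volumes with a hybrid parallel implementation; the parameters `patch`, `search`, and `h` below are illustrative assumptions):

```python
import math

def nlm_1d(signal, patch=1, search=3, h=0.5):
    """Minimal 1-D non-local means sketch.

    Each sample is replaced by a weighted average of samples in a search
    window; the weights decay with the squared distance between the
    small patches (neighborhoods) around the two samples, so similar
    structures anywhere in the window contribute, not just neighbors.
    """
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d2 = 0.0
            for k in range(-patch, patch + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d2 += (a - b) ** 2
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

    The quadratic cost per voxel in the search and patch sizes is exactly what motivates the GPU/shared-memory optimizations described in the abstract.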

  14. Testing a single regression coefficient in high dimensional linear models

    PubMed Central

    Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2017-01-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
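    A minimal sketch of the CPS idea, under simplifying assumptions that are not the paper's exact procedure (correlation-based screening with a fixed `top_k`, a homoscedastic variance estimate, and a hypothetical function name):

```python
import numpy as np

def cps_ols_test(X, y, target=0, top_k=2):
    """CPS-style test of one coefficient (illustrative sketch).

    Screens the `top_k` predictors most correlated with the target
    covariate, fits OLS of y on the target plus those controls, and
    returns the target's estimate with its z-statistic.  The paper's
    result also covers heteroscedastic errors; a plain variance
    estimate is used here for brevity.
    """
    n, p = X.shape
    xt = X[:, target]
    corr = np.abs([np.corrcoef(xt, X[:, j])[0, 1] for j in range(p)])
    corr[target] = -1.0                       # exclude the target itself
    controls = np.argsort(corr)[::-1][:top_k]
    D = np.column_stack([xt, X[:, controls]])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ beta
    sigma2 = resid @ resid / (n - D.shape[1])
    cov = sigma2 * np.linalg.inv(D.T @ D)
    z = beta[0] / np.sqrt(cov[0, 0])
    return beta[0], z
```

    The point of the screening step is that the low-dimensional OLS fit stays well posed even when the full number of covariates exceeds the sample size.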

  15. Testing a single regression coefficient in high dimensional linear models.

    PubMed

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2016-11-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.

  16. A KPI-based process monitoring and fault detection framework for large-scale processes.

    PubMed

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-05-01

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, and their performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches have not been developed within a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes that considers the static and dynamic relationships between process and KPI variables. For the static case, a least-squares-based approach is developed that provides an explicit link with least-squares regression and gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrumental variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
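    The static case can be sketched generically: fit a least-squares map from process variables to KPIs on fault-free data, then flag samples whose prediction residual is large. This is an illustration of the idea, not the paper's exact design, and the threshold calibration is omitted.

```python
import numpy as np

def fit_kpi_model(X, Y):
    """Least-squares map Theta from process variables X (n x p) to
    KPI variables Y (n x q), so that Y is predicted as X @ Theta."""
    Theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Theta

def kpi_residual(Theta, x, y):
    """Norm of the KPI prediction residual for one sample; a value
    above a threshold calibrated on fault-free data would signal a
    KPI-relevant fault."""
    return float(np.linalg.norm(y - x @ Theta))
```

    On fault-free data generated by the fitted model the residual is near zero, while a disturbance that actually moves the KPI produces a large residual, which is the monitoring signal.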

  17. Structured-Light Based 3D Laser Scanning of Semi-Submerged Structures

    NASA Astrophysics Data System (ADS)

    van der Lucht, J.; Bleier, M.; Leutert, F.; Schilling, K.; Nüchter, A.

    2018-05-01

    In this work we look at the 3D acquisition of semi-submerged structures with a triangulation-based underwater laser scanning system. The motivation is that we want to simultaneously capture data above and below the water to create a consistent model without any gaps. The employed structured-light scanner consists of a machine vision camera and a green line laser. In order to reconstruct precise surface models of the object, it is necessary to model and correct for the refraction of the laser line and camera rays at the water-air boundary. We derive a geometric model for the refraction at the air-water interface and propose a method for correcting the scans. Furthermore, we show how the water surface is estimated directly from the sensor data. The approach is verified using scans captured with an industrial manipulator to achieve reproducible scanner trajectories with different incident angles. We show that the proposed method is effective for refractive correction and that it can be applied directly to the raw sensor data without requiring any external markers or targets.
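    The geometric core of such a correction is Snell's law at the water surface. The sketch below assumes a flat interface and standard refractive indices; the full scanner model additionally needs the estimated surface plane and the ray-plane intersection for each camera ray.

```python
import math

def refract(incident_deg, n_air=1.0, n_water=1.333):
    """Refraction of a ray crossing a flat air-water interface.

    Takes the incident angle (degrees from the surface normal) and
    returns the refracted angle below the surface via Snell's law:
    n_air * sin(theta_i) = n_water * sin(theta_r).
    """
    s = n_air / n_water * math.sin(math.radians(incident_deg))
    return math.degrees(math.asin(s))
```

    For example, a ray hitting the surface at 30° from the normal continues underwater at roughly 22°, which is why uncorrected triangulation systematically distorts the submerged part of a scan.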

  18. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations

    PubMed Central

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J. Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products have been released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on a modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observations were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts are needed to evaluate and improve the proposed algorithm at larger spatial scales, over longer time periods, and over different land cover types. PMID:27472383
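    EOF analysis itself is standard: it is a PCA/SVD of the space-time anomaly matrix. The sketch below shows only this building block (extracting and reconstructing leading modes); the paper's modified-EOF merging weights are not reproduced here.

```python
import numpy as np

def eof_modes(field, k=1):
    """Reconstruct a (time x space) data matrix from its leading EOFs.

    Removes the temporal mean, takes an SVD of the anomalies, and
    rebuilds the field from the first k modes.  Merging schemes built
    on EOF analysis combine such low-rank reconstructions of several
    products instead of the raw fields.
    """
    mean = field.mean(axis=0)
    anom = field - mean
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k] + mean
```

    When the anomaly field is effectively low-rank, a handful of modes capture most of the variance, which is what makes EOF-based integration compact.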

  19. Extension of specification language for soundness and completeness of service workflow

    NASA Astrophysics Data System (ADS)

    Viriyasitavat, Wattana; Xu, Li Da; Bi, Zhuming; Sapsomboon, Assadaporn

    2018-05-01

    A service workflow is an aggregation of distributed services that fulfills specific functionalities. With ever more services available, methodologies for selecting services against given requirements have become a main research subject in multiple disciplines. A few researchers have contributed formal specification languages and methods for model checking; however, existing methods have difficulty tackling the complexity of workflow composition. In this paper, we propose to formalize the specification language so as to reduce the complexity of workflow composition. To this end, we extend a specification language with formal logic, so that effective theorems can be derived for the verification of syntax, semantics, and inference rules in workflow composition. The logic-based approach automates compliance checking effectively. The Service Workflow Specification (SWSpec) has been extended and formulated, and the soundness, completeness, and consistency of SWSpec applications have been verified; note that a logic-based SWSpec is mandatory for the development of model checking. The application of the proposed SWSpec is demonstrated by examples addressing soundness, completeness, and consistency.

  20. Interictal Epileptiform Discharges (IEDs) classification in EEG data of epilepsy patients

    NASA Astrophysics Data System (ADS)

    Puspita, J. W.; Soemarno, G.; Jaya, A. I.; Soewono, E.

    2017-12-01

    Interictal Epileptiform Discharges (IEDs), which consist of spike waves and sharp waves, in the human electroencephalogram (EEG) are characteristic signatures of epilepsy. Spike waves are characterized by a pointed peak with a duration of 20-70 ms, while sharp waves have a duration of 70-200 ms. The purpose of this study was to classify the spike waves and sharp waves in EEG data of epilepsy patients using a backpropagation neural network. The proposed method consists of two main stages: a feature extraction stage and a classification stage. In the feature extraction stage, we use the frequency, the amplitude, and statistical features, such as the mean, standard deviation, and median, of each wave. The frequency values of the IEDs are very sensitive to the selection of the wave baseline: the selected baseline must contain all data of the rising and falling slopes of the IED. With this choice, we obtain a feature that appropriately represents the type of IED. The results show that the proposed method achieves its best classification results, with a recognition rate of 93.75%, for a binary sigmoid activation function and a learning rate of 0.1.
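    The feature extraction stage can be sketched per wave segment. This is a hedged illustration of the features named in the abstract; the sampling rate `fs`, the function name, and the exact frequency definition (one full rise-and-fall cycle over the baseline duration) are assumptions.

```python
import statistics

def ied_features(wave, fs=256.0):
    """Illustrative per-wave feature vector for IED classification.

    `wave` is the list of samples spanning the selected baseline
    (rising and falling slopes included); `fs` is the sampling rate
    in Hz.  Returns amplitude, a duration-based frequency estimate,
    and simple statistics of the samples.
    """
    duration_s = len(wave) / fs
    return {
        "amplitude": max(wave) - min(wave),
        "frequency": 1.0 / duration_s,   # one rise+fall cycle per wave
        "mean": statistics.mean(wave),
        "std": statistics.stdev(wave),
        "median": statistics.median(wave),
    }
```

    Because the frequency feature is the reciprocal of the baseline duration, a baseline that clips either slope inflates the frequency, which matches the abstract's warning about baseline selection.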
