Sample records for proposed method eliminates

  1. Adaptive noise canceling of electrocardiogram artifacts in single channel electroencephalogram.

    PubMed

    Cho, Sung Pil; Song, Mi Hye; Park, Young Cheol; Choi, Ho Seon; Lee, Kyoung Joung

    2007-01-01

    A new method for estimating and eliminating electrocardiogram (ECG) artifacts from single channel scalp electroencephalogram (EEG) is proposed. The proposed method consists of emphasis of the QRS complex in the EEG using a least squares acceleration (LSA) filter, generation of a pulse synchronized with the R-peak, and ECG artifact estimation and elimination using an adaptive filter. The performance of the proposed method was evaluated using simulated and real EEG recordings, and the ECG artifacts were successfully estimated and eliminated in comparison with the conventional multi-channel techniques, independent component analysis (ICA) and the ensemble average (EA) method. From this we conclude that the proposed method is useful for detecting and eliminating ECG artifacts from single channel EEG and is simple to use in ambulatory/portable EEG monitoring systems.
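
    A minimal sketch of the adaptive-cancellation step described above, assuming an LMS weight update and an R-peak synchronized pulse train as the reference input (the record does not specify the exact adaptation rule, and the signal values below are synthetic):

    ```python
    import numpy as np

    def lms_cancel(contaminated_eeg, reference, n_taps=16, mu=0.01):
        """Subtract the component of `contaminated_eeg` that is linearly
        predictable from `reference` (e.g. an R-peak synchronized pulse train)."""
        n = len(contaminated_eeg)
        w = np.zeros(n_taps)                        # adaptive filter weights
        cleaned = np.zeros(n)
        for i in range(n_taps, n):
            x = reference[i - n_taps:i][::-1]       # most recent reference samples first
            artifact_estimate = w @ x
            e = contaminated_eeg[i] - artifact_estimate   # error = cleaned EEG sample
            w += 2 * mu * e * x                     # LMS weight update
            cleaned[i] = e
        return cleaned

    # toy usage: a sinusoidal "EEG" plus a spike train standing in for ECG leakage
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    eeg = 20e-6 * np.sin(2 * np.pi * 10 * t)
    pulses = (np.arange(len(t)) % fs == 0).astype(float)   # one "R peak" per second
    cleaned = lms_cancel(eeg + 5e-6 * pulses, pulses)
    ```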

  2. Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters

    NASA Astrophysics Data System (ADS)

    Vasumathi, B.; Moorthi, S.

    2011-11-01

    In digital signal processing, algorithms are well developed for the estimation of harmonic components. In power electronic applications, objectives such as fast system response are of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm is proposed for harmonic estimation. The proposed method remains effective because it converges to a minimum error and yields a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method to estimate and eliminate voltage harmonics is demonstrated with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.

  3. An Elimination Method of Temperature-Induced Linear Birefringence in a Stray Current Sensor

    PubMed Central

    Xu, Shaoyi; Li, Wei; Xing, Fangfang; Wang, Yuqiao; Wang, Ruilin; Wang, Xianghui

    2017-01-01

    In this work, an elimination method for the temperature-induced linear birefringence (TILB) in a stray current sensor is proposed using a cylindrical spiral fiber (CSF), which produces a large amount of circular birefringence that eliminates the TILB through the geometric rotation effect. First, the differential equations that describe the polarization evolution of the CSF element are derived, and the output error model is built based on the Jones matrix calculus. Then, an accurate search method is proposed to obtain the key parameters of the CSF, including the length of the cylindrical silica rod and the number of the curve spirals. The optimized results are 302 mm and 11, respectively. Moreover, an effective factor is proposed to analyze the elimination of the TILB; it should be greater than 7.42 to meet the output error requirement of not more than 0.5%. Finally, temperature experiments are conducted to verify the feasibility of the elimination method. The results indicate that the output error caused by the TILB can be kept below 0.43% with this elimination method over the range from −20 °C to 40 °C. PMID:28282953

  4. Corner detection and sorting method based on improved Harris algorithm in camera calibration

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang

    2016-11-01

    In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm which combines Harris with the circular boundary theory of corners is proposed in this paper. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
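
    As an illustration of automating the threshold step mentioned above, the sketch below detects Harris corners with OpenCV and keeps responses above a data-driven threshold; the paper's circular-boundary criterion and Forstner refinement are not reproduced, and the input file name is hypothetical:

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("chessboard.png", cv2.IMREAD_GRAYSCALE)  # hypothetical calibration image
    # Harris response map: block size 2, Sobel aperture 3, k = 0.04
    response = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)

    # Data-driven threshold instead of a hand-picked constant (a simple stand-in
    # for the paper's automatic false-corner rejection).
    thresh = response.mean() + 3 * response.std()
    corners = np.argwhere(response > thresh)          # (row, col) candidate corners
    print(f"{len(corners)} candidate corners retained")
    ```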

  5. Design and control of the phase current of a brushless dc motor to eliminate cogging torque

    NASA Astrophysics Data System (ADS)

    Jang, G. H.; Lee, C. J.

    2006-04-01

    This paper presents a design and control method for the phase current to reduce the torque ripple of a brushless dc (BLDC) motor by eliminating cogging torque. The cogging torque is the main source of torque ripple, and consequently of speed error, and it is also an excitation source for the vibration and noise of a motor. This research proposes a modified current waveform composed of main and auxiliary currents. The former is the conventional current that generates the commutating torque. The latter generates a torque with the same magnitude as, and opposite sign to, the corresponding cogging torque at the given position in order to eliminate it. A time-stepping finite element simulation that accounts for the pulse-width-modulation switching scheme has been performed to verify the effectiveness of the proposed method, and it shows that the method reduces torque ripple by 36%. A digital-signal-processor-based controller is also developed to implement the proposed method, and it shows that the method reduces the speed ripple significantly.
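
    The compensation idea, an auxiliary current producing a torque equal and opposite to the cogging torque at each rotor position, can be sketched as follows; the cogging-torque and torque-constant functions here are toy stand-ins, not the paper's FEM-derived profiles:

    ```python
    import numpy as np

    def auxiliary_current(theta, cogging_torque, torque_constant):
        """Auxiliary phase current that produces a torque equal and opposite to the
        cogging torque at rotor position theta (sketch of the compensation idea only)."""
        return -cogging_torque(theta) / torque_constant(theta)

    # toy example: 12th-harmonic cogging torque, constant torque function
    theta = np.linspace(0, 2 * np.pi, 360)
    i_aux = auxiliary_current(
        theta,
        cogging_torque=lambda th: 0.02 * np.sin(12 * th),    # N*m, assumed shape
        torque_constant=lambda th: 0.05 * np.ones_like(th),  # N*m/A, assumed constant
    )
    ```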

  6. Improved atmospheric effect elimination method for the roughness estimation of painted surfaces.

    PubMed

    Zhang, Ying; Xuan, Jiabin; Zhao, Huijie; Song, Ping; Zhang, Yi; Xu, Wujian

    2018-03-01

    We propose a method for eliminating the atmospheric effect in polarimetric imaging remote sensing by using polarimetric imagers to simultaneously detect ground targets and skylight, which does not require calibrated targets. In addition, calculation efficiency is improved by the skylight division method without losing estimation accuracy. Outdoor experiments are performed to obtain the polarimetric bidirectional reflectance distribution functions of painted surfaces and skylight under different weather conditions. Finally, the roughness of the painted surfaces is estimated. We find that the estimation accuracy with the proposed method is 6% in cloudy weather, whereas it is 30.72% without atmospheric effect elimination.

  7. δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi

    In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms on combinatorial optimization problems. This method eliminates similar individuals in objective space to distribute selection fairly among the different regions of the instantaneous Pareto front. We investigate four elimination methods, analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method with that of NSGA-II enhanced by controlled elitism.

  8. A non-iterative twin image elimination method with two in-line digital holograms

    NASA Astrophysics Data System (ADS)

    Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young

    2018-02-01

    We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.

  9. Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Chih; Hsiao, Shen-Fu

    In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method achieves significant area improvement compared with previous designs.

  10. Gaussian Elimination-Based Novel Canonical Correlation Analysis Method for EEG Motion Artifact Removal.

    PubMed

    Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh

    2017-01-01

    Motion during the capture of the electroencephalography (EEG) signal leads to artifacts, which may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) for removing artifacts along with ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT). A new approach is proposed to further analyse and improve the filtering performance and reduce the filter computation time under highly noisy conditions. This new approach to CCA is based on the Gaussian elimination method, which is used for calculating the correlation coefficients via a backslash operation, and is designed for EEG motion artifact removal. Gaussian elimination is used to solve the linear equations when calculating eigenvalues, which reduces the computational cost of the CCA method. This novel proposed method is tested against currently available artifact removal techniques using EEMD-CCA and the wavelet transform. The performance is tested on synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as the del signal to noise ratio (DSNR), lambda ( λ ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
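
    A hedged sketch of the backslash idea: standard covariance-based CCA in which the explicit matrix inverses are replaced by linear solves (Gaussian elimination); the EEMD/wavelet stages of the record are not reproduced:

    ```python
    import numpy as np

    def canonical_correlations(X, Y, reg=1e-8):
        """Canonical correlations between row-wise observations X and Y, using
        linear solves (Gaussian elimination) instead of explicit inverses."""
        Xc, Yc = X - X.mean(0), Y - Y.mean(0)
        n = len(Xc)
        Cxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
        Cyy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
        Cxy = Xc.T @ Yc / (n - 1)
        # "Backslash"-style solves: Cxx * A = Cxy and Cyy * B = Cyx
        A = np.linalg.solve(Cxx, Cxy)
        B = np.linalg.solve(Cyy, Cxy.T)
        eigvals = np.linalg.eigvals(A @ B)        # eigenvalues = squared canonical correlations
        return np.sqrt(np.clip(np.sort(eigvals.real)[::-1], 0, 1))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 4))
    Y = X @ rng.standard_normal((4, 3)) + 0.5 * rng.standard_normal((500, 3))
    print(canonical_correlations(X, Y))
    ```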

  11. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm.

    PubMed

    Chen, Yung-Yue

    2018-05-08

    Mobile devices are often used in our daily lives for speech and communication. The speech quality of mobile devices is always degraded by the environmental noise surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noise on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H ₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have shown that the proposed method is immune to random background noise, and noiseless speech can be obtained after executing this denoising process.

  12. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods greatly affects machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd degree NURBS curve and can interpolate 3rd degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion while considering multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
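
    A hedged sketch of the quartic-in-increment idea, assuming the quartic arises from a second-order Taylor expansion of the curve (|C'Δu + ½C''Δu²| set equal to the desired chord length); the paper's actual coefficient derivation for NURBS curves may differ:

    ```python
    import numpy as np

    def parameter_increment(c1, c2, ds):
        """Solve |c1*du + 0.5*c2*du**2| = ds for du, where c1 and c2 are the first
        and second derivative vectors of the curve at the current parameter value.
        Squaring both sides gives a quartic in du; keep the smallest positive real root."""
        a = 0.25 * np.dot(c2, c2)
        b = np.dot(c1, c2)
        c = np.dot(c1, c1)
        coeffs = [a, b, c, 0.0, -ds**2]          # a*du^4 + b*du^3 + c*du^2 - ds^2 = 0
        roots = np.roots(coeffs)
        real_pos = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
        # fall back to the first-order estimate if no positive real root is found
        return min(real_pos) if real_pos else ds / np.linalg.norm(c1)

    du = parameter_increment(np.array([1.0, 0.2, 0.0]), np.array([0.0, -0.5, 0.0]), ds=0.01)
    print(du)
    ```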

  13. Elimination of RF inhomogeneity effects in segmentation.

    PubMed

    Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay

    2007-01-01

    Various methods have been proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system. One of the most commonly encountered problems is intensity variation across an image. Different methods are used to overcome this problem. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.

  14. Structural health monitoring using DOG multi-scale space: an approach for analyzing damage characteristics

    NASA Astrophysics Data System (ADS)

    Guo, Tian; Xu, Zili

    2018-03-01

    Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified by numerical and experimental methods applied to different structural types. The results demonstrate that there are two advantages to the proposed method. First, damage features are extracted by the difference of the multi-scale representation; this step is taken such that the interference of noise amplification can be avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.

  15. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.

  16. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  17. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features.

    PubMed

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Compared with existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or numbers of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.

  18. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features

    NASA Astrophysics Data System (ADS)

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or numbers of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.

  19. Removal of power line interference of space bearing vibration signal based on the morphological filter and blind source separation

    NASA Astrophysics Data System (ADS)

    Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping

    2017-06-01

    It is difficult to extract feature information from the vibration signal of a space bearing because of several kinds of noise, for example running trend information, high-frequency noise and, especially, a large amount of power line interference (50 Hz) and its octave components from the running space-simulation equipment on the ground. This article proposes a combined method to eliminate them. Firstly, EMD is used to remove the running trend information of the signal, so the running trend that affects the signal processing accuracy is eliminated. Then a morphological filter is used to eliminate the high-frequency noise. Finally, the components and characteristics of the power line interference are studied and, based on the characteristics of the interference, a revised blind source separation model is used to remove the power line interference. Analysis of simulations and a practical application suggests that the proposed method can effectively eliminate these noise components.

  20. A consensus reaching model for 2-tuple linguistic multiple attribute group decision making with incomplete weight information

    NASA Astrophysics Data System (ADS)

    Zhang, Wancheng; Xu, Yejun; Wang, Huimin

    2016-01-01

    The aim of this paper is to put forward a consensus reaching method for multi-attribute group decision-making (MAGDM) problems with linguistic information, in which the weight information of experts and attributes is unknown. First, some basic concepts and operational laws of the 2-tuple linguistic label are introduced. Then, a grey relational analysis method and a maximising deviation method are proposed to calculate the incomplete weight information of experts and attributes, respectively. To eliminate conflict in the group, a weight-updating model is employed to derive the weights of experts based on their contribution to the consensus reaching process. After conflict elimination, the final group preference can be obtained, which gives the ranking of the alternatives. The model can effectively avoid the information distortion that occurs regularly in linguistic information processing. Finally, an illustrative example is given to illustrate the application of the proposed method, and a comparative analysis with existing methods is offered to show its advantages.

  1. Development of Generation System of Simplified Digital Maps

    NASA Astrophysics Data System (ADS)

    Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng

    In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed by multiple layers of maps of different scales; the map data most suitable for the specific situation are used. Currently, the production of map data of different scales is done by hand due to constraints related to processing time and accuracy. We conducted research concerning technologies for automatic generation of simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform data related to streets, rivers, etc. containing widths into line data, (2) a method to eliminate the component points of the data, and (3) a method to eliminate data that lie below a certain threshold. In addition, in order to evaluate the proposed method, a user survey was conducted; in this survey we compared maps generated using the proposed method with the commercially available maps. From the viewpoint of the amount of data reduction and processing time, and on the basis of the results of the survey, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.

  2. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu

    Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors’ ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.

  3. Selective harmonic elimination strategy in eleven level inverter for PV system with unbalanced DC sources

    NASA Astrophysics Data System (ADS)

    Ghoudelbourk, Sihem.; Dib, D.; Meghni, B.; Zouli, M.

    2017-02-01

    The paper deals with a multilevel converter control strategy for photovoltaic systems integrated into distribution grids. The objective of the proposed work is to design multilevel inverters for solar energy applications so as to reduce the total harmonic distortion (THD) and improve power quality. The multilevel inverter power structure plays a vital role in every aspect of the power system; it is easier to produce a high-power, high-voltage inverter with a multilevel structure. Multilevel inverter topologies have several advantages, such as high output voltage, lower THD and reduced voltage ratings of the power semiconductor switching devices. The proposed control strategy implements selective harmonic elimination (SHE) modulation for eleven levels. SHE is an important and efficient strategy for eliminating selected harmonics by judicious selection of the firing angles of the inverter. The harmonic elimination technique removes the need for expensive low pass filters in the system. Previous research assumed constant and equal DC sources with invariant behavior; this work extends earlier research to include varying DC sources, which are typical of lead-acid batteries used in PV systems. This study also investigates methods to minimize the total harmonic distortion of the synthesized multilevel waveform and to help balance the battery voltages. The harmonic elimination method was used to eliminate selected lower dominant harmonics resulting from the inverter switching action.
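
    A minimal sketch of the SHE angle computation for an eleven-level (five equal DC sources) waveform, targeting the 5th, 7th, 11th and 13th harmonics; the unequal-source case in the record weights each cosine term by its source voltage, which is not shown here:

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    def she_equations(theta, m, harmonics=(5, 7, 11, 13)):
        """Selective-harmonic-elimination equations for 5 equal DC sources:
        the fundamental is set to m (per unit) and the listed harmonics to zero."""
        eqs = [np.sum(np.cos(theta)) - 5 * m]
        eqs += [np.sum(np.cos(h * theta)) for h in harmonics]
        return eqs

    theta0 = np.deg2rad([10, 25, 40, 55, 70])            # initial guess, ascending angles
    theta = fsolve(she_equations, theta0, args=(0.8,))   # modulation index 0.8 (assumed)
    print(np.rad2deg(np.sort(theta)))                    # firing angles in degrees
    ```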

  4. Performance of Four-Leg VSC based DSTATCOM using Single Phase P-Q Theory

    NASA Astrophysics Data System (ADS)

    Jampana, Bangarraju; Veramalla, Rajagopal; Askani, Jayalaxmi

    2017-02-01

    This paper presents a single-phase P-Q theory for a four-leg VSC based distributed static compensator (DSTATCOM) in the distribution system. The proposed DSTATCOM maintains unity power factor at the source, provides zero voltage regulation, eliminates current harmonics, and performs load balancing and neutral current compensation. The advantage of using a four-leg VSC based DSTATCOM is that it eliminates the isolated/non-isolated transformer connection at the point of common coupling (PCC) for neutral current compensation. Eliminating the transformer connection at the PCC with the proposed topology reduces the cost of the DSTATCOM. The single-phase P-Q theory control algorithm is used to extract the fundamental components of the active and reactive currents for generation of the reference source currents, based on an indirect current control method. The proposed DSTATCOM is modelled and the results are validated with various consumer loads under unity power factor and zero voltage regulation modes in the MATLAB R2013a environment using the SimPowerSystems toolbox.

  5. Substructure analysis techniques and automation. [to eliminate logistical data handling and generation chores

    NASA Technical Reports Server (NTRS)

    Hennrich, C. W.; Konrath, E. J., Jr.

    1973-01-01

    A basic automated substructure analysis capability for NASTRAN is presented which eliminates most of the logistical data handling and generation chores that are currently associated with the method. Rigid formats are proposed which will accomplish this using three new modules, all of which can be added to level 16 with a relatively small effort.

  6. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data is one of the most widely used data sources in the field of remote sensing. Key steps of point cloud data pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume large amounts of memory and time. This paper employs a new method that uses a Kd-tree algorithm for construction, a k-nearest neighbor algorithm for searching, and an appropriately set threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data, decreases memory consumption, and improves efficiency.
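
    A minimal sketch of the kd-tree screening described above, using a k-nearest-neighbour distance test with a simple mean-plus-sigma threshold; the paper's exact threshold rule is not reproduced:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def remove_gross_errors(points, k=8, n_sigma=3.0):
        """Flag points whose mean distance to their k nearest neighbours is far
        above the global average (a common kd-tree outlier test)."""
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=k + 1)       # first neighbour is the point itself
        mean_knn = dists[:, 1:].mean(axis=1)
        keep = mean_knn < mean_knn.mean() + n_sigma * mean_knn.std()
        return points[keep], points[~keep]

    pts = np.vstack([np.random.rand(1000, 3), [[10.0, 10.0, 10.0]]])   # one obvious outlier
    clean, outliers = remove_gross_errors(pts)
    print(len(clean), len(outliers))
    ```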

  7. A Real-Time Thermal Self-Elimination Method for Static Mode Operated Freestanding Piezoresistive Microcantilever-Based Biosensors.

    PubMed

    Ku, Yu-Fu; Huang, Long-Sun; Yen, Yi-Kuang

    2018-02-28

    Here, we provide a method and apparatus for real-time compensation of the thermal effect of single free-standing piezoresistive microcantilever-based biosensors. The sensor chip contained an on-chip fixed piezoresistor that served as a temperature sensor, and a multilayer microcantilever with an embedded piezoresistor served as a biomolecular sensor. This method employed the calibrated relationship between the resistance and the temperature of piezoresistors to eliminate the thermal effect on the sensor, including the temperature coefficient of resistance (TCR) and bimorph effect. From experimental results, the method was verified to reduce the signal of thermal effect from 25.6 μV/°C to 0.3 μV/°C, which was approximately two orders of magnitude less than that before the processing of the thermal elimination method. Furthermore, the proposed approach and system successfully demonstrated its effective real-time thermal self-elimination on biomolecular detection without any thermostat device to control the environmental temperature. This method realizes the miniaturization of an overall measurement system of the sensor, which can be used to develop portable medical devices and microarray analysis platforms.

  8. Bubble Entropy: An Entropy Almost Free of Parameters.

    PubMed

    Manis, George; Aktaruzzaman, Md; Sassi, Roberto

    2017-11-01

    Objective: A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy. Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m. In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
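
    A simplified sketch of the swap-counting idea: embed the series, count the bubble-sort swaps needed to order each embedded vector, and take the entropy of the swap-count distribution (the published definition combines dimensions m and m+1 with a Renyi entropy and a normalization factor, which is omitted here):

    ```python
    import numpy as np

    def bubble_sort_swaps(v):
        """Number of swaps bubble sort needs to order v (i.e. the number of inversions)."""
        v = list(v)
        swaps = 0
        for i in range(len(v)):
            for j in range(len(v) - 1 - i):
                if v[j] > v[j + 1]:
                    v[j], v[j + 1] = v[j + 1], v[j]
                    swaps += 1
        return swaps

    def swap_entropy(signal, m=10):
        """Entropy of the swap-count distribution over all embedded vectors of length m."""
        counts = np.array([bubble_sort_swaps(signal[i:i + m])
                           for i in range(len(signal) - m + 1)])
        p = np.bincount(counts, minlength=m * (m - 1) // 2 + 1) / len(counts)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    rr = np.random.default_rng(1).normal(0.8, 0.05, 2000)   # synthetic RR-interval series
    print(swap_entropy(rr, m=10))
    ```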

  9. Seamless image stitching by homography refinement and structure deformation using optimal seam pair detection

    NASA Astrophysics Data System (ADS)

    Lee, Daeho; Lee, Seohyung

    2017-11-01

    We propose an image stitching method that can remove ghost effects and realign the structure misalignments that occur in common image stitching methods. To reduce the artifacts caused by different parallaxes, an optimal seam pair is selected by comparing the cross correlations from multiple seams detected by variable cost weights. Along the optimal seam pair, a histogram of oriented gradients is calculated, and feature points for matching are detected. The homography is refined using the matching points, and the remaining misalignment is eliminated using the propagation of deformation vectors calculated from matching points. In multiband blending, the overlapping regions are determined from a distance between the matching points to remove overlapping artifacts. The experimental results show that the proposed method more robustly eliminates misalignments and overlapping artifacts than the existing method that uses single seam detection and gradient features.

  10. A method to eliminate the influence of incident light variations in spectral analysis

    NASA Astrophysics Data System (ADS)

    Luo, Yongshun; Li, Gang; Fu, Zhigang; Guan, Yang; Zhang, Shengzhao; Lin, Ling

    2018-06-01

    The intensity of the light source and consistency of the spectrum are the most important factors influencing the accuracy in quantitative spectrometric analysis. An efficient "measuring in layer" method was proposed in this paper to limit the influence of inconsistencies in the intensity and spectrum of the light source. In order to verify the effectiveness of this method, a light source with a variable intensity and spectrum was designed according to Planck's law and Wien's displacement law. Intra-lipid samples with 12 different concentrations were prepared and divided into modeling sets and prediction sets according to different incident lights and solution concentrations. The spectra of each sample were measured with five different light intensities. The experimental results showed that the proposed method was effective in eliminating the influence caused by incident light changes and was more effective than normalized processing.

  11. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-01

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information about the analyte of interest from a fluorescence emission-excitation matrix containing unknown components. Large-amplitude scattering will influence the results of parallel factor analysis. Many methods of eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed; the combination refers to both the combination of results and the combination of methods. Nine methods were used for comparison. The results show that combining the results yields a better concentration prediction for all the components.

  12. Inhomogeneity compensation for MR brain image segmentation using a multi-stage FCM-based approach.

    PubMed

    Szilágyi, László; Szilágyi, Sándor M; Dávid, László; Benyó, Zoltán

    2008-01-01

    Intensity inhomogeneity or intensity non-uniformity (INU) is an undesired phenomenon that represents the main obstacle for MR image segmentation and registration methods. Various techniques have been proposed to eliminate or compensate for the INU, most of which are embedded into clustering algorithms. This paper proposes a multiple-stage fuzzy c-means (FCM) based algorithm for the estimation and compensation of slowly varying additive or multiplicative noise, supported by a pre-filtering technique for Gaussian and impulse noise elimination. The slowly varying behavior of the bias or gain field is assured by a smoothing filter that performs context-dependent averaging based on a morphological criterion. Experiments using 2-D synthetic phantoms and real MR images show that the proposed method provides accurate segmentation. The produced segmentation and fuzzy membership values can serve as excellent support for 3-D registration and segmentation techniques.

  13. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.

  14. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.

  15. Ekistics, the Science of Human Settlements

    ERIC Educational Resources Information Center

    Doxiadis, Constantinos A.

    1970-01-01

    Presents the science of ekistics for systematic analysis of historical, contemporary, and proposed human settlements varying from individual to Ecumenopolis size. Five principles are described as applicable to all cities and applied to growing urban developments. Isolation of Dimensions and Elimination of Alternatives (IDEA method) is proposed for…

  16. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation and robust to noise and illumination changes. In this paper, a novel method for face recognition based on SIFT is proposed, which combines an optimization of SIFT, mutual matching and Progressive Sample Consensus (PROSAC), and can effectively eliminate the false matches in face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
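
    A hedged sketch of the matching pipeline: SIFT features, mutual (cross-check) matching, then geometric outlier rejection; plain RANSAC is used below as a readily available stand-in for PROSAC, and the image file names are hypothetical:

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("face_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
    img2 = cv2.imread("face_b.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Mutual (cross-check) matching removes one class of false matches outright.
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = bf.match(des1, des2)

    # Geometric verification removes the remaining outliers.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    good = [m for m, ok in zip(matches, mask.ravel()) if ok]
    print(f"{len(good)} matches kept out of {len(matches)}")
    ```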

  17. Elimination of leukemic cells from human transplants by laser nano-thermolysis

    NASA Astrophysics Data System (ADS)

    Lapotko, Dmitri; Lukianova, Ekaterina; Potapnev, Michail; Aleinikova, Olga; Oraevsky, Alexander

    2006-02-01

    We describe a novel ex vivo method for the elimination of tumor cells from bone marrow and blood, Laser Activated Nano-Thermolysis for Cell Elimination Technology (LANTCET), and propose this method for purging of transplants during treatment of leukemia. Human leukemic cells derived from patients with different diagnoses (acute lymphoblastic leukemias) were selectively damaged by LANTCET through laser-induced micro-bubbles that emerge inside individual specifically targeted cells around clusters of light-absorbing gold nanoparticles. Pretreatment of the transplants with diagnosis-specific primary monoclonal antibodies and gold nanoparticles allowed the formation of nanoparticle clusters inside leukemic cells only. Electron microscopy confirmed the nanoparticle clusters inside the cells. Total (99.9%) elimination of leukemic cells targeted with specific antibodies and nanoparticles was achieved with single 10-ns laser pulses with an optical fluence of 0.2 - 1.0 J/cm2 at a wavelength of 532 nm, without significant damage to normal bone marrow cells in the same transplant. All cells were examined for damage/viability with several control methods after irradiation by the laser pulses. The presented results prove the potential applicability of the developed LANTCET technology for efficient and safe purging (cleaning of residual tumor cells) of human bone marrow and blood transplants. A design for an extracorporeal system is proposed that can process the transplant for one patient in less than an hour with parallel detection and counting of residual leukemic cells.

  18. A simple method to eliminate shielding currents for magnetization perpendicular to superconducting tapes wound into coils

    NASA Astrophysics Data System (ADS)

    Kajikawa, Kazuhiro; Funaki, Kazuo

    2011-12-01

    Application of an external AC magnetic field parallel to superconducting tapes helps in eliminating the magnetization caused by the shielding current induced in the flat faces of the tapes. This method helps in realizing a magnet system with high-temperature superconducting tapes for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR) applications. The effectiveness of the proposed method is validated by numerical calculations carried out using the finite-element method and experiments performed using a commercially available superconducting tape. The field uniformity for a single-layer solenoid coil after the application of an AC field is also estimated by a theoretical consideration.

  19. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    PubMed Central

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197

  20. A Real-Time Thermal Self-Elimination Method for Static Mode Operated Freestanding Piezoresistive Microcantilever-Based Biosensors

    PubMed Central

    Ku, Yu-Fu; Huang, Long-Sun

    2018-01-01

    Here, we provide a method and apparatus for real-time compensation of the thermal effect of single free-standing piezoresistive microcantilever-based biosensors. The sensor chip contained an on-chip fixed piezoresistor that served as a temperature sensor, and a multilayer microcantilever with an embedded piezoresistor served as a biomolecular sensor. This method employed the calibrated relationship between the resistance and the temperature of piezoresistors to eliminate the thermal effect on the sensor, including the temperature coefficient of resistance (TCR) and bimorph effect. From experimental results, the method was verified to reduce the signal of thermal effect from 25.6 μV/°C to 0.3 μV/°C, which was approximately two orders of magnitude less than that before the processing of the thermal elimination method. Furthermore, the proposed approach and system successfully demonstrated its effective real-time thermal self-elimination on biomolecular detection without any thermostat device to control the environmental temperature. This method realizes the miniaturization of an overall measurement system of the sensor, which can be used to develop portable medical devices and microarray analysis platforms. PMID:29495574

  1. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data.

    PubMed

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-05

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information about the analyte of interest from a fluorescence emission-excitation matrix containing unknown components. Large-amplitude scattering will influence the results of parallel factor analysis. Many methods of eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed; the combination refers to both the combination of results and the combination of methods. Nine methods were used for comparison. The results show that combining the results yields a better concentration prediction for all the components.

  2. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    PubMed

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast yet accurate algorithm for numerical modeling of light fields in a turbid medium slab. The numerical solution of the radiative transfer equation (RTE) requires its discretization, based on the elimination of the anisotropic part of the solution and the replacement of the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part of the solution determines the high convergence of the algorithm in the mean square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. A significant increase in the solution accuracy with the use of synthetic iterations allows applying the two-stream approximation for the determination of the regular part. This approach permits generalizing the proposed method to the case of an arbitrary 3D geometry of the medium.

  3. Broadband photonic transport between waveguides by adiabatic elimination

    NASA Astrophysics Data System (ADS)

    Oukraou, Hassan; Coda, Virginie; Rangelov, Andon A.; Montemezzani, Germano

    2018-02-01

    We propose an adiabatic method for the robust transfer of light between the two outer waveguides in a three-waveguide directional coupler. Unlike the established technique inherited from stimulated Raman adiabatic passage (STIRAP), the method proposed here is symmetric with respect to an exchange of the left and right waveguides in the structure and permits the transfer in both directions. The technique uses the adiabatic elimination of the middle waveguide together with level crossing and adiabatic passage in an effective two-state system involving only the external waveguides. It requires a strong detuning between the outer and the middle waveguide and does not rely on the adiabatic transfer state (dark state) underlying the STIRAP process. The suggested technique is generalized to an array of N waveguides and verified by numerical beam propagation calculations.

  4. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    NASA Astrophysics Data System (ADS)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements for the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term, using a null space of the coefficient matrix, is also described. In a three dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.

  5. Calibration-independent measurement of complex permittivity of liquids using a coaxial transmission line

    NASA Astrophysics Data System (ADS)

    Guoxin, Cheng

    2015-01-01

    In recent years, several calibration-independent transmission/reflection methods have been developed to determine the complex permittivity of liquid materials. However, these methods suffer from their own respective drawbacks, such as the requirement of multiple measurement cells or the presence of air-gap effects. To eliminate these drawbacks, a fast calibration-independent method is proposed in this paper. The present method has two main advantages over those in the literature. First, only one measurement cell is required; the cell is measured when it is empty and when it is filled with liquid. This avoids the air-gap effect of the approach in which a structure with two reference ports connected to each other must also be measured. Second, it eliminates the effects of uncalibrated coaxial cables, adaptors, and plug sections; systematic errors caused by the experimental setup are avoided by wave cascading matrix manipulations. Using this method, three dielectric reference liquids, i.e., ethanol, ethanediol, and pure water, and a low-loss transformer oil are measured over a wide frequency range to validate the proposed method. The accuracy is assessed by comparing the results with those obtained from other well-known techniques. It is demonstrated that the proposed method can be used as a robust approach for fast complex permittivity determination of liquid materials.

  6. Eliminating the influence of source spectrum of white light scanning interferometry through time-delay estimation algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu

    2017-05-01

    In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related to the shape of the interference signal envelope (ISE), which is mainly determined by the spectral distribution of the illumination source. For a broadband light source with a Gaussian spectral distribution, the corresponding ISE is symmetric, so the accurate ZOPD position can be located easily. However, if the spectral distribution of the source is irregular, the ISE becomes asymmetric or develops a complex multi-peak distribution, and WLSI cannot work well with the ZOPD position locating method. To address this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile information is obtained from the relative displacement of the interference signal between different pixels instead of from the conventional ZOPD position locating method. Because all spectral information of the interference signal (envelope and phase) is utilized, the proposed method not only offers high accuracy but also achieves accurate profile measurement in cases where the ISE is irregular and the ZOPD position locating method fails. That is to say, the proposed method effectively eliminates the influence of the source spectrum.
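
    The abstract does not spell out the TDE algorithm itself; a common way to estimate the relative scan-axis displacement between the interference signals of two pixels is cross-correlation with sub-sample peak interpolation, sketched below. The function name, the parabolic interpolation step, and the use of plain cross-correlation instead of the authors' full spectral TDE are assumptions.

```python
import numpy as np

def relative_delay(sig_ref, sig_px, dz):
    """Estimate the scan-axis displacement between two 1-D interference
    signals as the lag (in units of dz) at which their cross-correlation
    peaks, refined by parabolic interpolation."""
    a = sig_ref - sig_ref.mean()
    b = sig_px - sig_px.mean()
    corr = np.correlate(b, a, mode="full")        # lags span -(N-1) .. +(N-1)
    k = int(np.argmax(corr))
    lag = k - (len(a) - 1)
    # parabolic interpolation around the peak for sub-sample resolution
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            lag += 0.5 * (y0 - y2) / denom
    return lag * dz
```

    The per-pixel delays, referenced to one chosen pixel, then map directly to relative surface heights without ever locating a ZOPD position.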

  7. Safety in the Chemical Laboratory--Chemical Management: A Method for Waste Reduction.

    ERIC Educational Resources Information Center

    Pine, Stanley H.

    1984-01-01

    Discusses methods for reducing or eliminating waste disposal problems in the chemistry laboratory, considering both economic and environmental aspects of the problems. Proposes inventory control, shared use, solvent recycling, zero effluent, and various means of disposing of chemicals. (JM)

  8. 77 FR 16309 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-20

    ... Proposed Rule Change To Eliminate the 100MB Connectivity Option and Fee March 14, 2012. Pursuant to Section... Proposed Rule Change The Exchange proposes to eliminate 100MB connectivity between the Exchange and co... Basis for, the Proposed Rule Change 1. Purpose The Exchange proposes to modify Rule 7034(b) to eliminate...

  9. Generalized Factorial Moments

    NASA Astrophysics Data System (ADS)

    Bialas, A.

    2004-02-01

    It is shown that the method of eliminating the statistical fluctuations from event-by-event analysis proposed recently by Fu and Liu can be rewritten in a compact form involving the generalized factorial moments.

  10. 78 FR 38755 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ... Proposed Rule Change To Eliminate an Erroneous Reference to the Retired Automatic Quotation Refresh...'s Statement of the Terms of Substance of the Proposed Rule Change The Exchange proposes to eliminate...,'' which references the AQR functionality that was retired. Accordingly, NASDAQ is proposing to eliminate...

  11. 78 FR 2306 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ... Proposed Rule Change Eliminating Certain Credits Within the New York Stock Exchange LLC Price List January... eliminate certain credits within its Price List, which the Exchange proposes to become operative on January..., the Proposed Rule Change 1. Purpose The Exchange proposes to eliminate certain credits within its...

  12. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise prevents them from obtaining accurate results. We therefore propose a method for denoising registered multiview point clouds with high noise. The proposed method aims to fully use redundant information to eliminate the interference among point clouds of different views through an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted-average targets, depending on which of two cases applies. Simulated data and practical data captured by a Kinect v2 sensor were tested qualitatively and quantitatively in experiments. The results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.

  13. Optical Profilometers Using Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy

    2006-01-01

    A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could thus be reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.

  14. 78 FR 20967 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-08

    ... Proposed Rule Change To Eliminate a Fee for Use of FIX and OUCH Trading Ports for Testing April 2, 2013... of the Proposed Rule Change NASDAQ proposes to eliminate fees under Rules 7015(b) and (g), which are..., the Proposed Rule Change 1. Purpose NASDAQ is proposing to amend Rules 7015(b) and (g) to eliminate...

  15. 75 FR 70328 - Self-Regulatory Organizations; Fixed Income Clearing Corporation; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-17

    ... Eliminate Certain Cash Adjustments Currently Processed by the MBSD November 10, 2010. Pursuant to Section 19... Change The purpose of the proposed rule change is to eliminate cash adjustments that are currently... Purpose of, and Statutory Basis for, the Proposed Rule Change FICC is proposing to eliminate the cash...

  16. 75 FR 10541 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-08

    ... Proposed Rule Change To Eliminate Erroneous Citations From Rule 9557 March 1, 2010. Pursuant to Section 19... Change Nasdaq is proposing to eliminate erroneous citations found under Rule 9557. The text of the... Purpose of, and Statutory Basis for, the Proposed Rule Change 1. Purpose Nasdaq is proposing to eliminate...

  17. 78 FR 52589 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-23

    ... Change To Eliminate EDGX Rule 13.4 August 19, 2013. Pursuant to Section 19(b)(1) of the Securities... the Proposed Rule Change The Exchange proposes to eliminate Rule 13.4, ``Assigning of Registered... Proposed Rule Change 1. Purpose The Exchange proposes to eliminate Rule 13.4, ``Assigning of Registered...

  18. 78 FR 52596 - Self-Regulatory Organizations; EDGA Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-23

    ... Change To Eliminate EDGA Rule 13.4 August 19, 2013. Pursuant to Section 19(b)(1) of the Securities... the Proposed Rule Change The Exchange proposes to eliminate Rule 13.4, ``Assigning of Registered... Proposed Rule Change 1. Purpose The Exchange proposes to eliminate Rule 13.4, ``Assigning of Registered...

  19. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    PubMed

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

    In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated from the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
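
    As a rough illustration of the gray-value compensation idea, a minimal NumPy sketch is given below. The helper names and the exact definition of the estimated offsets (deviations of the row/column means from the global mean, which corresponds to the add/subtract convention above up to the sign of the estimates) are assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_fpn(calib_stack):
    """Estimate row and column FPN from a stack of flat-field frames.

    calib_stack : array of shape (n_frames, rows, cols) captured under
                  uniform illumination.
    Returns (rfpn, cfpn): deviations of the row/column means from the
    global mean gray level.
    """
    mean_frame = calib_stack.mean(axis=0)
    row_mean = mean_frame.mean(axis=1)        # one value per row
    col_mean = mean_frame.mean(axis=0)        # one value per column
    level = mean_frame.mean()
    return row_mean - level, col_mean - level

def correct_fpn(image, rfpn, cfpn):
    """Remove the estimated row/column offsets from an image."""
    out = image.astype(np.float64)
    out -= rfpn[:, np.newaxis]                # row-wise compensation
    out -= cfpn[np.newaxis, :]                # column-wise compensation
    return out
```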

  20. An efficient background modeling approach based on vehicle detection

    NASA Astrophysics Data System (ADS)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The Gaussian Mixture Model (GMM) widely used in vehicle detection is inefficient at detecting the foreground during the modeling phase, because it needs a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C and D), where the areas are determined by the frequency and scale of vehicle access. For each area, a different learning rate for the weight, mean and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is changed adaptively to decrease the total number of distributions and save memory effectively. With this method, a different threshold value and a different number of Gaussian distributions are adopted for each area. The results show that the learning speed and model accuracy of the proposed algorithm surpass those of the traditional GMM. By roughly the 50th frame, interference from vehicles has essentially been eliminated, the number of model distributions is only 35% to 43% of the standard GMM, and the per-frame processing speed is approximately 20% faster than the standard. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection, can promote the development of intelligent transportation, and is also meaningful for other background modeling methods.
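
    A minimal sketch of the per-area idea is given below, using OpenCV's stock MOG2 background subtractor as a stand-in for the paper's improved GMM; the region layout, learning rates, and threshold values are purely hypothetical.

```python
import cv2
import numpy as np

# Hypothetical region layout for a 640x480 frame:
# (y0, y1, x0, x1, learning_rate) per area.
AREAS = {
    "A": (0, 240, 0, 320, 0.05),     # frequent vehicle access: learn fast
    "B": (0, 240, 320, 640, 0.02),
    "C": (240, 480, 0, 320, 0.01),
    "D": (240, 480, 320, 640, 0.005),
}

# One MOG2 model per area so thresholds and learning rates can differ.
subtractors = {
    name: cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                              detectShadows=True)
    for name in AREAS
}

def foreground_mask(frame):
    """Build a full-frame foreground mask from per-area background models."""
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for name, (y0, y1, x0, x1, lr) in AREAS.items():
        roi = frame[y0:y1, x0:x1]
        mask[y0:y1, x0:x1] = subtractors[name].apply(roi, learningRate=lr)
    return mask
```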

  1. Analysis of collapse in flattening a micro-grooved heat pipe by lateral compression

    NASA Astrophysics Data System (ADS)

    Li, Yong; He, Ting; Zeng, Zhixin

    2012-11-01

    The collapse of thin-walled micro-grooved heat pipes is a common phenomenon in the tube-flattening process, and it seriously degrades the heat transfer performance and appearance of the heat pipe. At present there is no good method for solving this problem. A new method of heating the heat pipe is proposed to eliminate the collapse during the flattening process. The effectiveness of the proposed method is investigated through a theoretical model, a finite element (FE) analysis, and experiments. First, a theoretical model based on a deformation model of six plastic hinges and the Antoine equation of the working fluid is established to analyze the collapse of the thin walls at different temperatures. Then, FE simulations and experiments of the flattening process at different temperatures are carried out and compared with the theoretical model. Finally, the FE model is used to study the loads on the plates at different temperatures and heights of the flattened heat pipes. The results of the theoretical model agree with those of the FE simulation and experiments in the flattened zone. Collapse occurs at room temperature; as the temperature increases, the collapse decreases and finally disappears at approximately 130 °C for various heights of flattened heat pipes. The loads on the moving plate increase as the temperature increases. Thus, a reasonable temperature for eliminating the collapse while limiting the load is approximately 130 °C. The advantage of the proposed method is that the collapse is reduced or eliminated by means of the thermal deformation characteristic of the heat pipe itself instead of by external support. As a result, the heat transfer efficiency of the heat pipe is raised.

  2. 77 FR 1758 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-11

    ... Amending its Price List To Eliminate the Clerk Badge Fee and the e-Broker Hand Held Device Fee January 5... of the Proposed Rule Change The Exchange proposes to amend its Price List to eliminate the Clerk... Proposed Rule Change 1. Purpose The Exchange proposes to amend its Price List to eliminate the Clerk Badge...

  3. An islanding detection methodology combining decision trees and Sandia frequency shift for inverter-based distributed generations

    DOE PAGES

    Azim, Riyasat; Li, Fangxing; Xue, Yaosuo; ...

    2017-07-14

    Distributed generations (DGs) for grid-connected applications require an accurate and reliable islanding detection methodology (IDM) for secure system operation. This paper presents an IDM for grid-connected inverter-based DGs. The proposed method is a combination of passive and active islanding detection techniques for aggregation of their advantages and elimination/minimisation of the drawbacks. In the proposed IDM, the passive method utilises critical system attributes extracted from local voltage measurements at target DG locations as well as employs decision tree-based classifiers for characterisation and detection of islanding events. The active method is based on Sandia frequency shift technique and is initiated only when the passive method is unable to differentiate islanding events from other system events. Thus, the power quality degradation introduced into the system by active islanding detection techniques can be minimised. Furthermore, a combination of active and passive techniques allows detection of islanding events under low power mismatch scenarios eliminating the disadvantage associated with the use of passive techniques alone. Finally, detailed case study results demonstrate the effectiveness of the proposed method in detection of islanding events under various power mismatch scenarios, load quality factors and in the presence of single or multiple grid-connected inverter-based DG units.

  4. An islanding detection methodology combining decision trees and Sandia frequency shift for inverter-based distributed generations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azim, Riyasat; Li, Fangxing; Xue, Yaosuo

    Distributed generations (DGs) for grid-connected applications require an accurate and reliable islanding detection methodology (IDM) for secure system operation. This paper presents an IDM for grid-connected inverter-based DGs. The proposed method is a combination of passive and active islanding detection techniques for aggregation of their advantages and elimination/minimisation of the drawbacks. In the proposed IDM, the passive method utilises critical system attributes extracted from local voltage measurements at target DG locations as well as employs decision tree-based classifiers for characterisation and detection of islanding events. The active method is based on Sandia frequency shift technique and is initiated only when the passive method is unable to differentiate islanding events from other system events. Thus, the power quality degradation introduced into the system by active islanding detection techniques can be minimised. Furthermore, a combination of active and passive techniques allows detection of islanding events under low power mismatch scenarios eliminating the disadvantage associated with the use of passive techniques alone. Finally, detailed case study results demonstrate the effectiveness of the proposed method in detection of islanding events under various power mismatch scenarios, load quality factors and in the presence of single or multiple grid-connected inverter-based DG units.

  5. Zero-point energy constraint in quasi-classical trajectory calculations.

    PubMed

    Xie, Zhen; Bowman, Joel M

    2006-04-27

    A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.

  6. 78 FR 12108 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... rule proposal methods. The FOCUS Report was designed to eliminate the overlapping regulatory reports..., Washington, DC 20549-0213. Extension: Rule 17a-5; SEC File No. 270-155, OMB Control No. 3235-0123. Notice is... information provided for in Rule 17a-5 (17 CFR 240.17a- 5), under the Securities Exchange Act of 1934 (15 U.S...

  7. Improved transformer-winding method

    NASA Technical Reports Server (NTRS)

    Mclyman, W. T.

    1978-01-01

    Proposed technique using a special bobbin and fixture to wind copper wire directly on the core eliminates the need to cut the core prior to assembly. Application of the technique could result in production of a quieter core with increased permeability and no localized heating.

  8. Study of the Algorithm of Backtracking Decoupling and Adaptive Extended Kalman Filter Based on the Quaternion Expanded to the State Variable for Underwater Glider Navigation

    PubMed Central

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-01-01

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading, pitch and roll) becomes more serious when pitch or roll motion occurs, and this cross-coupling makes the attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination is a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smooths the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, pitch and roll motions are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments are carried out. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing errors compared with the conventional method. PMID:25479331

  9. Study of the algorithm of backtracking decoupling and adaptive extended Kalman filter based on the quaternion expanded to the state variable for underwater glider navigation.

    PubMed

    Huang, Haoqian; Chen, Xiyuan; Zhou, Zhikai; Xu, Yuan; Lv, Caiping

    2014-12-03

    High accuracy attitude and position determination is very important for underwater gliders. The cross-coupling among the three attitude angles (heading, pitch and roll) becomes more serious when pitch or roll motion occurs, and this cross-coupling makes the attitude angles inaccurate or even erroneous. Therefore, high accuracy attitude and position determination is a difficult problem for a practical underwater glider. To solve this problem, this paper proposes backtracking decoupling and an adaptive extended Kalman filter (EKF) based on the quaternion expanded to the state variable (BD-AEKF). The backtracking decoupling can effectively eliminate the cross-coupling among the three attitudes when pitch or roll motion occurs. After decoupling, the adaptive extended Kalman filter (AEKF) based on the quaternion expanded to the state variable further smooths the filtering output to improve the accuracy and stability of attitude and position determination. In order to evaluate the performance of the proposed BD-AEKF method, pitch and roll motions are simulated and the performance of the proposed method is analyzed and compared with the traditional method. Simulation results demonstrate that the proposed BD-AEKF performs better. Furthermore, for further verification, a new underwater navigation system is designed, and three-axis non-magnetic turntable experiments and vehicle experiments are carried out. The results show that the proposed BD-AEKF is effective in eliminating cross-coupling and reducing errors compared with the conventional method.

  10. 77 FR 60489 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-03

    ... Effectiveness of a Proposed Rule To Eliminate Position and Exercise Limits for Physically- Settled SPY Options... Proposed Rule Change CBOE proposes to amend its rules to eliminate position and exercise limits for... eliminate position and exercise limits for physically-settled SPY options pursuant to a pilot program.\\5...

  11. 77 FR 37722 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... Rule Change To Eliminate the Rules and Fees Related to the Second Market June 18, 2012. Pursuant to... Statement of the Terms of Substance of the Proposed Rule Change The Exchange proposes to eliminate the rules... Second Market. Accordingly, ISE proposes to eliminate the Second Market structure altogether and...

  12. Investigation of a novel common subexpression elimination method for low power and area efficient DCT architecture.

    PubMed

    Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H

    2014-01-01

    A wide interest has been observed in finding a low power and area efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. The architecture needs a single-port memory to feed the inputs instead of multiport memory, which reduces the hardware cost and area. Analysis of the experimental results and performance comparisons shows that the proposed scheme uses minimal logic, utilizing a mere 340 slices and 22 adders. Moreover, the design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, the proposed technique has significant advantages over recent well-known methods, along with comparable accuracy, in terms of power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively.
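
    The multiplier-less constant multiplication rests on CSD recoding, which can be illustrated with a short generic sketch; this is not the authors' hardware design, and the routine and the example constant are illustrative only.

```python
def to_csd(n):
    """Canonical signed digit recoding of a non-negative integer.

    Returns digits in {-1, 0, 1}, least significant first, with no two
    adjacent non-zero digits; sum(d * 2**i) == n.
    """
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n % 4)      # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
        else:
            d = 0
        digits.append(d)
        n = (n - d) >> 1
    return digits

# Example: 23 = 10111b needs four shifted terms in plain binary, but its CSD
# form 32 - 8 - 1 needs only three, so x*23 = (x << 5) - (x << 3) - x.
assert sum(d << i for i, d in enumerate(to_csd(23))) == 23
```

    Fewer non-zero digits means fewer adders per constant multiplication, and CSE then shares the partial sums that recur across the DCT coefficient set.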

  13. Investigation of a Novel Common Subexpression Elimination Method for Low Power and Area Efficient DCT Architecture

    PubMed Central

    Siddiqui, M. F.; Reza, A. W.; Kanesan, J.; Ramiah, H.

    2014-01-01

    A wide interest has been observed in finding a low power and area efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. The architecture needs a single-port memory to feed the inputs instead of multiport memory, which reduces the hardware cost and area. Analysis of the experimental results and performance comparisons shows that the proposed scheme uses minimal logic, utilizing a mere 340 slices and 22 adders. Moreover, the design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, the proposed technique has significant advantages over recent well-known methods, along with comparable accuracy, in terms of power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively. PMID:25133249

  14. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many moving object detection methods have been proposed, moving object extraction remains at the core of video surveillance. In the complex scenes of the real world, however, false detections, missed detections, and cavities inside the detected body still occur. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference and Gaussian mixture background subtraction is proposed in this paper. To make the detection more complete and accurate, image repair and morphological processing techniques, which provide spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods, namely GMM, VIBE, frame difference, and a method from the literature, the proposed method improves the efficiency and accuracy of detection.
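
    A minimal sketch of fusing frame differencing with GMM background subtraction is given below, using OpenCV's MOG2 model as a stand-in; the OR-fusion, threshold values, and morphological closing are assumptions, and the paper's image-repair step is omitted.

```python
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
prev_gray = None

def detect_moving(frame, diff_thresh=25):
    """Combine frame differencing with GMM background subtraction."""
    global prev_gray
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # GMM foreground; threshold out the shadow label (MOG2 marks shadows as 127)
    fg = bg_model.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
    # Frame difference against the previous frame
    if prev_gray is None:
        prev_gray = gray
    diff = cv2.absdiff(gray, prev_gray)
    _, diff = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    prev_gray = gray
    # Fuse the two masks and fill small holes with morphological closing
    mask = cv2.bitwise_or(fg, diff)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)
    return mask
```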

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Guanghua, E-mail: yan@ufl.edu; Li, Jonathan; Huang, Yin

    Purpose: To propose a simple model to explain the origin of ghost markers in marker-based optical tracking systems (OTS) and to develop retrospective strategies to detect and eliminate ghost markers. Methods: In marker-based OTS, ghost markers are virtual markers created due to the cross-talk between the two camera sensors, which can lead to system execution failure or inaccuracy in patient tracking. As a result, the users have to limit the number of markers and avoid certain marker configurations to reduce the chances of ghost markers. In this work, the authors propose retrospective strategies to detect and eliminate ghost markers. The two camera sensors were treated as mathematical points in space. The authors identified the coplanar within limit (CWL) condition as the necessary condition for ghost marker occurrence. A simple ghost marker detection method was proposed based on the model. Ghost marker elimination was achieved through pattern matching: a ghost marker-free reference set was matched with the optical marker set observed by the OTS; unmatched optical markers were eliminated as either ghost markers or misplaced markers. The pattern matching problem was formulated as a constraint satisfaction problem (using pairwise distances as constraints) and solved with an iterative backtracking algorithm. Wildcard markers were introduced to address missing or misplaced markers. An experiment was designed to measure the sensor positions and the limit for the CWL condition. The ghost marker detection and elimination algorithms were verified with samples collected from a five-marker jig and a nine-marker anthropomorphic phantom, rotated with the treatment couch from −60° to +60°. The accuracy of the pattern matching algorithm was further validated with marker patterns from 40 patients who underwent stereotactic body radiotherapy (SBRT). For this purpose, a synthetic optical marker pattern was created for each patient by introducing ghost markers, marker position uncertainties, and marker displacement. Results: The sensor positions and the limit for the CWL condition were measured with excellent reproducibility (standard deviation ≤ 0.39 mm). The ghost marker detection algorithm had perfect detection accuracy for both the jig (1544 samples) and the anthropomorphic phantom (2045 samples). Pattern matching was successful for all samples from both phantoms as well as the 40 patient marker patterns. Conclusions: The authors proposed a simple model to explain the origin of ghost markers and identified the CWL condition as the necessary condition for ghost marker occurrence. The retrospective ghost marker detection and elimination algorithms guarantee complete ghost marker elimination while providing the users with maximum flexibility in selecting the number of markers and their configuration to meet their clinic needs.
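
    The pairwise-distance pattern matching can be illustrated with a compact backtracking sketch. This simplified illustration omits the wildcard markers that the paper uses for missing or misplaced markers, and the function names and tolerance value are assumptions.

```python
import numpy as np

def match_markers(reference, observed, tol=2.0):
    """Match reference markers to observed optical markers by requiring that
    all pairwise distances agree within `tol`; observed points left unmatched
    are flagged as ghost (or misplaced) markers.

    reference : (R, 3) array of ghost-free marker positions
    observed  : (O, 3) array reported by the tracking system
    """
    ref_d = np.linalg.norm(reference[:, None] - reference[None, :], axis=-1)
    obs_d = np.linalg.norm(observed[:, None] - observed[None, :], axis=-1)
    assignment = [-1] * len(reference)

    def backtrack(i, used):
        if i == len(reference):
            return True
        for j in range(len(observed)):
            if j in used:
                continue
            # constraint: distances to all previously assigned markers agree
            if all(abs(ref_d[i, k] - obs_d[j, assignment[k]]) <= tol
                   for k in range(i)):
                assignment[i] = j
                if backtrack(i + 1, used | {j}):
                    return True
                assignment[i] = -1
        return False

    if not backtrack(0, frozenset()):
        raise ValueError("no distance-consistent assignment found")
    ghosts = set(range(len(observed))) - set(assignment)
    return assignment, ghosts
```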

  16. Cluster-Based Multipolling Sequencing Algorithm for Collecting RFID Data in Wireless LANs

    NASA Astrophysics Data System (ADS)

    Choi, Woo-Yong; Chatterjee, Mainak

    2015-03-01

    With the growing use of RFID (Radio Frequency Identification), it is becoming important to devise ways to read RFID tags in real time. Access points (APs) of IEEE 802.11-based wireless Local Area Networks (LANs) are being integrated with RFID networks so that real-time RFID data can be collected efficiently. Several schemes, such as multipolling methods based on the dynamic search algorithm and random sequencing, have been proposed. However, as the number of RFID readers associated with an AP increases, it becomes difficult for the dynamic search algorithm to derive the multipolling sequence in real time. Though multipolling methods can eliminate the polling overhead, the performance of the multipolling methods based on random sequencing still needs to be enhanced. To that end, we propose a real-time cluster-based multipolling sequencing algorithm that eliminates more than 90% of the polling overhead, particularly when the dynamic search algorithm fails to derive the multipolling sequence in real time.

  17. A Novel Clustering Method Curbing the Number of States in Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Kotani, Naoki; Nunobiki, Masayuki; Taniguchi, Kenji

    We propose an efficient state-space construction method for reinforcement learning. Our method controls the number of categories by improving the clustering method of Fuzzy ART, an autonomous state-space construction method. The proposed method represents the weight vector as the mean of the input vectors in order to curb the creation of new categories, and it eliminates categories whose state values are low in order to curb the total number of categories. As the state value is updated, the size of a category shrinks so that the policy is learned more precisely. We verified the effectiveness of the proposed method through simulations of a reaching problem for a two-link robot arm, and confirmed that the number of categories was reduced while the agent achieved the complex task quickly.

  18. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata

    PubMed Central

    Liu, Aiming; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-01-01

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain–computer interface systems. PMID:29117100

  19. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata.

    PubMed

    Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-11-08

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validity of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain-computer interface systems.

  20. 76 FR 76204 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-06

    ... Proposed Rule Change To Eliminate Exchange Direct Orders November 30, 2011. Pursuant to Section 19(b)(1) of... (``Commission'') a proposal for the NASDAQ Options Market (``NOM'') to eliminate Exchange Direct Orders... Direct Orders from its rules. The Exchange proposes to eliminate this order type, effective November 30...

  1. 75 FR 32525 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Order Approving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-08

    ... Relating to FINRA Rule 9554 To Eliminate Explicitly the Inability-To-Pay Defense in the Expedited... thereunder,\\2\\ a proposed rule change to FINRA Rule 9554 to eliminate explicitly the inability-to-pay defense... Proposed Rule Change FINRA proposed to amend FINRA Rule 9554 to eliminate explicitly the inability-to-pay...

  2. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method suffers errors when the fast Fourier transform (FFT) is used in camera modulation transfer function (MTF) measurement, because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). In order to resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations on noisy images at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that, for different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
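
    As an illustration of evaluating the spectrum of a nonuniformly sampled ESF, a brute-force nonuniform DFT (not the fast NUFFT used in the paper) can be sketched as follows; the quadrature weights, normalization, and function names are assumptions.

```python
import numpy as np

def lsf_from_esf(x, esf):
    """Differentiate a (possibly nonuniformly sampled) edge spread function."""
    lsf = np.gradient(esf, x)            # np.gradient handles unequal spacing
    return lsf / np.trapz(lsf, x)        # normalize so that MTF(0) = 1

def mtf_nudft(x, lsf, freqs):
    """Direct nonuniform DFT of the line spread function at the requested
    spatial frequencies (cycles per unit of x); the local sample spacing is
    used as the quadrature weight."""
    w = np.gradient(x)                               # nonuniform weights
    kernel = np.exp(-2j * np.pi * np.outer(freqs, x))
    return np.abs(kernel @ (lsf * w))
```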

  3. Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth

    NASA Astrophysics Data System (ADS)

    Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.

    2017-09-01

    With the rapid development of location-based services (LBS), demand for commercial indoor positioning has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor positioning, the complexity of the algorithms, and the cost of positioning are difficult to balance simultaneously, which still restricts the selection and application of a mainstream positioning technology. Therefore, this paper proposes a knowledge-based optimization method for indoor positioning based on low-energy Bluetooth. The main steps include: 1) establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a process of dynamic knowledge accumulation rather than a single positioning step. The scheme uses inexpensive equipment and provides a new idea for the theory and methods of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial deployment.

  4. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-04-14

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.

  5. An intelligent sales forecasting system through integration of artificial neural networks and fuzzy neural networks with fuzzy weight elimination.

    PubMed

    Kuo, R J; Wu, P; Wang, C P

    2002-09-01

    Sales forecasting plays a very prominent role in business strategy. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive moving average (ARMA) models. However, sales forecasting is complicated owing to the influence of internal and external environments. Recently, artificial neural networks (ANNs) have also been applied to sales forecasting because of their promising performance in the areas of control and pattern recognition. However, further improvement is still necessary, since unique circumstances, e.g. promotions, cause sudden changes in the sales pattern. Thus, this study utilizes a proposed fuzzy neural network (FNN), which is able to eliminate unimportant weights, to learn fuzzy IF-THEN rules obtained from marketing experts with respect to promotions. The result from the FNN is further integrated with the time series data through an ANN. Results on both simulated and real-world problems show that the FNN with weight elimination achieves lower training error than the regular FNN. In addition, the real-world results indicate that the proposed estimation system outperforms the conventional statistical method and a single ANN in accuracy.

  6. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

    There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and unpredictable, irregular changes in the measured object. It is therefore difficult to extract the blood glucose concentration information accurately from such complicated signals. A reference measurement is usually considered for eliminating the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drift and by background variations of the measured object. However, our studies indicate that the reference point changes with the measurement location and wavelength, so the effect of the floating reference method should be verified comprehensively. In this paper, for simplicity, a Monte Carlo simulation employing Intralipid solutions with concentrations of 5% and 10% is performed to verify the effectiveness of the floating reference method in eliminating the consequences of light source drift. The light source drift is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with the corresponding reference points at different wavelengths, in eliminating the variations caused by light source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method has an obvious effect in eliminating background changes.

  7. A walkthrough solution to the boundary overlap problem

    Treesearch

    Mark J. Ducey; Jeffrey H. Gove; Harry T. Valentine

    2004-01-01

    Existing methods for eliminating bias due to boundary overlap suffer some disadvantages in practical use, including the need to work outside the tract, restrictions on the kinds of boundaries to which they are applicable, and the possibility of significantly increased variance as a price for unbiasedness. We propose a new walkthrough method for reducing boundary...

  8. 77 FR 16288 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-20

    ... Eliminate the 100MB Connectivity Option and Fee March 14, 2012. Pursuant to Section 19(b)(1) of the... proposes to eliminate 100MB connectivity between the Exchange and co-located servers, as well as associated... Proposed Rule Change 1. Purpose The Exchange proposes to modify Rule 7034(b) to eliminate 100MB...

  9. 76 FR 6167 - Self-Regulatory Organizations; New York Stock Exchange LLC; Order Approving Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-03

    ... Amendment No. 1, in Connection With the Proposal of NYSE Euronext To Eliminate the Requirement of an 80... Bylaws to eliminate the requirement that the affirmative vote of the holders of not less than 80% of the... that the proposed rule change to amend the Corporation's Bylaws to eliminate the 80% supermajority...

  10. 77 FR 39547 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Designation of a Longer Period for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-03

    ... Proposed Rule Change Amending Commentary .07 to NYSE Amex Options Rule 904 To Eliminate Position Limits for... Act of 1934 (the ``Act'') \\2\\ and Rule 19b-4 thereunder,\\3\\ a proposed rule change to eliminate... side of the market. The proposal would amend Commentary .07 to NYSE Amex Options Rule 904 to eliminate...

  11. Near-Infrared Spectrum Detection of Wheat Gluten Protein Content Based on a Combined Filtering Method.

    PubMed

    Cai, Jian-Hua

    2017-09-01

    To eliminate the random error of the derivative near-infrared (NIR) spectrum and to improve the model stability and prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum, based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated from the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method. Experimental results show that the proposed combined method is effective for pretreatment of the NIR spectrum and improves the accuracy of detecting wheat gluten protein content from the NIR spectrum.
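
    A minimal sketch of combining EMD with wavelet soft thresholding is given below; the IMF selection, wavelet choice, and threshold rule are assumptions (the abstract does not specify them), and the PyEMD package is used only as a convenient stand-in for the EMD step.

```python
import numpy as np
import pywt
from PyEMD import EMD   # pip install EMD-signal

def denoise_spectrum(spectrum, wavelet="db4", level=4, drop_imfs=1):
    """Combined pretreatment sketch: decompose the spectrum with EMD, drop the
    highest-frequency IMF(s), then apply wavelet soft thresholding."""
    imfs = EMD().emd(np.asarray(spectrum, dtype=float))
    partial = imfs[drop_imfs:].sum(axis=0)          # discard the noisiest IMFs

    coeffs = pywt.wavedec(partial, wavelet, level=level)
    # universal threshold estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(partial)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(partial)]
```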

  12. Moving target detection method based on improved Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model: according to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively, and a morphological reconstruction method is adopted to eliminate shadows. Experiments prove that the proposed method not only has good robustness and detection performance, but also good adaptability; even in special cases where the gray level changes greatly, the proposed method still performs outstandingly.

  13. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

  14. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.

  15. A method exploiting direct communication between phasor measurement units for power system wide-area protection and control algorithms.

    PubMed

    Almas, Muhammad Shoaib; Vanfretti, Luigi

    2017-01-01

    Phasor Measurement Units (PMUs) provide the synchrophasor measurements that serve as the primary input for Wide-Area Monitoring, Protection and Control (WAMPAC) systems. PMUs stream synchrophasor measurements through the IEEE C37.118.2 protocol using TCP/IP or UDP/IP. The proposed method establishes direct communication between two PMUs, thus eliminating the requirement for an intermediate phasor data concentrator, data mediator and/or protocol parser, and thereby ensuring minimum communication latency apart from the communication link delays themselves. This method allows synchrophasor measurements to be used internally in a PMU to deploy custom protection and control algorithms. These algorithms are deployed using protection logic equations, which are supported by all PMU vendors. Moreover, this method reduces overall equipment cost, since the algorithms execute internally in a PMU and therefore do not require any additional controller for their deployment. The proposed method can be used for fast prototyping of wide-area measurement based protection and control applications. It is tested by coupling commercial PMUs as Hardware-in-the-Loop (HIL) with Opal-RT's eMEGAsim Real-Time Simulator (RTS). As an illustrative example, an anti-islanding protection application is deployed using the proposed method and its performance is assessed. The essential points of the method are:
    • Bypassing intermediate phasor data concentrators or protocol parsers, as the synchrophasors are communicated directly between the PMUs (minimizes communication delays).
    • The wide-area protection and control algorithm is deployed using logic equations in the client PMU, therefore eliminating the requirement for an external hardware controller (cost curtailment).
    • Effortless means to exploit PMU measurements in an environment familiar to protection engineers.

  16. 75 FR 51859 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-23

    ... Proposed Rule Change To Delete Rule 4770 in Its Entirety and To Eliminate a Related Reference From the... entirety from the NASDAQ rulebook and to also eliminate a reference to Rule 4770 from Rule 4751(f)(13). The... Proposed Rule Change 1. Purpose NASDAQ is proposing to eliminate Rule 4770 in its entirety. Rule 4770 sets...

  17. On optimal improvements of classical iterative schemes for Z-matrices

    NASA Astrophysics Data System (ADS)

    Noutsos, D.; Tzoumas, M.

    2006-04-01

    Many researchers have considered preconditioners, applied to linear systems whose coefficient matrix is a Z- or an M-matrix, that make the associated Jacobi and Gauss-Seidel methods converge asymptotically faster than the unpreconditioned ones. Such preconditioners are chosen so that they eliminate the off-diagonal elements of the same column or the elements of the first upper diagonal [Milaszewicz, LAA 93 (1987) 161-170; Gunawardena et al., LAA 154-156 (1991) 123-143]. In this work we generalize the previous preconditioners to obtain optimal methods. "Good" Jacobi and Gauss-Seidel algorithms are given, and preconditioners that eliminate more than one entry per row are also proposed and analyzed. Moreover, the behavior of the above preconditioners with Krylov subspace methods is studied.
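
    As an illustration of the kind of preconditioner discussed, the sketch below builds a first-upper-diagonal eliminating preconditioner P = I + S and applies plain Jacobi to the preconditioned system; the generalized "optimal" preconditioners proposed in the paper are not reproduced here, and the example matrix is illustrative only.

```python
import numpy as np

def first_diagonal_preconditioner(A):
    """Build P = I + S with s[i, i+1] = -A[i, i+1] / A[i+1, i+1],
    chosen so that the first upper diagonal of P @ A vanishes."""
    n = A.shape[0]
    P = np.eye(n)
    for i in range(n - 1):
        P[i, i + 1] = -A[i, i + 1] / A[i + 1, i + 1]
    return P

def jacobi(A, b, iters=200):
    """Plain Jacobi iteration x <- D^{-1} (b - (A - D) x)."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Small Z-matrix example: run Jacobi on the preconditioned system (PA) x = Pb.
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
P = first_diagonal_preconditioner(A)
x = jacobi(P @ A, P @ b)
```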

  18. Frequency-varying synchronous micro-vibration suppression for a MSFW with application of small-gain theorem

    NASA Astrophysics Data System (ADS)

    Peng, Cong; Fan, Yahong; Huang, Ziyuan; Han, Bangcheng; Fang, Jiancheng

    2017-01-01

    This paper presents a novel synchronous micro-vibration suppression method based on the small-gain theorem, aimed at reducing the frequency-varying synchronous micro-vibration forces of a magnetically suspended flywheel (MSFW). The proposed method not only eliminates the synchronous current fluctuations, forcing the rotor to spin around its inertia axis, but also accounts for the compensation caused by the displacement stiffness in the permanent-magnet (PM)-biased magnetic bearings. Moreover, the stability of the proposed control system is analyzed exactly by using the small-gain theorem. The effectiveness of the proposed micro-vibration suppression method is demonstrated via direct measurement of the disturbance forces of an MSFW. The main merit of the proposed method is that it provides a simple and practical way of suppressing frequency-varying micro-vibration forces while preserving the nominal performance of the baseline control system.

  19. Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.

    PubMed

    Hsu, Wei-Yen

    2013-12-01

    In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove the electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with pipelines that omit artifact elimination, select features with a genetic algorithm (GA), or classify with Fisher's linear discriminant (FLD), on MI data from two data sets for eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.

  20. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-01-01

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345

  1. A New Newton-Like Iterative Method for Roots of Analytic Functions

    ERIC Educational Resources Information Center

    Otolorin, Olayiwola

    2005-01-01

    A new Newton-like iterative formula for the solution of non-linear equations is proposed. To derive the formula, the convergence criteria of the one-parameter iteration formula, and also the quasilinearization in the derivation of Newton's formula are reviewed. The result is a new formula which eliminates the limitations of other methods. There is…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, T.; Hu, M.; Guo, Q.

    Here we report a study of printing electronics using an office-use laser printer. The proposed method eliminates the critical disadvantages of solvent-based printing techniques by taking advantage of electroless deposition and laser printing. The synthesized toner acts as a catalyst for the electroless copper deposition as well as an adhesion-promoting buffer layer between the substrate and the deposited copper. The easy metallization of printed patterns and strong metal-substrate adhesion make it an especially effective method for mass production of flexible printed circuits. The proposed process is a high-throughput, low-cost, efficient, and environmentally benign method for flexible electronics manufacturing.

  3. A simple transformation independent method for outlier definition.

    PubMed

    Johansen, Martin Berg; Christensen, Peter Astrup

    2018-04-10

    Definition and elimination of outliers is a key element for medical laboratories establishing or verifying reference intervals (RIs), especially as the inclusion of just a few outlying observations may seriously affect the determination of the reference limits. Many methods have been developed for the definition of outliers. Several of these methods are developed for the normal distribution, and data often require transformation before outlier elimination. We have developed a non-parametric, transformation-independent outlier definition. The new method relies on drawing reproducible histograms, using defined bin sizes above and below the median. The method is compared to the method recommended by CLSI/IFCC, which uses the Box-Cox transformation (BCT) and Tukey's fences for outlier definition. The comparison is done on eight simulated distributions and an indirect clinical dataset. The comparison on simulated distributions shows that, without outliers added, the recommended method in general defines fewer outliers. However, when outliers are added on one side, the proposed method often produces better results. With outliers on both sides the methods are equally good. Furthermore, it is found that the presence of outliers affects the BCT, and subsequently affects the limits determined by the currently recommended methods. This is especially seen in skewed distributions. The proposed outlier definition reproduced current RI limits on clinical data containing outliers. We find our simple transformation-independent outlier detection method as good as or better than the currently recommended methods.
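    For context, the currently recommended pipeline that serves as the comparison here (Box-Cox transformation followed by Tukey's fences) can be sketched as follows; the 1.5-IQR fence factor and the synthetic data are assumptions for illustration, and scipy's boxcox requires strictly positive values.

```python
# Sketch of the comparison method: Box-Cox transform, then Tukey's fences.
import numpy as np
from scipy import stats

def tukey_outliers_after_boxcox(values, k=1.5):
    """Return a boolean mask marking outliers (True = outlier)."""
    values = np.asarray(values, dtype=float)
    transformed, _lmbda = stats.boxcox(values)        # requires values > 0
    q1, q3 = np.percentile(transformed, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (transformed < lower) | (transformed > upper)

rng = np.random.default_rng(0)
data = np.concatenate([rng.lognormal(1.0, 0.3, 500), [60.0, 75.0]])  # two gross outliers
print(data[tukey_outliers_after_boxcox(data)])        # expected: the two added values
```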

  4. Recent proposals to limit Medigap coverage and modify Medicare cost sharing.

    PubMed

    Linehan, Kathryn

    2012-02-24

    As policymakers look for savings from the Medicare program, some have proposed eliminating or discouraging "first-dollar coverage" available through privately purchased Medigap policies. Medigap coverage, which beneficiaries obtain to protect themselves from Medicare's cost-sharing requirements and its lack of a cap on out-of-pocket spending, may discourage the judicious use of medical services by reducing or eliminating beneficiary cost sharing. It is estimated that eliminating such coverage, which has been shown to be associated with higher Medicare spending, and requiring some cost sharing would encourage beneficiaries to reduce their service use and thus reduce program spending. However, eliminating first-dollar coverage could cause some beneficiaries to incur higher spending or forego necessary services. Some policy proposals to eliminate first-dollar coverage would also modify Medicare's cost sharing and add an out-of-pocket spending cap for fee-for-service Medicare. This paper discusses Medicare's current cost-sharing requirements, Medigap insurance, and proposals to modify Medicare's cost sharing and eliminate first-dollar coverage in Medigap plans. It reviews the evidence on the effects of first-dollar coverage on spending, some objections to eliminating first-dollar coverage, and results of research that has modeled the impact of eliminating first-dollar coverage, modifying Medicare's cost-sharing requirements, and adding an out-of-pocket limit on beneficiaries' spending.

  5. 76 FR 11830 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... Effectiveness of Proposed Rule Change to Eliminate Duplicative Filings Under FINRA Rule 9610(a) February 25... the proposed change will make the process of seeking exemptive relief more efficient by eliminating... the efficiency of the exemptive relief process by eliminating duplicative filings and providing...

  6. A regional method for craniofacial reconstruction based on coordinate adjustments and a new fusion strategy.

    PubMed

    Deng, Qingqiong; Zhou, Mingquan; Wu, Zhongke; Shui, Wuyang; Ji, Yuan; Wang, Xingce; Liu, Ching Yiu Jessica; Huang, Youliang; Jiang, Haiyan

    2016-02-01

    Craniofacial reconstruction recreates a facial outlook from the cranium based on the relationship between the face and the skull to assist identification. However, craniofacial structures are very complex, and this relationship is not the same in different craniofacial regions. Several regional methods have recently been proposed: these methods segment the face and skull into regions, the relationship of each region is learned independently, and the facial regions estimated for a given skull are finally glued together to generate a face. Most of these regional methods use vertex coordinates to represent the regions, and they define a uniform coordinate system for all of the regions. Consequently, the inconsistency in the positions of regions between different individuals is not eliminated before learning the relationships between the face and skull regions, and this reduces the accuracy of the craniofacial reconstruction. In order to solve this problem, an improved regional method is proposed in this paper involving two types of coordinate adjustments. One is the global coordinate adjustment performed on the skulls and faces with the purpose of eliminating the inconsistency in the position and pose of the heads; the other is the local coordinate adjustment performed on the skull and face regions with the purpose of eliminating the inconsistency in the position of these regions. After these two coordinate adjustments, partial least squares regression (PLSR) is used to estimate the relationship between the face region and the skull region. In order to obtain a more accurate reconstruction, a new fusion strategy is also proposed in the paper to maintain the reconstructed feature regions when gluing the facial regions together. This is based on the observation that the feature regions usually have smaller reconstruction errors compared to the rest of the face. The results demonstrate that the coordinate adjustments and the new fusion strategy can significantly improve the craniofacial reconstructions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Ultrasensitive low noise voltage amplifier for spectral analysis.

    PubMed

    Giusi, G; Crupi, F; Pace, C

    2008-08-01

    Recently we have proposed several voltage noise measurement methods that allow, at least in principle, the complete elimination of the noise introduced by the measurement amplifier. The most severe drawback of these methods is that they require a multistep measurement procedure. Since environmental conditions may change in the different measurement steps, the final result could be affected by these changes. This problem is solved by the one-step voltage noise measurement methodology based on a novel amplifier topology proposed in this paper. Circuit implementations for the amplifier building blocks based on operational amplifiers are critically discussed. The proposed approach is validated through measurements performed on a prototype circuit.

  8. A deblocking algorithm based on color psychology for display quality enhancement

    NASA Astrophysics Data System (ADS)

    Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin

    2012-12-01

    This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of Sobel edge detector and wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects at different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm has better subjective and objective qualities for H.264/AVC reconstructed videos when compared to several existing methods.

  9. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  10. Algebraic solution for the forward displacement analysis of the general 6-6 stewart mechanism

    NASA Astrophysics Data System (ADS)

    Wei, Feng; Wei, Shimin; Zhang, Ying; Liao, Qizheng

    2016-01-01

    The solution for the forward displacement analysis (FDA) of the general 6-6 Stewart mechanism (i.e., the connection points of the moving and fixed platforms are not restricted to lying in a plane) has been extensively studied, but the efficiency of the solution remains to be effectively addressed. To this end, an algebraic elimination method is proposed for the FDA of the general 6-6 Stewart mechanism. The kinematic constraint equations are built using conformal geometric algebra (CGA). The kinematic constraint equations are transformed by a substitution of variables into seven equations with seven unknown variables. According to the characteristic of anti-symmetric matrices, the aforementioned seven equations can be further transformed into seven equations with four unknown variables by a substitution of variables using the Gröbner basis. Its elimination weight is increased through changing the degree of one variable, and sixteen equations with four unknown variables can be obtained using the Gröbner basis. A 40th-degree univariate polynomial equation is derived by constructing a relatively small-sized 9×9 Sylvester resultant matrix. Finally, two numerical examples are employed to verify the proposed method. The results indicate that the proposed method can effectively improve the efficiency of solution and reduce the computational burden because of the small-sized resultant matrix.
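    As a toy illustration of the variable-elimination step that Gröbner bases provide, the sympy snippet below eliminates x from two polynomial equations by computing a lexicographic Gröbner basis, leaving a univariate polynomial in y. The equations are made up for demonstration and bear no relation to the Stewart-platform constraint set.

```python
# Sketch: eliminating a variable with a lexicographic Groebner basis (toy system).
from sympy import symbols, groebner

x, y = symbols("x y")
system = [x**2 + y**2 - 4, x*y - 1]      # two made-up polynomial constraints

G = groebner(system, x, y, order="lex")  # lex order with x > y eliminates x first
print(G.exprs[-1])                       # univariate polynomial in y: y**4 - 4*y**2 + 1
```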

  11. Multiple model self-tuning control for a class of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Huang, Miao; Wang, Xin; Wang, Zhenlei

    2015-10-01

    This study develops a novel nonlinear multiple model self-tuning control method for a class of nonlinear discrete-time systems. An increment system model and a modified robust adaptive law are proposed to expand the application range, thus eliminating the assumption that either the nonlinear term of the nonlinear system or its differential term is globally bounded. The nonlinear self-tuning control method can address the situation wherein the nonlinear system is not subject to globally uniformly asymptotically stable zero dynamics by incorporating the pole-placement scheme. A novel nonlinear control structure based on this scheme is presented to improve control precision. Stability and convergence can be confirmed when the proposed multiple model self-tuning control method is applied. Furthermore, simulation results demonstrate the effectiveness of the proposed method.

  12. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.

  13. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    PubMed

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.

  14. A synergistic method for vibration suppression of an elevator mechatronic system

    NASA Astrophysics Data System (ADS)

    Knezevic, Bojan Z.; Blanusa, Branko; Marcetic, Darko P.

    2017-10-01

    Modern elevators are complex mechatronic systems which have to satisfy high performance requirements in precision, safety and ride comfort. Each elevator mechatronic system (EMS) contains a mechanical subsystem which is characterized by its resonant frequency. In order to achieve high performance of the whole system, the control part of the EMS inevitably excites resonant circuits, causing the occurrence of vibration. This paper proposes a synergistic solution based on jerk control and the upgrade of the speed controller with a band-stop filter to restore the ride comfort and speed control degraded by vibration. The band-stop filter eliminates the resonant component from the speed controller spectra, and jerk control provides operation of the speed controller in a linear mode as well as increased ride comfort. An original method for band-stop filter tuning based on the Goertzel algorithm and the Kiefer search algorithm is proposed in this paper. In order to generate the speed reference trajectory, which can be defined by different shapes and amplitudes of jerk, a unique generalized model is proposed. The proposed algorithm is integrated into the power drive control algorithm and implemented on a digital signal processor. Experimental verification on a scaled-down prototype of the EMS confirms that only the synergistic effect of controlling jerk and filtering the reference torque can completely eliminate the vibrations.
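    As a point of reference, the Goertzel algorithm mentioned for band-stop filter tuning evaluates the spectral magnitude at a single target frequency. A minimal sketch follows; the sampling rate and target frequency are illustrative assumptions, and the search-based tuning built around it in the paper is not reproduced.

```python
# Sketch: Goertzel algorithm for the magnitude of one frequency bin.
import math

def goertzel(samples, sample_rate, target_freq):
    n = len(samples)
    k = round(n * target_freq / sample_rate)     # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2
    return math.sqrt(max(power, 0.0))

# Example: estimate a 12.5 Hz resonant component in a 1 kHz-sampled speed signal.
fs = 1000.0
signal = [math.sin(2 * math.pi * 12.5 * t / fs) for t in range(2000)]
print(goertzel(signal, fs, 12.5))
```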

  15. 76 FR 62126 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-06

    ... Change To Eliminate Certain References to the Exchange Acting as the Designated Examining Authority... Rule Change CHX proposes to amend its rules to eliminate certain references to the Exchange acting as... references and the Exchange plans on eliminating those in a subsequent proposal to conform our rules with...

  16. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    PubMed Central

    Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping

    2015-01-01

    A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correction and efficiency of the proposed method are verified by computer simulation results. PMID:26694385

  17. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar.

    PubMed

    Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping

    2015-12-14

    A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correction and efficiency of the proposed method are verified by computer simulation results.

  18. [Rapid detection of caffeine in blood by freeze-out extraction].

    PubMed

    Bekhterev, V N; Gavrilova, S N; Kozina, E P; Maslakov, I V

    2010-01-01

    A new method for the detection of caffeine in blood has been proposed based on the combination of extraction and freezing-out to eliminate the influence of the sample matrix. Metrological characteristics of the method are presented. Selectivity of detection is achieved by optimal conditions of analysis by high performance liquid chromatography. The method is technically simple and cost-efficient, and it ensures rapid performance of the studies.

  19. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  20. Residual translation compensations in radar target narrowband imaging based on trajectory information

    NASA Astrophysics Data System (ADS)

    Yue, Wenjue; Peng, Bo; Wei, Xizhang; Li, Xiang; Liao, Dongping

    2018-05-01

    High-velocity translation results in defocused scattering centers in radar imaging. In this paper, we propose a Residual Translation Compensations (RTC) method based on target trajectory information to eliminate the translation effects in radar imaging. In reality, translation cannot simply be regarded as uniformly accelerated motion, so prior knowledge of the target trajectory is introduced to enhance compensation precision. First, we use the two-body orbit model to figure out the radial distance. Then, stepwise compensations are applied to eliminate the residual propagation delay based on the conjugate multiplication method. Finally, tomography is used to confirm the validity of the method. Compared with the translation parameter estimation method based on the spectral peak of the conjugate-multiplied signal, the RTC method in this paper yields a better tomography result. When the Signal-to-Noise Ratio (SNR) of the radar echo signal is 4 dB, the scattering centers can still be extracted clearly.

  1. A seismic coherency method using spectral amplitudes

    NASA Astrophysics Data System (ADS)

    Sui, Jing-Kun; Zheng, Xiao-Dong; Li, Yan-Dong

    2015-09-01

    Seismic coherence is used to detect discontinuities in underground media. However, strata with steeply dipping structures often produce false low coherence estimates and thus incorrect discontinuity characterization results. It is important to eliminate or reduce the effect of dip on coherence estimates. To solve this problem, time-domain dip scanning is typically used to improve the estimation of coherence in areas with steeply dipping structures. However, the accuracy of the time-domain estimation of dip is limited by the sampling interval. In contrast, the spectral amplitude is not affected by the time delays in adjacent seismic traces caused by dipping structures. We propose a coherency algorithm that uses the spectral amplitudes of seismic traces within a predefined analysis window to construct the covariance matrix. The coherency estimate of the proposed algorithm is defined as the ratio between the dominant eigenvalue and the sum of all eigenvalues of the constructed covariance matrix. Thus, we eliminate the effect of dipping structures on coherency estimates. In addition, because different frequency bands of spectral amplitudes are used to estimate coherency, the proposed algorithm has multiscale features. Low frequencies are effective for characterizing large-scale faults, whereas high frequencies are better at characterizing small-scale faults. Applications to synthetic and real seismic data show that the proposed algorithm can eliminate the effect of dip and produce better coherence estimates than conventional coherency algorithms in areas with steeply dipping structures.
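    A compact numpy illustration of the core estimate, the ratio of the dominant eigenvalue to the sum of eigenvalues of a covariance matrix built from spectral amplitudes, is given below; the window size, optional frequency band, and synthetic traces are assumptions for demonstration only.

```python
# Sketch: coherency from spectral amplitudes within an analysis window.
import numpy as np

def spectral_coherency(traces, band=None):
    """traces: 2D array (n_traces, n_samples) from one analysis window."""
    amp = np.abs(np.fft.rfft(traces, axis=1))      # spectral amplitudes (insensitive to time shifts/dip)
    if band is not None:
        amp = amp[:, band[0]:band[1]]              # restrict to a frequency band (multiscale use)
    amp = amp - amp.mean(axis=1, keepdims=True)
    cov = amp @ amp.T / amp.shape[1]               # covariance matrix across traces
    eig = np.linalg.eigvalsh(cov)
    return eig[-1] / eig.sum()                     # dominant eigenvalue / sum of eigenvalues

rng = np.random.default_rng(1)
t = np.arange(64)
coherent = np.vstack([np.roll(np.sin(2 * np.pi * t / 16), d) for d in (0, 2, 4)])  # dipping event
noisy = rng.standard_normal((3, 64))
print(spectral_coherency(coherent), spectral_coherency(noisy))   # near 1.0 vs. much lower
```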

  2. Chosen-plaintext attack on a joint transform correlator encrypting system

    NASA Astrophysics Data System (ADS)

    Barrera, John Fredy; Vargas, Carlos; Tebaldi, Myrian; Torroba, Roberto

    2010-10-01

    We demonstrate that optical encryption methods based on the joint transform correlator architecture are vulnerable to chosen-plaintext attack. An unauthorized user, who introduces three chosen plaintexts in the accessible encryption machine, can obtain the security key code mask. In this contribution, we also propose an alternative method to eliminate ambiguities that allows obtaining the right decrypting key.

  3. Computational domain discretization in numerical analysis of flow within granular materials

    NASA Astrophysics Data System (ADS)

    Sosnowski, Marcin

    2018-06-01

    The discretization of the computational domain is a crucial step in Computational Fluid Dynamics (CFD) because it influences not only the numerical stability of the analysed model but also the agreement between obtained results and real data. Modelling flow in packed beds of granular materials is a very challenging task in terms of discretization due to the existence of narrow spaces between spherical granules contacting tangentially at a single point. The standard approach to this issue results in a low-quality mesh and, in consequence, unreliable results. Therefore the common method is to reduce the diameter of the modelled granules in order to eliminate the single-point contact between the individual granules. The drawback of such a method is, among others, the adulteration of flow and contact heat resistance. Therefore an innovative method is proposed in the paper: single-point contact is extended to a cylinder-shaped volume contact. Such an approach eliminates the low-quality mesh elements and simultaneously introduces only slight distortion to the flow as well as to contact heat transfer. The performed analysis of numerous test cases proves the great potential of the proposed method of meshing packed beds of granular materials.

  4. Quantum-enhanced feature selection with forward selection and backward elimination

    NASA Astrophysics Data System (ADS)

    He, Zhimin; Li, Lvzhou; Huang, Zhiming; Situ, Haozhen

    2018-07-01

    Feature selection is a well-known preprocessing technique in machine learning, which can remove irrelevant features to improve the generalization capability of a classifier and reduce training and inference time. However, feature selection is time-consuming, particularly for applications that have thousands of features, such as image retrieval, text mining and microarray data analysis. It is crucial to accelerate the feature selection process. We propose a quantum version of wrapper-based feature selection, which converts a classical feature selection to its quantum counterpart. It is valuable for machine learning on a quantum computer. In this paper, we focus on two popular kinds of feature selection methods, i.e., wrapper-based forward selection and backward elimination. The proposed feature selection algorithm can quadratically accelerate the classical one.
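    The classical wrapper being accelerated here can be sketched with scikit-learn's SequentialFeatureSelector, which implements both forward selection and backward elimination around a chosen classifier; the dataset, classifier, and number of selected features below are illustrative assumptions, not choices from the paper.

```python
# Sketch: classical wrapper-based feature selection (forward and backward).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=5)

forward = SequentialFeatureSelector(clf, n_features_to_select=10,
                                    direction="forward", cv=5).fit(X, y)
backward = SequentialFeatureSelector(clf, n_features_to_select=10,
                                     direction="backward", cv=5).fit(X, y)
print("forward keeps:", forward.get_support(indices=True))
print("backward keeps:", backward.get_support(indices=True))
```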

  5. 3D digital image correlation using a single 3CCD colour camera and dichroic filter

    NASA Astrophysics Data System (ADS)

    Zhong, F. Q.; Shao, X. X.; Quan, C.

    2018-04-01

    In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.

  6. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

    Through introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite difference time domain (FDTD) method, the memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed and the corresponding formulae of the proposed method for programming are deduced. In order to further improve the computational efficiency, an iteration method rather than the Gauss elimination method is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z transforms (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by a reflection coefficient test on a nonmagnetized collisional plasma sheet. The test results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.

  7. 77 FR 20870 - Self-Regulatory Organizations; New York Stock Exchange LLC; Order Approving a Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-06

    ... Definition of Approved Person To Exclude Foreign Affiliates, Eliminating the Application Process for Approved... to exclude foreign affiliates, eliminate the application process for approved persons, and make... Rules 304, 308, and 311. The Exchange also proposed to eliminate use of the Forms AP-1 and AD-G. The...

  8. Margin-maximizing feature elimination methods for linear and nonlinear kernel-based discriminant functions.

    PubMed

    Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X

    2010-05-01

    Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
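    For reference, the RFE baseline critiqued here is available directly in scikit-learn; a minimal linear-SVM example follows (the dataset, step size, and feature count are illustrative assumptions), while the proposed MFE procedure itself is not part of any standard library to our knowledge.

```python
# Sketch: recursive feature elimination (RFE) with a linear SVM.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
svm = SVC(kernel="linear", C=1.0)                 # RFE ranks features by |w_i| of the fitted hyperplane
rfe = RFE(estimator=svm, n_features_to_select=10, step=1).fit(X, y)
print("selected feature indices:", rfe.get_support(indices=True))
print("elimination ranking:", rfe.ranking_)       # 1 = kept, larger = eliminated earlier
```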

  9. Combining 1D and 2D linear discriminant analysis for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Ji, Hongbing; Wang, Lei; Lin, Lin

    2011-11-01

    In this paper, a novel feature extraction method for palmprint recognition termed Two-dimensional Combined Discriminant Analysis (2DCDA) is proposed. By connecting the adjacent rows of an image sequentially, the obtained new covariance matrices contain the useful information among local geometry structures in the image, which is discarded by 2DLDA. In this way, 2DCDA combines LDA and 2DLDA for a promising recognition accuracy, while the number of coefficients of its projection matrix is lower than that of other two-dimensional methods. Experimental results on the CASIA palmprint database demonstrate the effectiveness of the proposed method.

  10. An Unconditionally Monotone C 2 Quartic Spline Method with Nonoscillation Derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jin; Nelson, Karl E.

    Here, a one-dimensional monotone interpolation method based on interface reconstruction with partial volumes in the slope space, utilizing the Hermite cubic spline, is proposed. The new method is only quartic, but is C2 and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in the slope space. An extension of this method to two dimensions is also discussed.

  11. An Unconditionally Monotone C 2 Quartic Spline Method with Nonoscillation Derivatives

    DOE PAGES

    Yao, Jin; Nelson, Karl E.

    2018-01-24

    Here, a one-dimensional monotone interpolation method based on interface reconstruction with partial volumes in the slope space, utilizing the Hermite cubic spline, is proposed. The new method is only quartic, but is C2 and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in the slope space. An extension of this method to two dimensions is also discussed.

  12. General method for eliminating wave reflection in 2D photonic crystal waveguides by introducing extra scatterers based on interference cancellation of waves

    NASA Astrophysics Data System (ADS)

    Huang, Hao; Ouyang, Zhengbiao

    2018-01-01

    We propose a general method for eliminating the reflection of waves in 2-dimensional photonic crystal waveguides (2D-PCWs), a kind of 2D material, by introducing extra scatterers inside the 2D-PCWs. The intrinsic reflection in 2D-PCWs is compensated by the backward-scattered waves from these scatterers, so that the overall reflection is greatly reduced and the insertion loss is improved accordingly. We first present the basic theory for the compensation method. Then, as a demonstration, we give four examples of extremely-low-reflection and high-transmission 90° bent 2D-PCWs created according to the proposed method. In the four examples, it is demonstrated by the plane-wave expansion method and the finite-difference time-domain method that the 90° bent 2D-PCWs can have a high transmission ratio greater than 90% in a wide range of operating frequency, and the highest transmission ratio can be greater than 99.95% with a return loss higher than 43 dB, better than that in other typical 90° bent 2D-PCWs. With our method, the bent 2D-PCWs can be optimized to obtain a high transmission ratio at different operating wavelengths. As a further application of this method, a waveguide-based optical bridge for light crossing is presented, showing an optimum return loss of 46.85 dB, a transmission ratio of 99.95%, and isolation rates greater than 41.77 dB. The proposed method also provides a useful way for improving conventional waveguides made of cables, fibers, or metal walls in the optical, infrared, terahertz, and microwave bands.

  13. Recovering of images degraded by atmosphere

    NASA Astrophysics Data System (ADS)

    Lin, Guang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2017-08-01

    Remote sensing images are seriously degraded by multiple scattering and bad weather. Through analysis of the radiative transfer procedure in the atmosphere, an image atmospheric degradation model considering the influence of atmospheric absorption, multiple scattering and non-uniform distribution is proposed in this paper. Based on the proposed model, a novel recovering method is presented to eliminate atmospheric degradation. Mean-shift image segmentation and block-wise deconvolution are used to reduce the time cost while retaining a good result. The recovering results indicate that the proposed method can significantly remove atmospheric degradation and effectively improve contrast compared with other removal methods. The results also illustrate that our method is suitable for various degraded remote sensing images, including images with a large field of view (FOV), images taken in side-glance situations, images degraded by non-uniform atmospheric distribution and images with various forms of clouds.

  14. Limited-memory trust-region methods for sparse relaxation

    NASA Astrophysics Data System (ADS)

    Adhikari, Lasith; DeGuchy, Omar; Erway, Jennifer B.; Lockhart, Shelby; Marcia, Roummel F.

    2017-08-01

    In this paper, we solve the l2-l1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.

  15. Reduction of patulin in apple cider by UV radiation.

    PubMed

    Dong, Qingfang; Manns, David C; Feng, Guoping; Yue, Tianli; Churey, John J; Worobo, Randy W

    2010-01-01

    The presence of the mycotoxin patulin in processed apple juice and cider presents a continual challenge to the food industry, raising both consumer health and product quality issues. Although several methods for control and/or elimination of patulin have been proposed, no unifying method has been commercially successful for reducing patulin burdens while maintaining product quality. In the present study, exposure to germicidal UV radiation was evaluated as a possible commercially viable alternative for the reduction and possible elimination of the patulin mycotoxin in fresh apple cider. UV exposure of 14.2 to 99.4 mJ/cm² resulted in a significant and nearly linear decrease in patulin levels while producing no quantifiable changes in the chemical composition (i.e., pH, Brix, and total acids) or organoleptic properties of the cider. For the range of UV doses tested, patulin levels decreased by 9.4 to 43.4%; the greatest reduction was achieved after less than 15 s of UV exposure. The UV radiation method (the CiderSure 3500 system) is an easily implemented, high-throughput, and cost-effective method that offers simultaneous UV pasteurization of cider and juice products and reduction and/or elimination of patulin without unwanted alterations in the final product.

  16. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880
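    As an illustration of SIFT-based landmark matching with a standard mismatch filter, the OpenCV sketch below applies Lowe's ratio test; the panoramic-image-specific elimination rule proposed in the paper is not reproduced, and the file names and ratio threshold are assumptions.

```python
# Sketch: SIFT matching between two views with Lowe's ratio test to drop mismatches.
import cv2

img1 = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("home_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
candidates = matcher.knnMatch(des1, des2, k=2)

good = [m for m, n in candidates if m.distance < 0.75 * n.distance]  # ratio test
print(f"{len(good)} landmark matches kept out of {len(candidates)}")
```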

  17. 77 FR 34315 - National Pollutant Discharge Elimination System-Proposed Regulations to Establish Requirements...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-11

    ...-4] RIN 2040-AE95 National Pollutant Discharge Elimination System--Proposed Regulations to Establish Requirements for Cooling Water Intake Structures at Existing Facilities; Notice of Data Availability Related to... Data Availability. SUMMARY: On April 20, 2011, EPA published proposed standards for cooling water...

  18. 77 FR 34927 - National Pollutant Discharge Elimination System-Proposed Regulations To Establish Requirements...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ...-5] RIN 2040-AE95 National Pollutant Discharge Elimination System--Proposed Regulations To Establish Requirements for Cooling Water Intake Structures at Existing Facilities; Notice of Data Availability Related to... availability. SUMMARY: On April 20, 2011, EPA published proposed standards for cooling water intake structures...

  19. Integration of scheduling and discrete event simulation systems to improve production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and of computer-aided technologies such as MRPI/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.

  20. Polymers used to absorb fats and oils: A concept

    NASA Technical Reports Server (NTRS)

    Marsh, H. E., Jr.

    1974-01-01

    One approach to the problem of excessive oils and fats is to develop a method by which oil is absorbed into a solid mixture for elimination as solid waste. Materials proposed for this purpose are cross-linked (network) polymers that have a high affinity for aliphatic substances, i.e., petroleum, animal, and vegetable oils.

  1. Element-by-element Solution Procedures for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Hughes, T. J. R.; Winget, J. M.; Levit, I.

    1984-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.

  2. Eliminating fast reactions in stochastic simulations of biochemical networks: A bistable genetic switch

    NASA Astrophysics Data System (ADS)

    Morelli, Marco J.; Allen, Rosalind J.; Tǎnase-Nicola, Sorin; ten Wolde, Pieter Rein

    2008-01-01

    In many stochastic simulations of biochemical reaction networks, it is desirable to "coarse grain" the reaction set, removing fast reactions while retaining the correct system dynamics. Various coarse-graining methods have been proposed, but it remains unclear which methods are reliable and which reactions can safely be eliminated. We address these issues for a model gene regulatory network that is particularly sensitive to dynamical fluctuations: a bistable genetic switch. We remove protein-DNA and/or protein-protein association-dissociation reactions from the reaction set using various coarse-graining strategies. We determine the effects on the steady-state probability distribution function and on the rate of fluctuation-driven switch flipping transitions. We find that protein-protein interactions may be safely eliminated from the reaction set, but protein-DNA interactions may not. We also find that it is important to use the chemical master equation rather than macroscopic rate equations to compute effective propensity functions for the coarse-grained reactions.
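    For readers unfamiliar with the underlying simulation machinery, the sketch below shows a bare-bones Gillespie direct-method loop for a toy birth-death system; it is the standard stochastic simulation algorithm, not the coarse-grained switch model studied in the paper, and the rate constants are made up.

```python
# Sketch: Gillespie direct method for a toy birth-death system (not the switch model).
import numpy as np

def gillespie(x0, t_end, k_prod=10.0, k_deg=0.1, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        props = np.array([k_prod, k_deg * x])   # propensities: production, degradation
        total = props.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)       # waiting time to the next reaction
        if rng.random() < props[0] / total:
            x += 1                              # production fired
        else:
            x -= 1                              # degradation fired
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

times, counts = gillespie(x0=0, t_end=200.0)
print("late-time mean copy number ≈", counts[len(counts) // 2:].mean())  # approaches k_prod/k_deg = 100
```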

  3. Energy-efficient algorithm for classification of states of wireless sensor network using machine learning methods

    NASA Astrophysics Data System (ADS)

    Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.

    2018-05-01

    This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.

  4. Fetal ECG extraction using independent component analysis by Jade approach

    NASA Astrophysics Data System (ADS)

    Giraldo-Guzmán, Jader; Contreras-Ortiz, Sonia H.; Lasprilla, Gloria Isabel Bautista; Kotas, Marian

    2017-11-01

    Fetal ECG monitoring is a useful method to assess the health of the fetus and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational costs. Signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.
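    A rough stand-in for this pipeline can be written with scikit-learn's FastICA (JADE itself is not in scikit-learn, so FastICA is used here purely as an illustrative substitute), applied after a simple high-pass filter; the sampling rate, cutoff, number of channels, and placeholder data are assumptions.

```python
# Sketch: high-pass filtering followed by ICA source separation (FastICA as a stand-in for JADE).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 500.0                                            # assumed sampling rate, Hz
b, a = butter(4, 1.0 / (fs / 2), btype="highpass")    # remove baseline wander below 1 Hz

# X: recordings as (n_samples, n_channels), e.g. abdominal + chest leads
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 4))                    # placeholder for real recordings
X_filtered = filtfilt(b, a, X, axis=0)

ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(X_filtered)               # inspect columns for the fetal ECG component
print(sources.shape)
```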

  5. Identity method for particle number fluctuations and correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorenstein, M. I.

    An incomplete particle identification distorts the observed event-by-event fluctuations of the hadron chemical composition in nucleus-nucleus collisions. A new experimental technique called the identity method was recently proposed. It eliminated the misidentification problem for one specific combination of the second moments in a system of two hadron species. In the present paper, this method is extended to calculate all the second moments in a system with an arbitrary number of hadron species. Special linear combinations of the second moments are introduced. These combinations are presented in terms of single-particle variables and can be found experimentally from the event-by-event averaging. The mathematical problem is then reduced to solving a system of linear equations. The effect of incomplete particle identification is fully eliminated from the final results.

  6. Correlation-coefficient-based fast template matching through partial elimination.

    PubMed

    Mahmood, Arif; Khan, Sohaib

    2012-04-01

    Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best match location. Due to the nonmonotonic growth pattern of the correlation-based similarity measures, partial computation elimination techniques have been traditionally considered inapplicable to speed up these measures. In this paper, we show that partial elimination techniques may be applied to a correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., having exhaustive equivalent accuracy, and are compared with the existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases, our proposed algorithms have been found to be significantly faster than the other algorithms.
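    To make the baseline concrete, the sketch below computes the zero-mean normalized cross-correlation (correlation coefficient) exhaustively over all search locations; the partial-elimination bounds and initialization schemes that make the paper's algorithms fast are not reproduced here, and the synthetic image is an assumption.

```python
# Sketch: exhaustive correlation-coefficient (zero-mean NCC) template matching.
import numpy as np

def ncc_match(image, template):
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -2.0, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tpl = img[20:30, 35:45].copy()
print(ncc_match(img, tpl))    # expected: ((20, 35), ~1.0)
```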

  7. Identity method to study chemical fluctuations in relativistic heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gazdzicki, Marek; Grebieszkow, Katarzyna; Mackowiak, Maja

    Event-by-event fluctuations of the chemical composition of the hadronic final state of relativistic heavy-ion collisions carry valuable information on the properties of strongly interacting matter produced in the collisions. However, in experiments incomplete particle identification distorts the observed fluctuation signals. The effect is quantitatively studied and a new technique for measuring chemical fluctuations, the identity method, is proposed. The method fully eliminates the effect of incomplete particle identification. The application of the identity method to experimental data is explained.

  8. Prediction of micropollutant elimination during ozonation of a hospital wastewater effluent.

    PubMed

    Lee, Yunho; Kovalova, Lubomira; McArdell, Christa S; von Gunten, Urs

    2014-11-01

    Determining optimal ozone doses for organic micropollutant elimination during wastewater ozonation is challenged by the presence of a large number of structurally diverse micropollutants in varying wastewater matrix compositions. A chemical kinetics approach based on ozone and hydroxyl radical (·OH) rate constants and measurements of ozone and ·OH exposures is proposed to predict the micropollutant elimination efficiency. To further test and validate the chemical kinetics approach, the elimination efficiencies of 25 micropollutants present in a hospital wastewater effluent from a pilot-scale membrane bioreactor (MBR) were determined at pH 7.0 and 8.5 in bench-scale experiments with ozone alone and ozone combined with H2O2 as a function of DOC-normalized specific ozone doses (gO3/gDOC). Furthermore, ozone and ·OH exposures, ·OH yields, and ·OH consumption rates were determined. Consistent eliminations as a function of gO3/gDOC were observed for micropollutants with similar ozone and ·OH rate constants. They could be classified into five groups having characteristic elimination patterns. By increasing the pH from 7.0 to 8.5, the elimination levels increased for the amine-containing micropollutants due to the increased apparent second-order ozone rate constants, while they decreased for most micropollutants due to the diminished ozone or ·OH exposures. Increased ·OH quenching by effluent organic matter and carbonate with increasing pH was responsible for the lower ·OH exposures. Upon H2O2 addition, the elimination levels of the micropollutants slightly increased at pH 7 (<8%) while decreasing considerably at pH 8.5 (up to 31%). The elimination efficiencies of the selected micropollutants could be predicted based on their ozone and ·OH rate constants (predicted or taken from literature) and the determined ozone and ·OH exposures. Reasonable agreements between the measured and predicted elimination levels were found, demonstrating that the proposed chemical kinetics method can be used for a generalized prediction of micropollutant elimination during wastewater ozonation. Out of 67 analyzed micropollutants, 56 were present in the tested hospital wastewater effluent. Two-thirds of the present micropollutants were found to be ozone-reactive and efficiently eliminated at low ozone doses (e.g., >80% for gO3/gDOC = 0.5). Copyright © 2014 Elsevier Ltd. All rights reserved.
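    In hedged form, the prediction step described here is usually written as a pseudo-first-order expression combining the two oxidant exposures; the sketch below states it in that standard form, where the k values are the second-order rate constants and the integrals are the measured exposures (compound-specific speciation corrections are not shown).

```latex
% Predicted relative residual concentration of a micropollutant P after ozonation:
\[
  \ln\!\left(\frac{[\mathrm{P}]}{[\mathrm{P}]_0}\right)
  = -\,k_{\mathrm{O_3}} \int [\mathrm{O_3}]\,\mathrm{d}t
    \;-\; k_{\cdot\mathrm{OH}} \int [\cdot\mathrm{OH}]\,\mathrm{d}t ,
\qquad
  \text{elimination} = 1 - \frac{[\mathrm{P}]}{[\mathrm{P}]_0}.
\]
```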

  9. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to analyze, or eliminating unsuitable ones from data analysis, is a common day-to-day problem in microscopy research, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images by extensive simulations and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703

  10. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    NASA Astrophysics Data System (ADS)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to a DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution, named the "provisional ideal point," to search for the solution preferred by a decision maker. In this way, we can eliminate the preliminary calculations and the limited application scope. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to a DDP. As a result, the delivery path obtained when combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
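
    A minimal sketch of how a penalty value and a provisional ideal point might be combined into a single fitness value is given below; the weighting, the distance measure, and all names are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def penalized_fitness(objectives, constraint_violations, provisional_ideal,
                      penalty_weight=1e3):
    """Hedged sketch: distance of a candidate's objective vector to the
    provisional ideal point, plus a penalty for constraint violation."""
    objectives = np.asarray(objectives, dtype=float)
    ideal = np.asarray(provisional_ideal, dtype=float)
    distance = np.linalg.norm(objectives - ideal)
    # penalty grows with the total amount of constraint violation
    penalty = penalty_weight * np.sum(np.maximum(constraint_violations, 0.0))
    return distance + penalty

# hypothetical usage: two objectives (distance, time), one violated constraint
print(penalized_fitness([12.0, 3.5], [0.2], provisional_ideal=[10.0, 3.0]))
```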

  11. Life Support Catalyst Regeneration Using Ionic Liquids and In Situ Resources

    NASA Technical Reports Server (NTRS)

    Abney, Morgan B.; Karr, Laurel J.; Paley, Mark S.; Donovan, David N.; Kramer, Teersa J.

    2016-01-01

    Oxygen recovery from metabolic carbon dioxide is an enabling capability for long-duration manned space flight. Complete recovery of oxygen (100%) involves the production of solid carbon. Catalytic approaches for this purpose, such as Bosch technology, have been limited in trade analyses due in part to the mass penalty for high catalyst resupply caused by carbon fouling of the iron or nickel catalyst. In an effort to mitigate this challenge, several technology approaches have been proposed. These approaches have included methods to prolong the life of the catalysts by increasing the total carbon mass loading per mass catalyst, methods for simplified catalyst introduction and removal to limit the resupply container mass, methods of using in situ resources, and methods to regenerate catalyst material. Research and development into these methods is ongoing, but only use of in situ resources and/or complete regeneration of catalyst material has the potential to entirely eliminate the need for resupply. The use of ionic liquids provides an opportunity to combine these methods in a technology approach designed to eliminate the need for resupply of oxygen recovery catalyst. Here we describe the results of an initial feasibility study using ionic liquids and in situ resources for life support catalyst regeneration, we discuss the key challenges with the approach, and we propose future efforts to advance the technology.

  12. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth and long-distance measurements, the range peak deteriorates due to the fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using an adjacent-point replacement and spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute distance precision better than 45 μm within a range of 8 m.
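
    A minimal sketch of the resampling step is shown below, assuming the corrected optical-frequency axis is already known and strictly increasing; it simply re-interpolates the beat signal onto a uniform frequency grid with a cubic spline before the range FFT. Names and the choice of scipy's CubicSpline are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_on_uniform_frequency_grid(opt_freq, beat_signal, n_points=None):
    """Re-interpolate a beat signal sampled at (error-containing) optical
    frequencies 'opt_freq' onto a uniform frequency grid; opt_freq is
    assumed strictly increasing."""
    opt_freq = np.asarray(opt_freq, dtype=float)
    beat_signal = np.asarray(beat_signal, dtype=float)
    if n_points is None:
        n_points = beat_signal.size
    uniform = np.linspace(opt_freq[0], opt_freq[-1], n_points)
    spline = CubicSpline(opt_freq, beat_signal)
    return uniform, spline(uniform)
```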

  13. A fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor

    NASA Astrophysics Data System (ADS)

    Jiang, Junfeng; An, Jianchang; Liu, Kun; Ma, Chunyu; Li, Zhichen; Liu, Tiegen

    2017-09-01

    We propose a fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor. Using an approximate derivation method and an envelope detection method, we successfully eliminate the asymmetry of the interference outputs and improve the processing speed. A positioning measurement experiment was carried out to verify the effectiveness of the proposed algorithm. At a sensing length of 85 km, the experimental results show that the mean positioning error is 18.9 m and the mean processing time is 116 ms. The processing speed is improved by a factor of 5 compared to what can be achieved by using the traditional time-frequency analysis-based positioning method.

  14. Variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le

    2018-01-01

    The uneven illumination phenomenon reduces the quality of remote sensing images and causes interference in subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into finding the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are solved by alternating iteration. Two groups of experiments are implemented on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate the uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the proposed method with the split Bregman solver is more than 10 times faster than the same model solved with the steepest descent method.
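
    A hedged sketch of the double-norm idea is given below: in the log domain the observed image is modeled as illumination plus reflectance, the reflectance gradients are penalized with an L1 norm (to keep texture and detail) and the illumination gradients with a squared L2 norm (to enforce smoothness). The data term and the weights lam/mu are illustrative assumptions; the actual model and its split Bregman solver are in the paper.

```python
import numpy as np

def double_norm_energy(illumination, reflectance, observed, lam=1.0, mu=1.0):
    """Evaluate a Retinex-style double-norm energy: data term plus an L1 norm
    on reflectance gradients and an L2 norm on illumination gradients
    (all images assumed to be in the log domain)."""
    gx = lambda a: np.diff(a, axis=1)
    gy = lambda a: np.diff(a, axis=0)
    data = np.sum((illumination + reflectance - observed) ** 2)
    l1_reflectance = np.sum(np.abs(gx(reflectance))) + np.sum(np.abs(gy(reflectance)))
    l2_illumination = np.sum(gx(illumination) ** 2) + np.sum(gy(illumination) ** 2)
    return data + lam * l1_reflectance + mu * l2_illumination
```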

  15. Research of infrared laser based pavement imaging and crack detection

    NASA Astrophysics Data System (ADS)

    Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang

    2013-08-01

    Road crack detection is seriously affected by many factors in practical applications, such as shadows, road signs, oil stains, and high-frequency noise. Due to these factors, current crack detection methods cannot reliably distinguish cracks in complex scenes. In order to solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is adopted to obtain pavement images, in which a high-power laser line projector is used to resist various shadows. Second, a crack extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature are used to extract the basic crack information, and a circular projection based on a linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments has been performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.

  16. Research on flight stability performance of rotor aircraft based on visual servo control method

    NASA Astrophysics Data System (ADS)

    Yu, Yanan; Chen, Jing

    2016-11-01

    A control method based on visual servo feedback is proposed, which is used to improve the attitude of a quad-rotor aircraft and to enhance its flight stability. Ground target images are obtained by a visual platform fixed on the aircraft. The scale-invariant feature transform (SIFT) algorithm is used to extract image feature information. Based on the image feature analysis, fast motion estimation is completed and used as an input signal to a PID flight control system to realize real-time status adjustment during flight. Imaging tests and simulation results show that the proposed method performs well in terms of flight stability compensation and attitude adjustment. The response speed and control precision meet the requirements of actual use, and the method is able to reduce or even eliminate the influence of environmental disturbances. The proposed method therefore has research value for solving the aircraft disturbance-rejection problem.
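
    A minimal sketch of the control loop's last stage is given below, assuming the SIFT-based motion estimate is already available as a drift error in pixels; the PID structure is standard, and the gains and time step shown are placeholders rather than the paper's values.

```python
class PID:
    """Minimal PID controller sketch; gains are placeholders, not the paper's."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# hypothetical loop: the visually estimated drift (pixels) drives the controller
pid = PID(kp=0.8, ki=0.1, kd=0.05)
estimated_drift = 2.3            # from SIFT-based motion estimation
correction = pid.update(error=-estimated_drift, dt=0.02)
print(correction)
```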

  17. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Hong -Hao; Wu, Xing -Gang; Ma, Yang

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R_{e+e-} at four-loop order in pQCD.

  18. Numerical method for the solution of large systems of differential equations of the boundary layer type

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Nachtsheim, P. R.

    1972-01-01

    A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
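
    A hedged sketch of the final step, with toy numbers: the corrections to the unknown initial conditions are obtained from an overdetermined linear system relating them to the boundary-condition residuals, solved in the least-squares sense (numpy's lstsq standing in for Golub's algorithm).

```python
import numpy as np

# Solve the overdetermined linear system J * delta = -r for the corrections
# 'delta' to the unknown initial conditions, where r holds the far-boundary
# residuals and J a sensitivity matrix built from trial integrations.
J = np.array([[1.0, 0.2],
              [0.3, 1.1],
              [0.5, 0.4]])          # 3 boundary conditions, 2 unknown ICs (toy numbers)
r = np.array([0.08, -0.05, 0.02])   # residuals at the far boundary
delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
print("corrections to the initial conditions:", delta)
```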

  19. 75 FR 8759 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-25

    ... rule proposal methods. The FOCUS Report was designed to eliminate the overlapping regulatory reports... SECURITIES AND EXCHANGE COMMISSION [Rule 17a-5; SEC File No. 270-155; OMB Control No. 3235-0123... currently valid control number. Rule 17a-5 (17 CFR 240.17a-5) is the basic financial reporting rule for...

  20. Interspecies scaling: predicting volumes, mean residence time and elimination half-life. Some suggestions.

    PubMed

    Mahmood, I

    1998-05-01

    Extrapolation of animal data to assess pharmacokinetic parameters in man is an important tool in drug development. Clearance, volume of distribution and elimination half-life are the three most frequently extrapolated pharmacokinetic parameters. Extensive work has been done to improve the predictive performance of allometric scaling for clearance. In general there is good correlation between body weight and volume, hence volume in man can be predicted with reasonable accuracy from animal data. Besides the volume of distribution in the central compartment (Vc), two other volume terms, the volume of distribution by area (Vbeta) and the volume of distribution at steady state (VdSS), are also extrapolated from animals to man. This report compares the predictive performance of allometric scaling for Vc, Vbeta and VdSS in man from animal data. The relationship between elimination half-life (t(1/2)) and body weight across species results in poor correlation, most probably because of the hybrid nature of this parameter. To predict half-life in man from animal data, an indirect method (CL = VK, where CL is clearance, V is volume, and K is the elimination rate constant) has been proposed. This report proposes another indirect method which uses the mean residence time (MRT). After establishing that MRT can be predicted across species, it was used to predict half-life using the equation MRT = 1.44 x t(1/2). The results of the study indicate that Vc is predicted more accurately than Vbeta and VdSS in man. It should be emphasized that for first-time dosing in man, Vc is a more important pharmacokinetic parameter than Vbeta or VdSS. Furthermore, MRT can be predicted reasonably well for man and can be used for prediction of half-life.
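
    A small sketch of the MRT-based prediction is given below: an allometrically scaled MRT (with hypothetical coefficients, not values from the report) is converted to a half-life via MRT = 1.44 x t(1/2).

```python
def predict_half_life_from_mrt(mrt_hours):
    """t1/2 = MRT / 1.44, i.e. MRT = 1.44 * t1/2 for a one-compartment model."""
    return mrt_hours / 1.44

def allometric_prediction(a, b, body_weight_kg):
    """Generic allometric scaling Y = a * W^b; a and b would come from the
    fit across animal species (hypothetical values used here)."""
    return a * body_weight_kg ** b

# hypothetical example: MRT extrapolated to a 70-kg human, then converted
mrt_human = allometric_prediction(a=0.8, b=0.25, body_weight_kg=70)
print(predict_half_life_from_mrt(mrt_human))
```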

  1. A self-adaption compensation control for hysteresis nonlinearity in piezo-actuated stages based on Pi-sigma fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Zhou, Miaolei

    2018-04-01

    Piezo-actuated stages are widely applied in the high-precision positioning field. However, the inherent hysteresis nonlinearity in piezo-actuated stages greatly deteriorates their positioning accuracy. This paper first utilizes a nonlinear autoregressive moving average with exogenous inputs (NARMAX) model based on the Pi-sigma fuzzy neural network (PSFNN) to construct an online rate-dependent hysteresis model for describing the hysteresis nonlinearity in piezo-actuated stages. In order to improve the convergence rate of the PSFNN and the modeling precision, we adopt a gradient descent algorithm featuring three different learning factors to update the model parameters. The convergence of the NARMAX model based on the PSFNN is analyzed. To ensure that the parameters converge to their true values, the persistent excitation condition is considered. Then, a self-adaption compensation controller is designed for eliminating the hysteresis nonlinearity in piezo-actuated stages. A merit of the proposed controller is that it can directly eliminate the complex hysteresis nonlinearity in piezo-actuated stages without any inverse dynamic models. To demonstrate the effectiveness of the proposed model and control methods, a set of comparative experiments is performed on piezo-actuated stages. Experimental results show that the proposed modeling and control methods have excellent performance.

  2. Analysis and elimination method of the effects of cables on LVRT testing for offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Jiang, Zimin; Liu, Xiaohao; Li, Changgang; Liu, Yutian

    2018-02-01

    The current state, characteristics, and necessity of low voltage ride through (LVRT) on-site testing for grid-connected offshore wind turbines are introduced first. Then the effects of submarine cables on LVRT testing are analysed based on the equivalent circuit of the testing system. A scheme for eliminating the effects of cables in the proposed LVRT testing method is presented. The specified voltage dips are guaranteed to comply with the testing standards by adjusting the ratio between the current-limiting impedance and the short-circuit impedance according to the steady-state voltage relationship derived from the equivalent circuit. Finally, simulation results demonstrate that the voltage dips at the high-voltage side of the wind turbine transformer satisfy the requirements of the testing standards.
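
    As a rough sketch of the underlying voltage relationship (an assumption about the usual impedance-divider LVRT test unit, not the paper's full equivalent circuit with cables), the retained voltage at the test point is set by the ratio of the shunt short-circuit impedance to the total series plus shunt impedance:

```python
def residual_voltage_ratio(z_series, z_shunt):
    """Retained (dip) voltage at the test point as a fraction of grid voltage,
    for a simple series/shunt impedance divider; cable and source impedances
    are ignored here and would be lumped into the ratio in the full circuit."""
    return abs(z_shunt) / abs(z_series + z_shunt)

# hypothetical reactances giving roughly a 20% retained voltage
print(residual_voltage_ratio(z_series=4.0j, z_shunt=1.0j))
```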

  3. A temperature compensation methodology for piezoelectric based sensor devices

    NASA Astrophysics Data System (ADS)

    Wang, Dong F.; Lou, Xueqiao; Bao, Aijian; Yang, Xu; Zhao, Ji

    2017-08-01

    A temperature compensation methodology, which combines a negative temperature coefficient thermistor with the temperature characteristics of a piezoelectric material, is proposed to improve the measurement accuracy of piezoelectric sensing based devices. A piezoelectric disk-shaped structure is characterized and used to verify the effectiveness of the proposed compensation method. With the proposed temperature compensation, the measured output voltage shows a nearly linear relationship with respect to the applied pressure over a temperature range of 25-65 °C. As a result, the maximum measurement accuracy is observed to improve by 40%, and the higher the temperature, the more effective the method. The effective temperature range of the proposed method is theoretically analyzed by introducing the thermistor constant (B), the resistance at the initial temperature (R0), and the parallel resistance (Rx). The proposed methodology can not only eliminate the influence of the temperature-dependent characteristics of the piezoelectric material on the sensing accuracy but also decrease the power consumption of piezoelectric sensing based devices through the simplified sensing structure.
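
    A small sketch of the quantities mentioned above, using the standard NTC beta-equation and a thermistor placed in parallel with a fixed resistor Rx; the component values are example datasheet numbers, not those of the paper, and the exact compensation network topology is an assumption.

```python
import math

def ntc_resistance(t_celsius, r0=10e3, t0_celsius=25.0, b=3950.0):
    """Standard NTC beta-equation R(T) = R0 * exp(B * (1/T - 1/T0)), T in kelvin;
    R0, T0, and B are example datasheet values."""
    t = t_celsius + 273.15
    t0 = t0_celsius + 273.15
    return r0 * math.exp(b * (1.0 / t - 1.0 / t0))

def parallel(r_thermistor, r_x):
    """Thermistor in parallel with a fixed resistor Rx (assumed topology)."""
    return r_thermistor * r_x / (r_thermistor + r_x)

for t in (25, 45, 65):
    print(t, parallel(ntc_resistance(t), r_x=10e3))
```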

  4. Summary Report of Laboratory Testing to Establish the Effectiveness of Proposed Treatment Methods for Unremediated and Remediated Nitrate Salt Waste Streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anast, Kurt Roy; Funk, David John

    The inadvertent creation of transuranic waste carrying hazardous waste codes D001 and D002 requires the treatment of the material to eliminate the hazardous characteristics and allow its eventual shipment and disposal at the Waste Isolation Pilot Plant (WIPP). This report documents the effectiveness of two treatment methods proposed to stabilize both the unremediated and remediated nitrate salt waste streams (UNS and RNS, respectively). The two technologies include the addition of zeolite (with and without the addition of water as a processing aid) and cementation. Surrogates were developed to evaluate both the solid and liquid fractions expected from parent waste containers, and both the solid and liquid fractions were tested. Both technologies are shown to be effective at eliminating the characteristic of ignitability (D001), and the addition of zeolite was determined to be effective at eliminating corrosivity (D002), with the preferred option of zeolite addition currently planned for implementation at the Waste Characterization, Reduction, and Repackaging Facility. During the course of this work, we established the need to evaluate and demonstrate the effectiveness of the proposed remedy for debris material, if required. The evaluation determined that Wypalls absorbed with saturated nitrate salt solutions exhibit the ignitability characteristic (all other expected debris is not classified as ignitable). Follow-on studies will be developed to demonstrate the effectiveness of stabilization for ignitable Wypall debris. Finally, liquid surrogates containing saturated nitrate salts did not exhibit the characteristic of ignitability in their pure form (those neutralized with Kolorsafe and mixed with sWheat did exhibit D001). As a result, additional nitrate salt solutions (those exhibiting the oxidizer characteristic) will be tested to demonstrate the effectiveness of the remedy.

  5. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

    Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term and a total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts, leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV and eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  6. Deployment dynamics and control of large-scale flexible solar array system with deployable mast

    NASA Astrophysics Data System (ADS)

    Li, Hai-Quan; Liu, Xiao-Feng; Guo, Shao-Jing; Cai, Guo-Ping

    2016-10-01

    In this paper, the deployment dynamics and control of a large-scale flexible solar array system with a deployable mast are investigated. The adopted solar array system is introduced first, including the system configuration, the deployable mast, and the solar arrays with several mechanisms. Then the dynamic equation of the solar array system is established by the Jourdain velocity variation principle, and a method for dynamics with topology changes is introduced. In addition, a PD controller with disturbance estimation is designed to eliminate the drift of the spacecraft main body. Finally, the validity of the dynamic model is verified through a comparison with ADAMS software, and the deployment process and dynamic behavior of the system are studied in detail. Simulation results indicate that the proposed model is effective in describing the deployment dynamics of the large-scale flexible solar arrays and the proposed controller is practical for eliminating the drift of the spacecraft main body.

  7. Optical image encryption using multilevel Arnold transform and noninterferometric imaging

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Chen, Xudong

    2011-11-01

    Information security has attracted much current attention due to the rapid development of modern technologies, such as computers and the internet. We propose a novel method for optical image encryption using a multilevel Arnold transform and rotatable-phase-mask noninterferometric imaging. An optical image encryption scheme is developed in the gyrator transform domain, and one phase-only mask (i.e., a phase grating) is rotated and updated during image encryption. For the decryption, an iterative retrieval algorithm is proposed to extract high-quality plaintexts. Conventional encoding methods (such as digital holography) have been proven vulnerable to attacks, and the proposed optical encoding scheme can effectively eliminate this security deficiency and significantly enhance cryptosystem security. The proposed strategy based on the rotatable phase-only mask can provide a new alternative for data/image encryption in noninterferometric imaging.
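
    For reference, a single-level Arnold (cat map) scrambling of a square image is sketched below; the paper's multilevel variant and its combination with the gyrator-domain phase mask are not reproduced here.

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Classical Arnold (cat map) scrambling of a square N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N); the map is a bijection,
    so the scrambling is invertible."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```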

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Berry, M. L.; Grieme, M.

    We propose a localization-based radiation source detection (RSD) algorithm using the Ratio of Squared Distance (ROSD) method. Compared with the triangulation-based method, the advantages of this ROSD method are multi-fold: i) source location estimates based on four detectors are more accurate, ii) ROSD provides closed-form source location estimates and thus eliminates the imaginary-roots issue, and iii) ROSD produces a unique source location estimate as opposed to two real roots (if any) in triangulation, which obviates the need to identify real phantom roots during clustering.

  9. Segmentation of liver region with tumorous tissues

    NASA Astrophysics Data System (ADS)

    Zhang, Xuejun; Lee, Gobert; Tajima, Tetsuji; Kitagawa, Teruhiko; Kanematsu, Masayuki; Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kondo, Hiroshi; Hoshi, Hiroaki; Nawano, Shigeru; Shinozaki, Kenji

    2007-03-01

    Segmentation of an abnormal liver region based on CT or MR images is a crucial step in surgical planning. However, precisely carrying out this step remains a challenge due to either connectivities of the liver to other organs or the shape, internal texture, and homogeneity of the liver, which may be extensively affected in cases of liver disease. Here, we propose a non-density-based method for extracting the liver region containing tumor tissues by edge detection processing. Falsely extracted regions are eliminated by a shape analysis method and thresholding processing. If multi-phase images are available, then the overall outcome of segmentation can be improved by subtracting two phase images, and the connectivities can be further eliminated by referring to the intensity on another phase image. Within an edge liver map, tumor candidates are identified by their different gray values relative to the liver. After elimination of the small and nonspherical over-extracted regions, the final liver region integrates the tumor region with the liver tissue. In our experiment, 40 cases of MDCT images were used, and the results showed that our fully automatic method for the segmentation of the liver region is effective and robust despite the presence of hepatic tumors within the liver.

  10. The elimination of zero-order diffraction of 10.6 μm infrared digital holography

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Yang, Chao

    2017-05-01

    A new method of eliminating the zero-order diffraction in infrared digital holography is proposed in this paper. In the reconstruction of digital holograms, the spatial resolution of an infrared thermal imager, such as a microbolometer, cannot compare with that of common visible CCD or CMOS devices. The infrared imager suffers from large pixel size and low spatial resolution, which makes the zero-order diffraction a severe disturbance in the reconstruction process of digital holograms. The zero-order diffraction has very large energy and occupies the central region of the spectrum domain. In this paper, we design a new filtering strategy to overcome this problem. This filtering strategy combines two kinds of filtering process: a Gaussian low-frequency filter and a high-pass phase-averaging filter. With a correct setting of the calculation parameters, this filtering strategy works effectively on the holograms and fully eliminates the zero-order diffraction, as well as the two crossover bars shown in the spectrum domain. A detailed explanation and discussion of the new method are given in this paper, and experimental results are also demonstrated to prove the performance of this method.

  11. Investigation of KDP crystal surface based on an improved bidimensional empirical mode decomposition method

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi

    2018-03-01

    This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. Aiming to eliminate the end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for efficient identification of texture features, a denoising process is embedded in the sifting iteration of the BEMD method. By removing redundant information from the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes are identified. A comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), as well as the traditional BEMD method, demonstrates that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. Additionally, the proposed method is a promising tool for online monitoring and optimal control of precision machining processes.

  12. Lipophilic Super-Absorbent Swelling Gels as Cleaners for Use on Weapons Systems and Platforms

    DTIC Science & Technology

    2011-08-18

    polymer gel systems. Further research will address the post-cleaning gel removal method, the use of non-fluorinated compounds in gel synthesis, and...be proposed to address other issues including the method for removing the gels after swelling, the use of non-fluorinated compounds in gel...strength. Elimination of fluorinated compounds in the gel synthesis was the focus of this and subsequent phases of this research. TECHNICAL APPROACH

  13. Information filtering based on corrected redundancy-eliminating mass diffusion.

    PubMed

    Zhu, Xuzhen; Yang, Yujie; Chen, Guilin; Medo, Matus; Tian, Hui; Cai, Shi-Min

    2017-01-01

    Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects' attributes. Based on an unweighted, undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE), which is based on a spreading process on the network. Extensive experiments on three benchmark data sets (MovieLens, Netflix, and Amazon) show that, when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices.

  14. 76 FR 81488 - Agency Information Collection Activities; Proposed Collection; Comment Request; National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-28

    ... Activities; Proposed Collection; Comment Request; National Pollutant Discharge Elimination System (NPDES... viruses. For additional information about EPA's public docket, visit the EPA Docket Center homepage at... Pollutant Discharge Elimination System (NPDES) Program (Renewal). ICR Number: EPA ICR No. 0229.20, OMB...

  15. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  16. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.

  17. Motion estimation in the frequency domain using fuzzy c-planes clustering.

    PubMed

    Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E

    2001-01-01

    A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodogram analyses, and they are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion component pairing task and hence eliminates the problems of that pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method.

  18. A rule-based automatic sleep staging method.

    PubMed

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

    In this paper, a rule-based automatic sleep staging method is proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, are utilized. Normalization is applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules is constructed for sleep stage classification. Finally, a smoothing process considering the temporal contextual information is applied to preserve continuity. The overall agreement and kappa coefficient of the proposed method, applied to the all-night polysomnography (PSG) recordings of seventeen healthy subjects and compared with manual scoring by R&K rules, can reach 86.68% and 0.79, respectively. This method can be integrated with a portable PSG system for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem. It is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, in which the two tasks are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm implemented with Gaussian kernel SVMs: a genetic algorithm is used to search for the optimal parameter pair, as a better alternative to the common practice of simply selecting the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
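
    A hedged sketch of the parameter-selection idea follows, with toy data and a crude random search standing in for the genetic algorithm; the essential part is the fitness function, the cross-validated accuracy of an RBF-kernel SVM for a candidate (C, gamma) pair. Dataset, search ranges, and population size are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy data standing in for a microarray set (far fewer samples than features).
X, y = make_classification(n_samples=60, n_features=200, n_informative=10,
                           random_state=0)

def fitness(log_c, log_gamma):
    """Fitness used by the search: cross-validated accuracy of an RBF-kernel
    SVM with the candidate (C, gamma) pair."""
    clf = SVC(kernel="rbf", C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

# Crude random search standing in for the genetic algorithm; a real GA would
# add selection, crossover, and mutation around this same fitness evaluation.
rng = np.random.default_rng(0)
best = max(((fitness(c, g), c, g)
            for c, g in rng.uniform([-1, -5], [3, -1], size=(30, 2))),
           key=lambda t: t[0])
print("best CV accuracy %.3f with C=10^%.2f, gamma=10^%.2f" % best)
```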

  20. SYSTEMS APPROACH TO RECOVERY AND REUSE OF ORGANIC MATERIAL FLOWS IN SANTA BARBARA COUNTY TO EXTRACT MAXIMUM VALUE AND ELIMINATE WASTE

    EPA Science Inventory

    The goal of the project is to calculate the net social, environmental, and economic benefits of a systems approach to organic waste and resource management in Santa Barbara County. To calculate these benefits, a comparative method was chosen of the proposed desi...

  1. The Other Side of Method Bias: The Perils of Distinct Source Research Designs

    ERIC Educational Resources Information Center

    Kammeyer-Mueller, John; Steel, Piers D. G.; Rubenstein, Alex

    2010-01-01

    Common source bias has been the focus of much attention. To minimize the problem, researchers have sometimes been advised to take measurements of predictors from one observer and measurements of outcomes from another observer or to use separate occasions of measurement. We propose that these efforts to eliminate biases due to common source…

  2. The differential path phase comparison method for determining pressure derivatives of elastic constants of solids

    NASA Astrophysics Data System (ADS)

    Peselnick, L.

    1982-08-01

    An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R' (P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.

  3. Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation

    PubMed Central

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts in electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on the EEG signals, and we need to eliminate this artifact to be able to analyze the EEG signals during GVS. We propose a novel method to estimate the contribution of the GVS current to the EEG signal at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each frequency band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method has better performance in removing GVS artifacts compared to the others. Using the proposed method, a higher signal-to-artifact ratio of −1.625 dB was achieved, which outperformed other methods such as ICA-based methods, regression methods, and adaptive filters. PMID:23956786
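
    A minimal sketch of the band-wise estimate-and-subtract idea is given below, assuming a recording of the GVS current is available as a reference channel; the wavelet family, decomposition level, and the simple per-band least-squares regression are assumptions, not the paper's optimized procedure.

```python
import numpy as np
import pywt

def remove_gvs_artifact(eeg, gvs_reference, wavelet="db4", level=5):
    """Decompose both the contaminated EEG and the GVS current reference with
    a discrete wavelet transform, estimate the artifact contribution in each
    band by least-squares regression, and subtract it."""
    eeg_coeffs = pywt.wavedec(eeg, wavelet, level=level)
    ref_coeffs = pywt.wavedec(gvs_reference, wavelet, level=level)
    cleaned = []
    for c_eeg, c_ref in zip(eeg_coeffs, ref_coeffs):
        denom = np.dot(c_ref, c_ref)
        beta = np.dot(c_ref, c_eeg) / denom if denom > 0 else 0.0
        cleaned.append(c_eeg - beta * c_ref)   # remove the fitted artifact share
    return pywt.waverec(cleaned, wavelet)
```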

  4. 76 FR 76713 - California Independent System Operator Corporation; Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-08

    ...) proposal to eliminate convergence bidding at intertie scheduling points. Take notice that such conference...'s proposal to eliminate convergence bidding at intertie scheduling points. A subsequent notice... Act of 1973. For accessibility accommodations please send an email to [email protected] or call...

  5. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    PubMed

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for a variable-speed direct-drive Marine Current Turbine (MCT) system. The method is based on the MCT stator current under conditions of wave and turbulence. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and the environmental noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationships between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which turns the imbalance fault characteristic frequency into a constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments show that the proposed method is robust against turbulence, through comparisons across different fault severities and turbulence intensities. Compared with other methods, the experimental results indicate the feasibility and efficacy of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Numerical tilting compensation in microscopy based on wavefront sensing using transport of intensity equation method

    NASA Astrophysics Data System (ADS)

    Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu

    2018-03-01

    Wide-field microscopy is commonly used for sample observation in biological research and medical diagnosis. However, the tilting error induced by the oblique location of the image recorder or the sample, as well as the inclination of the optical path, often deteriorates the imaging quality. In order to eliminate tilting in microscopy, a numerical tilting compensation technique based on wavefront sensing using the transport of intensity equation method is proposed in this paper. Both the provided numerical simulations and practical experiments prove that the proposed technique not only accurately determines the tilting angle with a simple setup and procedure, but also compensates for the tilting error to improve imaging quality, even in cases of large tilt. Considering its simple system and operation, as well as its image quality improvement capability, it is believed the proposed method can be applied for tilting compensation in optical microscopy.

  7. Key management of the double random-phase-encoding method using public-key encryption

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2010-03-01

    Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image is encrypted using the double random-phase-encoding method with an extended fractional Fourier transform. The keys of the encryption process are encoded using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key is then transmitted to the receiver side along with the encrypted image. In the decryption process, the encoded key is first decrypted using the secret key, and then the encrypted image is decrypted using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key is eliminated by using public-key encryption. Computer simulations have been carried out to validate the proposed technique.
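
    A textbook-RSA sketch of the key-management step is shown below, purely to illustrate encrypting a (quantized) DRPE key parameter with a public key; the primes are toy values, and real use would rely on a proper RSA library with padding.

```python
# Textbook RSA on small numbers, only to illustrate how a DRPE key parameter
# could be encoded with a public key; these primes are far too small for
# real security, and production code should use an RSA library with OAEP.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent (Python 3.8+ modular inverse)

key_parameter = 1234        # e.g. a quantized fractional-order / phase-key value
cipher = pow(key_parameter, e, n)     # sender: encrypt with the public key
recovered = pow(cipher, d, n)         # receiver: decrypt with the secret key
assert recovered == key_parameter
```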

  8. Detecting Signage and Doors for Blind Navigation and Wayfinding

    PubMed Central

    Wang, Shuihua; Yang, Xiaodong; Tian, Yingli

    2013-01-01

    Signage plays a very important role in finding destinations in navigation and wayfinding applications. In this paper, we propose a novel framework to detect doors and signage to help blind people access unfamiliar indoor environments. In order to eliminate interfering information and improve the accuracy of signage detection, we first extract the attended areas by using a saliency map. Then the signage is detected in the attended areas by using bipartite graph matching. The proposed method can handle the detection of multiple signs. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door frame model which is independent of door appearance. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of our proposed method. PMID:23914345

  9. Detecting Signage and Doors for Blind Navigation and Wayfinding.

    PubMed

    Wang, Shuihua; Yang, Xiaodong; Tian, Yingli

    2013-07-01

    Signage plays a very important role in finding destinations in navigation and wayfinding applications. In this paper, we propose a novel framework to detect doors and signage to help blind people access unfamiliar indoor environments. In order to eliminate interfering information and improve the accuracy of signage detection, we first extract the attended areas by using a saliency map. Then the signage is detected in the attended areas by using bipartite graph matching. The proposed method can handle the detection of multiple signs. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door frame model which is independent of door appearance. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of our proposed method.

  10. Analysis of Network Vulnerability Under Joint Node and Link Attacks

    NASA Astrophysics Data System (ADS)

    Li, Yongcheng; Liu, Shumei; Yu, Yao; Cao, Ting

    2018-03-01

    The security problem of computer network systems is becoming more and more serious. The fundamental reason is that there are security vulnerabilities in the network system. Therefore, it is very important to identify and reduce or eliminate these vulnerabilities before they are attacked. In this paper, we are interested in joint node and link attacks and propose a vulnerability evaluation method based on the overall connectivity of the network to defend against this attack. In particular, we analyze the attack cost problem from the attackers' perspective. The purpose is to find the least-cost set of links and nodes whose deletion will lead to serious damage to network connectivity. The simulation results show that the vulnerable elements obtained by the proposed method better match the attack strategy of malicious actors in a joint node and link attack. It is easy to see that the proposed method has more realistic significance for protection.

  11. Automatic Detection of Pearlite Spheroidization Grade of Steel Using Optical Metallography.

    PubMed

    Chen, Naichao; Chen, Yingchao; Ai, Jun; Ren, Jianxin; Zhu, Rui; Ma, Xingchi; Han, Jun; Ma, Qingqian

    2016-02-01

    To eliminate the effect of subjective factors in the manual determination of the pearlite spheroidization grade of steel by analysis of optical metallography images, a novel method combining image mining and artificial neural networks (ANN) is proposed. Four co-occurrence-matrix features (angular second moment, contrast, correlation, and entropy) are adopted to objectively characterize the images. An ANN is employed to establish a mathematical model between these four features and the corresponding spheroidization grade. Three materials used in coal-fired power plants (ASTM A315-B steel, ASTM A335-P12 steel, and ASTM A355-P11 steel) were selected as the samples to test the validity of the proposed method. The results indicate that the accuracies of the calculated spheroidization grades reach 99.05, 95.46, and 93.63%, respectively. Hence, the newly proposed method is adequate for automatically detecting the pearlite spheroidization grade of steel using optical metallography.
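
    A hedged sketch of the feature-extraction step is shown below using scikit-image's gray-level co-occurrence matrix; the distance/angle settings, the entropy computation, and the downstream MLP classifier are illustrative assumptions rather than the paper's exact configuration (input images are assumed to be 8-bit grayscale).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_img):
    """Four texture features from the gray-level co-occurrence matrix:
    angular second moment (ASM), contrast, correlation, and entropy.
    gray_img is expected to be a uint8 image."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM")[0, 0]
    contrast = graycoprops(glcm, "contrast")[0, 0]
    correlation = graycoprops(glcm, "correlation")[0, 0]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return [asm, contrast, correlation, entropy]

# hypothetical training step: features -> spheroidization grade labels
# X = np.array([glcm_features(img) for img in training_images])
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, grades)
```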

  12. Differential phase-shift keying and channel equalization in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Wan, Xiongfeng; Xu, Chenlu

    2018-01-01

    We present the performance benefits of differential phase-shift keying (DPSK) modulation in eliminating the influence of atmospheric turbulence, especially for coherent free space optical (FSO) communication with a high communication rate. An analytic expression for the detected signal is derived, based on which the homodyne detection efficiency is calculated to indicate the performance of wavefront compensation. Considering that laser pulses always suffer from atmospheric scattering by clouds, intersymbol interference (ISI) in a high-speed FSO communication link is analyzed. Correspondingly, a channel equalization method based on a binormalized modified constant modulus algorithm with set-membership filtering (SM-BNMCMA) is proposed to solve the ISI problem. Finally, through comparison with existing channel equalization methods, its performance benefits in terms of both ISI elimination and convergence speed are verified. The research findings have theoretical significance for high-speed FSO communication systems.

  13. Matrix elimination method for the determination of precious metals in ores using electrothermal atomic absorption spectrometry.

    PubMed

    Salih, Bekir; Celikbiçak, Omür; Döker, Serhat; Doğan, Mehmet

    2007-03-28

    Poly(N-(hydroxymethyl)methacrylamide-1-allyl-2-thiourea) hydrogels, poly(NHMMA-ATU), were synthesized by gamma radiation using a (60)Co gamma source in the ternary mixture NHMMA-ATU-H(2)O. These hydrogels were used for the selective recovery, pre-concentration, and matrix elimination of gold, silver, platinum, and palladium from solutions containing trace amounts of precious metal ions. Elimination of inorganic matrices, such as different transition and heavy metal ions and anions, was performed by adjusting the solution pH to 0.5, which was the selective adsorption pH of the precious metal ions. Desorption of the precious metal ions was performed by using 0.8 M thiourea in 3 M HCl as the most efficient desorbing agent, with recovery values of more than 95%. In the desorption medium, the effect of thiourea on the atomic signal was eliminated by selecting proper pyrolysis and atomization temperatures for all precious metal ions. The precision and accuracy of the graphite furnace atomic absorption spectrometry (GFAAS) measurements were improved by applying the developed matrix elimination method, performing the adsorption at pH 0.5. Pre-concentration factors of the studied precious metal ions were found to be at least 1000-fold. Detection limits were found to be less than 10 ng L(-1) for all studied precious metal ions using the proposed pre-concentration method. Trace levels of the precious metals in sea water, anode slime, geological samples, and photographic fixer solutions were determined by GFAAS after applying the adsorption-desorption cycle on the poly(NHMMA-ATU) hydrogels.

  14. 76 FR 79676 - California Independent System Operator Corporation; Supplemental Notice of Agenda and Discussion...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-22

    ... Operator Corporation's (CAISO) proposal to eliminate convergence bidding at intertie scheduling points.\\1... proposal to eliminate convergence bidding at intertie scheduling points in detail. No formal presentations... 1973. For accessibility accommodations please send an email to [email protected] or call toll free...

  15. Enzyme activity assay of glycoprotein enzymes based on a boronate affinity molecularly imprinted 96-well microplate.

    PubMed

    Bi, Xiaodong; Liu, Zhen

    2014-12-16

    Enzyme activity assay is an important method in clinical diagnostics. However, conventional enzyme activity assay suffers from apparent interference from the sample matrix. Herein, we present a new format of enzyme activity assay that can effectively eliminate the effects of the sample matrix. The key is a 96-well microplate modified with molecularly imprinted polymer (MIP) prepared according to a newly proposed method called boronate affinity-based oriented surface imprinting. Alkaline phosphatase (ALP), a glycoprotein enzyme that has been routinely used as an indicator for several diseases in clinical tests, was taken as a representative target enzyme. The prepared MIP exhibited strong affinity toward the template enzyme (with a dissociation constant of 10(-10) M) as well as superb tolerance for interference. Thus, the enzyme molecules in a complicated sample matrix could be specifically captured and cleaned up for enzyme activity assay, which eliminated the interference from the sample matrix. On the other hand, because the boronate affinity MIP could well retain the enzymatic activity of glycoprotein enzymes, the enzyme captured by the MIP was directly used for activity assay. Thus, additional assay time and possible enzyme or activity loss due to an enzyme release step required by other methods were avoided. Assay of ALP in human serum was successfully demonstrated, suggesting a promising prospect of the proposed method in real-world applications.

  16. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at a low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.
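
    The matrix-pencil pole estimation referred to above can be sketched as follows. This is a generic single-band formulation (the paper's ICP prediction and iterative fusion steps are not reproduced), and the function name, pencil ratio and test signal are illustrative assumptions.

```python
# Minimal sketch of SVD-filtered matrix-pencil pole estimation for a sampled signal.
import numpy as np

def matrix_pencil_poles(y, n_poles, pencil_ratio=0.4):
    """Estimate poles z_k of y[n] ~ sum_k a_k * z_k**n."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    L = int(pencil_ratio * N)                                    # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])         # Hankel data matrix
    # Rank-n_poles truncation of the SVD suppresses the noise subspace
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    Yf = (U[:, :n_poles] * s[:n_poles]) @ Vh[:n_poles]
    Y1, Y2 = Yf[:, :-1], Yf[:, 1:]                               # shifted sub-matrices
    eigs = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    return eigs[np.argsort(-np.abs(eigs))[:n_poles]]             # dominant eigenvalues

# Usage: two noisy complex exponentials
n = np.arange(100)
true_poles = np.exp(1j * np.array([0.3, 0.8]))
y = sum(z ** n for z in true_poles) + 0.05 * (np.random.randn(100) + 1j * np.random.randn(100))
print(matrix_pencil_poles(y, n_poles=2))
```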

  17. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at a low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR. PMID:26781194

  18. An improved AE detection method of rail defect based on multi-level ANC with VSS-LMS

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Cui, Yiming; Wang, Yan; Sun, Mingjian; Hu, Hengshan

    2018-01-01

    In order to ensure the safety and reliability of the railway system, the Acoustic Emission (AE) method is employed to investigate rail defect detection. However, little attention has been paid to defect detection at high speed, especially for noise interference suppression. Based on AE technology, this paper presents an improved rail defect detection method using multi-level ANC with VSS-LMS. Multi-level noise cancellation based on SANC and ANC is utilized to eliminate complex noises at high speed, and a tongue-shaped curve with an index adjustment factor is proposed to enhance the performance of the variable step-size algorithm. Defect signals and reference signals are acquired by the rail-wheel test rig. The features of noise signals and defect signals are analyzed for effective detection. The effectiveness of the proposed method is demonstrated by comparison with the previous study, and different filter lengths are investigated to obtain better noise suppression performance. Meanwhile, the detection ability of the proposed method is verified at the top speed of the test rig. The results clearly illustrate that the proposed method is effective in detecting rail defects at high speed, especially for noise interference suppression.
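
    A minimal sketch of an adaptive noise canceller with a variable step-size LMS update is given below; the exponential step-size rule stands in for the paper's tongue-shaped curve with index adjustment factor, and all signals and parameter values are synthetic assumptions.

```python
# Adaptive noise cancellation with a simple variable step-size LMS (sketch only).
import numpy as np

def vss_lms_anc(primary, reference, order=32, mu_max=0.01, alpha=50.0):
    """primary = defect signal + correlated noise; reference = noise-only channel."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]          # newest reference sample first
        e = primary[n] - w @ x                            # error = cleaned output sample
        mu = mu_max * (1.0 - np.exp(-alpha * e * e))      # larger error -> larger step
        w += mu * e * x
        out[n] = e
    return out

# Usage with synthetic data: a short burst buried in correlated noise
rng = np.random.default_rng(0)
noise_src = rng.standard_normal(4000)                     # reference (noise-only) channel
corrupted_noise = np.convolve(noise_src, [0.6, 0.3, 0.1])[:4000]   # noise reaching the sensor
burst = np.zeros(4000)
burst[2000:2050] = np.sin(np.linspace(0, 20 * np.pi, 50))
cleaned = vss_lms_anc(burst + corrupted_noise, noise_src)
```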

  19. Phase modulated high density collinear holographic data storage system with phase-retrieval reference beam locking and orthogonal reference encoding.

    PubMed

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Huang, Yong; Tan, Xiaodi

    2018-02-19

    A novel phase modulation method for holographic data storage with phase-retrieval reference beam locking is proposed and incorporated into an amplitude-encoding collinear holographic storage system. Unlike the conventional phase retrieval method, the proposed method locks the data page and the corresponding phase-retrieval interference beam together at the same location with a sequential recording process, which eliminates piezoelectric elements, phase shift arrays and extra interference beams, making the system more compact and phase retrieval easier. To evaluate our proposed phase modulation method, we recorded and then recovered data pages with multilevel phase modulation using two spatial light modulators experimentally. For 4-level, 8-level, and 16-level phase modulation, we achieved the bit error rate (BER) of 0.3%, 1.5% and 6.6% respectively. To further improve data storage density, an orthogonal reference encoding multiplexing method at the same position of medium is also proposed and validated experimentally. We increased the code rate of pure 3/16 amplitude encoding method from 0.5 up to 1.0 and 1.5 using 4-level and 8-level phase modulation respectively.

  20. Pareto Design of State Feedback Tracking Control of a Biped Robot via Multiobjective PSO in Comparison with Sigma Method and Genetic Algorithms: Modified NSGAII and MATLAB's Toolbox

    PubMed Central

    Mahmoodabadi, M. J.; Taherkhorsandi, M.; Bagheri, A.

    2014-01-01

    An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of the controller are usually determined by a tedious trial and error process. To eliminate this process and design the parameters of the proposed controller, multiobjective evolutionary algorithms, that is, the proposed method, modified NSGAII, the Sigma method, and MATLAB's Toolbox MOGA, are employed in this study. Among the evolutionary optimization algorithms used to design the controller for biped robots, the proposed method performs better because it provides ample opportunities for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions, that is, the normalized summation of angle errors and the normalized summation of control effort. The obtained results elucidate the efficiency of the proposed controller in controlling a biped robot. PMID:24616619

  1. Mechanistic Study of the Gas-Phase In-Source Hofmann Elimination of Doubly Quaternized Cinchona-Alkaloid Based Phase-Transfer Catalysts by (+)-Electrospray Ionization/Tandem Mass Spectrometry.

    PubMed

    Yang, Rong-Sheng; Sheng, Huaming; Lexa, Katrina W; Sherer, Edward C; Zhang, Li-Kang; Xiang, Bangping; Helmy, Roy; Mao, Bing

    2017-03-01

    An unusual in-source fragmentation pattern observed for 14 doubly quaternized cinchona alkaloid-based phase-transfer catalysts (PTC) was studied using (+)-ESI high resolution mass spectrometry. Loss of the substituted benzyl cation (R1 or R2) was found to be the major product ion [M2+ - R1+ or R2+]+ in MS spectra of all PTC compounds. A Hofmann elimination product ion [M - H]+ was also observed. Only a small amount of the doubly charged M2+ ions was observed in the MS spectra, likely due to strong Coulombic repulsion between the two quaternary ammonium cations in the gas phase. The positive voltage in the MS inlet, but not the ESI probe, was found to induce this extensive fragmentation for all PTC dibromo-salts. Compound 1 was used as an example to illustrate the proposed in-source fragmentation mechanism. The mechanism of formation of the Hofmann elimination product ion [M - H]+ was further investigated using HRMS/MS, H/D exchange, and DFT calculations. The proposed formation of 2b as the major Hofmann elimination product ion was supported both by HRMS/MS and DFT calculations. Formation of product ion 2b through a concerted unimolecular Ei elimination pathway is proposed, rather than a bimolecular E2 elimination pathway as in common solution Hofmann eliminations.

  2. Mechanistic Study of the Gas-Phase In-Source Hofmann Elimination of Doubly Quaternized Cinchona-Alkaloid Based Phase-Transfer Catalysts by (+)-Electrospray Ionization/Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Yang, Rong-Sheng; Sheng, Huaming; Lexa, Katrina W.; Sherer, Edward C.; Zhang, Li-Kang; Xiang, Bangping; Helmy, Roy; Mao, Bing

    2017-03-01

    An unusual in-source fragmentation pattern observed for 14 doubly quaternized cinchona alkaloid-based phase-transfer catalysts (PTC) was studied using (+)-ESI high resolution mass spectrometry. Loss of the substituted benzyl cation (R1 or R2) was found to be the major product ion [M2+ - R1+ or R2+]+ in MS spectra of all PTC compounds. A Hofmann elimination product ion [M - H]+ was also observed. Only a small amount of the doubly charged M2+ ions was observed in the MS spectra, likely due to strong Coulombic repulsion between the two quaternary ammonium cations in the gas phase. The positive voltage in the MS inlet, but not the ESI probe, was found to induce this extensive fragmentation for all PTC dibromo-salts. Compound 1 was used as an example to illustrate the proposed in-source fragmentation mechanism. The mechanism of formation of the Hofmann elimination product ion [M - H]+ was further investigated using HRMS/MS, H/D exchange, and DFT calculations. The proposed formation of 2b as the major Hofmann elimination product ion was supported both by HRMS/MS and DFT calculations. Formation of product ion 2b through a concerted unimolecular Ei elimination pathway is proposed, rather than a bimolecular E2 elimination pathway as in common solution Hofmann eliminations.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madenoor Ramapriya, Gautham; Jiang, Zheyu; Tawarmalani, Mohit

    We propose a general method to consolidate distillation columns of a distillation configuration using heat and mass integration. The proposed method encompasses all heat and mass integrations known to date, and includes many more. Each heat and mass integration eliminates a distillation column, a condenser, a reboiler and the heat duty associated with a reboiler. Thus, heat and mass integration can potentially offer significant capital and operating cost benefits. In this talk, we will study the various possible heat and mass integrations in detail, and demonstrate their benefits using case studies. This work will lay out a framework to synthesize an entire new class of useful configurations based on heat and mass integration of distillation columns.

  4. Electropherogram of capillary zone electrophoresis with effective mobility axis as a transverse axis and its analytical utility. I. Transformation applying the hypothetical electroosmotic flow.

    PubMed

    Ikuta, N; Yamada, Y; Hirokawa, T

    2000-01-01

    For capillary zone electrophoresis, a new method of transformation from migration time to effective mobility was proposed, in which the mobility increase due to Joule heating and the relaxation effect of the potential gradient were eliminated successfully. The precision of the mobility evaluated by the proposed transformation was discussed in relation to the analysis of rare earth ions. By using the transformation, almost the same pherograms could be obtained even from the pherograms obtained originally at different applied voltages.

  5. Incipient Fault Detection for Rolling Element Bearings under Varying Speed Conditions.

    PubMed

    Xue, Lang; Li, Naipeng; Lei, Yaguo; Li, Ningbo

    2017-06-20

    Varying speed conditions bring a huge challenge to incipient fault detection of rolling element bearings because both the change of speed and faults could lead to the amplitude fluctuation of vibration signals. Effective detection methods need to be developed to eliminate the influence of speed variation. This paper proposes an incipient fault detection method for bearings under varying speed conditions. Firstly, relative residual (RR) features are extracted, which are insensitive to the varying speed conditions and are able to reflect the degradation trend of bearings. Then, a health indicator named selected negative log-likelihood probability (SNLLP) is constructed to fuse a feature set including RR features and non-dimensional features. Finally, based on the constructed SNLLP health indicator, a novel alarm trigger mechanism is designed to detect the incipient fault. The proposed method is demonstrated using vibration signals from bearing tests and industrial wind turbines. The results verify the effectiveness of the proposed method for incipient fault detection of rolling element bearings under varying speed conditions.

  6. Automatic removal of eye-movement and blink artifacts from EEG signals.

    PubMed

    Gao, Jun Feng; Yang, Yong; Lin, Pan; Wang, Pei; Zheng, Chong Xun

    2010-03-01

    Frequent occurrence of electrooculography (EOG) artifacts leads to serious problems in interpreting and analyzing the electroencephalogram (EEG). In this paper, a robust method is presented to automatically eliminate eye-movement and eye-blink artifacts from EEG signals. Independent Component Analysis (ICA) is used to decompose EEG signals into independent components. Moreover, the features of topographies and power spectral densities of those components are extracted to identify eye-movement artifact components, and a support vector machine (SVM) classifier is adopted because it has higher performance than several other classifiers. The classification results show that feature-extraction methods are unsuitable for identifying eye-blink artifact components, and then a novel peak detection algorithm of independent component (PDAIC) is proposed to identify eye-blink artifact components. Finally, the artifact removal method proposed here is evaluated by the comparisons of EEG data before and after artifact removal. The results indicate that the method proposed could remove EOG artifacts effectively from EEG signals with little distortion of the underlying brain signals.

  7. Singular boundary method for wave propagation analysis in periodic structures

    NASA Astrophysics Data System (ADS)

    Fu, Zhuojia; Chen, Wen; Wen, Pihua; Zhang, Chuanzeng

    2018-07-01

    A strong-form boundary collocation method, the singular boundary method (SBM), is developed in this paper for wave propagation analysis at low and moderate wavenumbers in periodic structures. The SBM has several advantages: it is mathematically simple, easy to program, and meshless, and it applies the concept of origin intensity factors to eliminate the singularity of the fundamental solutions and to avoid the numerical evaluation of the singular integrals required in the boundary element method. Due to the periodic behavior of the structures, the SBM coefficient matrix can be represented as a block Toeplitz matrix. By employing three different fast Toeplitz-matrix solvers, the computational time and storage requirements are significantly reduced in the proposed SBM analysis. To demonstrate the effectiveness of the proposed SBM formulation for wave propagation analysis in periodic structures, several benchmark examples are presented and discussed. The proposed SBM results are compared with the analytical solutions, the reference results and the COMSOL software.

  8. An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers

    PubMed Central

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-01-01

    In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner–Ville distribution (RSPWVD) has been proposed in interference detection for Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, the two-dimensional low-pass filtering smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time, the reassignment method is adopted to improve the TF concentration properties of the auto-terms of the signal components. This proposed interference detection method is evaluated by experiments on GPS L1 signals in the disturbing scenarios compared to the state-of-the-art interference detection approaches. The analysis results show that the proposed interference detection technique effectively overcomes the cross-terms problem and also preserves good TF localization properties, which has been proven to be effective and valid to enhance the interference detection performance of the GNSS receivers, particularly in the jamming environments. PMID:25905704

  9. Incipient Fault Detection for Rolling Element Bearings under Varying Speed Conditions

    PubMed Central

    Xue, Lang; Li, Naipeng; Lei, Yaguo; Li, Ningbo

    2017-01-01

    Varying speed conditions bring a huge challenge to incipient fault detection of rolling element bearings because both the change of speed and faults could lead to the amplitude fluctuation of vibration signals. Effective detection methods need to be developed to eliminate the influence of speed variation. This paper proposes an incipient fault detection method for bearings under varying speed conditions. Firstly, relative residual (RR) features are extracted, which are insensitive to the varying speed conditions and are able to reflect the degradation trend of bearings. Then, a health indicator named selected negative log-likelihood probability (SNLLP) is constructed to fuse a feature set including RR features and non-dimensional features. Finally, based on the constructed SNLLP health indicator, a novel alarm trigger mechanism is designed to detect the incipient fault. The proposed method is demonstrated using vibration signals from bearing tests and industrial wind turbines. The results verify the effectiveness of the proposed method for incipient fault detection of rolling element bearings under varying speed conditions. PMID:28773035

  10. Test pattern generation for ILA sequential circuits

    NASA Technical Reports Server (NTRS)

    Feng, YU; Frenzel, James F.; Maki, Gary K.

    1993-01-01

    An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.

  11. Fast generation of computer-generated holograms using wavelet shrinkage.

    PubMed

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
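
    The wavelet shrinkage step can be illustrated with PyWavelets as below; applying the hard threshold separately to the real and imaginary parts of a complex amplitude, and the choice of wavelet, level and keep ratio, are assumptions for illustration rather than the paper's exact procedure.

```python
# Wavelet shrinkage sketch: keep only the largest wavelet coefficients and reconstruct.
import numpy as np
import pywt

def shrink(field, wavelet="db2", level=3, keep_ratio=0.05):
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep_ratio)      # keep the top 5% of coefficients
    flat = pywt.threshold(flat, thresh, mode="hard")
    return pywt.waverec2(pywt.array_to_coeffs(flat, slices, output_format="wavedec2"), wavelet)

# Usage on a synthetic complex amplitude from a single object point (Fresnel-like phase term)
y, x = np.mgrid[0:256, 0:256]
r2 = (x - 128.0) ** 2 + (y - 128.0) ** 2
amplitude = np.exp(1j * 0.01 * r2)
approx = shrink(amplitude.real) + 1j * shrink(amplitude.imag)
```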

  12. A brain MRI bias field correction method created in the Gaussian multi-scale space

    NASA Astrophysics Data System (ADS)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after the gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
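
    A minimal sketch of the Gaussian multi-scale idea is given below: detail layers are taken as differences between the image and progressively smoothed copies and then recombined with weights; the scales, weights and gamma value are illustrative assumptions, not the paper's tuned settings.

```python
# Gaussian multi-scale detail extraction and weighted recombination (sketch only).
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_bias(image, sigmas=(2, 4, 8, 16, 32), weights=None, gamma=0.8):
    img = image.astype(float)
    weights = weights if weights is not None else np.ones(len(sigmas)) / len(sigmas)
    details = [img - gaussian_filter(img, s) for s in sigmas]   # detail layer at each scale
    recombined = sum(w * d for w, d in zip(weights, details))
    recombined -= recombined.min()                              # shift to a non-negative range
    recombined /= recombined.max() + 1e-12
    return recombined ** gamma                                  # contrast/brightness boost

# Usage: a synthetic image corrupted by a smooth multiplicative bias field
yy, xx = np.mgrid[0:256, 0:256]
clean = ((xx // 32 + yy // 32) % 2).astype(float)               # checkerboard "anatomy"
bias = 0.5 + 0.5 * np.exp(-((xx - 60) ** 2 + (yy - 60) ** 2) / 8000.0)
corrected = correct_bias(clean * bias)
```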

  13. Statokinesigram normalization method.

    PubMed

    de Oliveira, José Magalhães

    2017-02-01

    Stabilometry is a technique that aims to study the body sway of human subjects, employing a force platform. The signal obtained from this technique refers to the position of the foot base ground-reaction vector, known as the center of pressure (CoP). The parameters calculated from the signal are used to quantify the displacement of the CoP over time; there is a large variability, both between and within subjects, which prevents the definition of normative values. The intersubject variability is related to differences between subjects in terms of their anthropometry, in conjunction with their muscle activation patterns (biomechanics); and the intrasubject variability can be caused by a learning effect or fatigue. Age and foot placement on the platform are also known to influence variability. Normalization is the main method used to decrease this variability and to bring distributions of adjusted values into alignment. In 1996, O'Malley proposed three normalization techniques to eliminate the effect of age and anthropometric factors from temporal-distance parameters of gait. These techniques were adopted to normalize the stabilometric signal by some authors. This paper proposes a new method of normalization of stabilometric signals to be applied in balance studies. The method was applied to a data set collected in a previous study, and the results of normalized and nonnormalized signals were compared. The results showed that the new method, if used in a well-designed experiment, can eliminate undesirable correlations between the analyzed parameters and the subjects' characteristics and show only the experimental conditions' effects.

  14. 77 FR 74902 - Self-Regulatory Organizations; BOX Options Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-18

    ... exchanges will not have a negative effect on BOX market participants and investors. In particular, the... Proposed Rule Change To Eliminate Market Maker Pre-Opening Obligations on BOX December 12, 2012. Pursuant... ``Exchange'') proposes to amend Rule 8050 (Market Maker Quotations) to eliminate market maker pre-opening...

  15. Information filtering based on corrected redundancy-eliminating mass diffusion

    PubMed Central

    Zhu, Xuzhen; Yang, Yujie; Chen, Guilin; Medo, Matus; Tian, Hui

    2017-01-01

    Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects’ attributes. Based on an unweighted undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE) which is based on a spreading process on the network. Extensive experiments on three benchmark data sets—MovieLens, Netflix and Amazon—show that when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices. PMID:28749976
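
    For orientation, the baseline two-step mass diffusion on a user-object bipartite matrix can be sketched as below; the CRE redundancy-eliminating correction itself is not reproduced, and the toy network is an assumption.

```python
# Plain two-step mass diffusion (objects -> users -> objects) on a bipartite network.
import numpy as np

def mass_diffusion_scores(A, user):
    """A: users x objects binary matrix; returns recommendation scores for one user."""
    k_user = A.sum(axis=1)                                   # user degrees
    k_obj = A.sum(axis=0)                                    # object degrees
    # objects -> users: each collected object spreads its resource to its users
    resource_on_users = (A[user] / np.maximum(k_obj, 1)) @ A.T
    # users -> objects: each user redistributes the received resource to its objects
    scores = (resource_on_users / np.maximum(k_user, 1)) @ A
    scores[A[user] > 0] = -np.inf                            # do not re-recommend collected objects
    return scores

# Usage on a toy 4-user x 5-object network
A = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)
print(mass_diffusion_scores(A, user=0))
```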

  16. A conceptual design of shock-eliminating clover combustor for large scale scramjet engine

    NASA Astrophysics Data System (ADS)

    Sun, Ming-bo; Zhao, Yu-xin; Zhao, Guo-yan; Liu, Yuan

    2017-01-01

    A new concept of a shock-eliminating clover combustor is proposed for a large scale scramjet engine to fulfill the requirements of fuel penetration, total pressure recovery and cooling. To generate the circular-to-clover transition shape of the combustor, the streamline tracing technique is used based on an axisymmetric expansion parent flowfield calculated using the method of characteristics. The combustor is examined using inviscid and viscous numerical simulations, and a pure circular shape is calculated for comparison. The results showed that the combustor avoids shock wave generation and produces low total pressure losses over a wide range of flight conditions with various Mach numbers. The flameholding device for this combustor is briefly discussed.

  17. Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm

    NASA Astrophysics Data System (ADS)

    Moumen, Abdelkader; Sissaoui, Hocine

    2017-03-01

    Vulnerability of communication of digital images is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It is formulated based on symmetric encryption, asymmetric encryption and steganography theories. The image is encrypted using a symmetric algorithm; then, the secret key is encrypted by means of an asymmetric algorithm and is hidden in the ciphered image using a least-significant-bit steganographic scheme. The analysis results show that while enjoying the faster computation, our method performs close to optimal in terms of accuracy.

  18. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  19. Motion-compensated speckle tracking via particle filtering

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu

    2015-07-01

    Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence utilized in conventional ultrasonic imaging because of its high accuracy and relative simplicity in implementation. However, high time consumption is still a significant drawback of this space-domain method. To find a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle throughout the region of interest (ROI) that is superposed with global motion. The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.

  20. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature matching methods achieve high operating efficiency, but have the disadvantage of low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to increase feature point detection and matching accuracy. First, the color invariant transformation model is introduced for the two matching images, aiming at obtaining more color information during the matching process, and information entropy theory is used to obtain as much information as possible from the two matching images. Then the SURF algorithm is applied to detect and describe points from the images. Finally, constraint conditions including Delaunay triangulation construction, a similarity function and a projective invariant are employed to eliminate the mismatches so as to improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.

  1. Automatic generation of the non-holonomic equations of motion for vehicle stability analysis

    NASA Astrophysics Data System (ADS)

    Minaker, B. P.; Rieveley, R. J.

    2010-09-01

    The mathematical analysis of vehicle stability has been utilised as an important tool in the design, development, and evaluation of vehicle architectures and stability controls. This paper presents a novel method for automatic generation of the linearised equations of motion for mechanical systems that is well suited to vehicle stability analysis. Unlike conventional methods for generating linearised equations of motion in standard linear second order form, the proposed method allows for the analysis of systems with non-holonomic constraints. In the proposed method, the algebraic constraint equations are eliminated after linearisation and reduction to first order. The described method has been successfully applied to an assortment of classic dynamic problems of varying complexity including the classic rolling coin, the planar truck-trailer, and the bicycle, as well as in more recent problems such as a rotor-stator and a benchmark road vehicle with suspension. This method has also been applied in the design and analysis of a novel three-wheeled narrow tilting vehicle with zero roll-stiffness. An application for determining passively stable configurations using the proposed method together with a genetic search algorithm is detailed. The proposed method and software implementation has been shown to be robust and provides invaluable conceptual insight into the stability of vehicles and mechanical systems.

  2. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the nonuniformity that causes fixed pattern noise (FPN) by using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with the newly presented variable step size (VSS) normalized least-mean-square (NLMS) based adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method reaches high correction performance, which is validated by the experimental results quantitatively tested with a simulated testing sequence and a real infrared image sequence.

  3. Terminal Sliding Mode-Based Consensus Tracking Control for Networked Uncertain Mechanical Systems on Digraphs.

    PubMed

    Chen, Gang; Song, Yongduan; Guan, Yanfeng

    2018-03-01

    This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner. The proposed control method eliminates some limitations in the existing terminal sliding-mode-based consensus control methods and extends the existing analysis methods to the case of directed graphs. Simulation results on networked robot manipulators are provided to show the effectiveness of the proposed control algorithms.

  4. Simulation tests of the optimization method of Hopfield and Tank using neural networks

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    1988-01-01

    The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.

  5. Quantum chemical approach for condensed-phase thermochemistry (V): Development of rigid-body type harmonic solvation model

    NASA Astrophysics Data System (ADS)

    Tarumi, Moto; Nakai, Hiromi

    2018-05-01

    This letter proposes an approximate treatment of the harmonic solvation model (HSM) assuming the solute to be a rigid body (RB-HSM). The HSM method can appropriately estimate the Gibbs free energy for condensed phases even where an ideal gas model used by standard quantum chemical programs fails. The RB-HSM method eliminates calculations for intra-molecular vibrations in order to reduce the computational costs. Numerical assessments indicated that the RB-HSM method can evaluate entropies and internal energies with the same accuracy as the HSM method but with lower calculation costs.

  6. A real-time TV logo tracking method using template matching

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Sang, Xinzhu; Yan, Binbin; Leng, Junmin

    2012-11-01

    A fast and accurate TV logo detection method is presented based on real-time image filtering, noise elimination and recognition of image features including edge and gray level information. It is important to accurately extract the optical template from the sample video stream using the time averaging method; then different templates are used to match different logos in separate video streams with different resolutions, based on the topological features of the logos. 12 video streams with different logos are used to verify the proposed method, and the experimental results demonstrate that the achieved accuracy can be up to 99%.

  7. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  8. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    The application of parameter estimation to highly unstable aircraft is discussed. Included are a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  9. INSTRUMENTS AND METHODS OF INVESTIGATION: Spectral and spectral-frequency methods of investigating atmosphereless bodies of the Solar system

    NASA Astrophysics Data System (ADS)

    Busarev, Vladimir V.; Prokof'eva-Mikhailovskaya, Valentina V.; Bochkov, Valerii V.

    2007-06-01

    A method of reflectance spectrophotometry of atmosphereless bodies of the Solar system, its specificity, and the means of eliminating basic spectral noise are considered. As a development, joining the method of reflectance spectrophotometry with the frequency analysis of observational data series is proposed. The combined spectral-frequency method allows identification of formations with distinctive spectral features, and estimation of their sizes and distribution on the surface of atmosphereless celestial bodies. As applied to investigations of asteroids 21 Lutetia and 4 Vesta, the spectral-frequency method has given us the possibility of obtaining fundamentally new information about minor planets.

  10. Spectrophotometric total reducing sugars assay based on cupric reduction.

    PubMed

    Başkan, Kevser Sözgen; Tütem, Esma; Akyüz, Esin; Özen, Seda; Apak, Reşat

    2016-01-15

    As the concentration of reducing sugars (RS) is controlled by European legislation for certain specific foods and beverages, a simple and sensitive spectrophotometric method for the determination of RS in various food products is proposed. The method is based on the reduction of Cu(II) to Cu(I) with reducing sugars in alkaline medium in the presence of 2,9-dimethyl-1,10-phenanthroline (neocuproine: Nc), followed by the formation of a colored Cu(I)-Nc charge-transfer complex. All simple sugars tested gave linear regression equations with almost equal slope values. The proposed method was successfully applied to fresh apple juice, commercial fruit juices, milk, honey and onion juice. The interference effect of phenolic compounds in plant samples was eliminated by a solid phase extraction (SPE) clean-up process. The method was proven to have higher sensitivity and precision than the widely used dinitrosalicylic acid (DNS) colorimetric method. Copyright © 2015 Elsevier B.V. All rights reserved.
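
    The calibration step behind such an assay amounts to a linear (Beer's law) fit of absorbance against standard concentrations, inverted for unknown samples; the numbers below are illustrative, not the paper's data.

```python
# Linear calibration curve and inversion for an unknown sample (illustrative values).
import numpy as np

conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])        # mg/L glucose standards (assumed)
absorbance = np.array([0.02, 0.11, 0.20, 0.39, 0.78])

slope, intercept = np.polyfit(conc, absorbance, 1)      # A = slope * c + intercept
r = np.corrcoef(conc, absorbance)[0, 1]

sample_abs = 0.31
sample_conc = (sample_abs - intercept) / slope
print(f"A = {slope:.4f}*c + {intercept:.4f}, r = {r:.4f}, sample ≈ {sample_conc:.1f} mg/L")
```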

  11. Demodulation method for tilted fiber Bragg grating refractometer with high sensitivity

    NASA Astrophysics Data System (ADS)

    Pham, Xuantung; Si, Jinhai; Chen, Tao; Wang, Ruize; Yan, Lihe; Cao, Houjun; Hou, Xun

    2018-05-01

    In this paper, we propose a demodulation method for refractive index (RI) sensing with tilted fiber Bragg gratings (TFBGs). It operates by monitoring the TFBG cladding mode resonance "cut-off wavelengths." The idea of a "cut-off wavelength" and its determination method are introduced. The RI sensitivities of TFBGs are significantly enhanced in certain RI ranges by using our demodulation method. The temperature-induced cross sensitivity is eliminated. We also demonstrate a parallel-double-angle TFBG (PDTFBG), in which two individual TFBGs are inscribed in the fiber core in parallel using a femtosecond laser and a phase mask. The RI sensing range of the PDTFBG is significantly broader than that of a conventional single-angle TFBG. In addition, its RI sensitivity can reach 1023.1 nm/refractive index unit in the 1.4401-1.4570 RI range when our proposed demodulation method is used.

  12. Determining the number of chemical species in nuclear magnetic resonance data matrix by taking advantage of collinearity and noise.

    PubMed

    Wang, Wanping; Shao, Limin; Yuan, Bin; Zhang, Xu; Liu, Maili

    2018-08-31

    The number of chemical species is crucial in analyzing pulsed field gradient nuclear magnetic resonance spectral data. Any method to determine the number must handle the obstacles of collinearity and noise. Collinearity in pulsed field gradient NMR data poses a serious challenge to and fails many existing methods. A novel method is proposed by taking advantage of the two obstacles instead of eliminating them. In the proposed method, the determination is based on discriminating decay-profile-dominant eigenvectors from noise-dominant ones, and the discrimination is implemented with a novel low- and high-frequency energy ratio (LHFER). Its performance is validated with both simulated and experimental data. The method is mathematically rigorous, computationally efficient, and readily automated. It also has the potential to be applied to other types of data in which collinearity is fairly severe. Copyright © 2018 Elsevier B.V. All rights reserved.
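
    A simplified sketch of the underlying idea is shown below: singular vectors dominated by smooth decay profiles concentrate their spectral energy at low frequencies, whereas noise-dominated ones do not. The frequency cut-off, threshold and toy data are assumptions, and the exact LHFER definition in the paper may differ.

```python
# Discriminating decay-profile-dominant singular vectors from noise-dominant ones
# by a low/high-frequency energy ratio (simplified sketch).
import numpy as np

def low_high_energy_ratio(v, low_bins=5):
    spec = np.abs(np.fft.rfft(v - v.mean())) ** 2
    low = spec[1:1 + low_bins].sum()
    high = spec[1 + low_bins:].sum()
    return low / (high + 1e-12)

def estimate_rank(D, ratio_threshold=10.0):
    """D: gradient-levels x chemical-shift-points data matrix (PFG-NMR-like)."""
    U, s, Vh = np.linalg.svd(D, full_matrices=False)
    ratios = [low_high_energy_ratio(u) for u in U.T]        # decay dimension lives in U
    return sum(r > ratio_threshold for r in ratios)

# Usage: two overlapping (fairly collinear) exponential decay profiles plus noise
g = np.linspace(0, 1, 64)[:, None]                           # gradient axis
profiles = np.hstack([np.exp(-2 * g ** 2), np.exp(-6 * g ** 2)])
spectra = np.random.rand(2, 512)                             # two component "spectra"
D = profiles @ spectra + 0.01 * np.random.randn(64, 512)
print(estimate_rank(D))
```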

  13. A novel star extraction method based on modified water flow model

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Ouyang, Zibiao; Yang, Yanqiang

    2017-11-01

    Star extraction is an essential procedure for the attitude measurement of a star sensor. The great challenge for star extraction is to segment the star area exactly from various noise and backgrounds. In this paper, a novel star extraction method based on a Modified Water Flow Model (MWFM) is proposed. The star image is regarded as a 3D terrain. Morphology is adopted for noise elimination and Tentative Star Area (TSA) selection. The star area can be extracted through adaptive water flowing within the TSAs. This method can achieve accurate star extraction with improved efficiency under complex conditions such as heavy noise and uneven backgrounds. Several groups of different types of star images are processed using the proposed method, and comparisons with existing methods are conducted. Experimental results show that the MWFM performs excellently under different imaging conditions. The star extraction rate is better than 95%. The star centroid accuracy is better than 0.075 pixels. The time consumption is also significantly reduced.

  14. Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel

    PubMed Central

    Akbari, Mohsen; Manesh, Mohsen Riahi

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using the maximal ratio combining (MRC) provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which results in deteriorating the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary based algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across the imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need of channel estimation and can outperform the conventional diversity combining methods. PMID:25045725

  15. Robust pre-specified time synchronization of chaotic systems by employing time-varying switching surfaces in the sliding mode control scheme

    NASA Astrophysics Data System (ADS)

    Khanzadeh, Alireza; Pourgholi, Mahdi

    2016-08-01

    In conventional chaos synchronization methods, the time at which two chaotic systems are synchronized is usually unknown and depends on the initial conditions. In this work, based on Lyapunov stability theory, a sliding mode controller with time-varying switching surfaces is proposed to achieve chaos synchronization at a pre-specified time for the first time. The proposed controller is able to synchronize chaotic systems precisely at any desired time. Moreover, by choosing the time-varying switching surfaces in a way that eliminates the reaching phase, the synchronization becomes robust to uncertainties and exogenous disturbances. Simulation results are presented to show the effectiveness of the proposed method of stabilizing and synchronizing chaotic systems with complete robustness to uncertainty and disturbances exactly at a pre-specified time.

  16. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    A reliable state estimate depends largely on an accurate battery model. However, the parameters of the battery model are time varying with operating condition variation and battery aging. Existing co-estimation methods address the model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. Thus this paper proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
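
    The online identification step can be sketched with a generic recursive least squares (RLS) update as below; the regressor structure, forgetting factor and toy system are assumptions, and the EKF joint SOC/capacity estimator is not shown.

```python
# Generic recursive least squares with a forgetting factor (sketch only).
import numpy as np

class RLS:
    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)            # parameter vector
        self.P = 1e3 * np.eye(n_params)            # covariance matrix
        self.lam = forgetting

    def update(self, phi, y):
        """phi: regressor vector, y: measured output (e.g. terminal voltage)."""
        P_phi = self.P @ phi
        gain = P_phi / (self.lam + phi @ P_phi)
        err = y - phi @ self.theta
        self.theta += gain * err
        self.P = (self.P - np.outer(gain, P_phi)) / self.lam
        return self.theta

# Usage: identify a static relation y = 0.5*x1 - 2.0*x2 + noise
rls = RLS(n_params=2)
rng = np.random.default_rng(1)
for _ in range(500):
    phi = rng.standard_normal(2)
    y = 0.5 * phi[0] - 2.0 * phi[1] + 0.01 * rng.standard_normal()
    theta = rls.update(phi, y)
print(theta)        # should approach [0.5, -2.0]
```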

  17. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of stereo deflectometry. The PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.

  18. Numerical method based on transfer function for eliminating water vapor noise from terahertz spectra.

    PubMed

    Huang, Y; Sun, P; Zhang, Z; Jin, C

    2017-07-10

    Water vapor noise in the air affects the accuracy of optical parameters extracted from terahertz (THz) time-domain spectroscopy. In this paper, a numerical method was proposed to eliminate water vapor noise from the THz spectra. According to the Van Vleck-Weisskopf function and the linear absorption spectrum of water molecules in the HITRAN database, we simulated the water vapor absorption spectrum and real refractive index spectrum with a particular line width. The continuum effect of water vapor molecules was also considered. Theoretical transfer function of a different humidity was constructed through the theoretical calculation of the water vapor absorption coefficient and the real refractive index. The THz signal of the Lacidipine sample containing water vapor background noise in the continuous frequency domain of 0.5-1.8 THz was denoised by use of the method. The results show that the optical parameters extracted from the denoised signal are closer to the optical parameters in the dry nitrogen environment.
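
    The correction step amounts to dividing the measured spectrum by a modelled water-vapour transfer function; the sketch below uses synthetic stand-ins for the absorption coefficient and refractive index rather than the Van Vleck-Weisskopf/HITRAN computation described in the paper, and the path length and line parameters are assumptions.

```python
# Removing a modelled water-vapour transfer function from a measured THz spectrum (sketch).
import numpy as np

c = 3.0e8                                     # speed of light, m/s
L = 0.3                                       # air path length, m (assumed)
f = np.linspace(0.5e12, 1.8e12, 1024)         # frequency axis, Hz

# Stand-in water vapour model: a few Lorentzian absorption lines
line_centres = np.array([0.557e12, 0.752e12, 1.097e12, 1.163e12, 1.411e12, 1.669e12])
alpha = sum(8.0 / (1.0 + ((f - f0) / 5e9) ** 2) for f0 in line_centres)   # absorption, 1/m
n_minus_1 = 1e-5 * np.ones_like(f)            # flat real refractive index offset (assumed)

# Transfer function H(f) = exp(-alpha*L/2) * exp(-i*2*pi*f*(n-1)*L/c)
H = np.exp(-alpha * L / 2.0) * np.exp(-1j * 2 * np.pi * f * n_minus_1 * L / c)

measured = np.exp(-0.5 * (f / 1.2e12) ** 2) * H   # sample response times vapour response
denoised = measured / H                           # water vapour contribution removed
```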

  19. Brain early infarct detection using gamma correction extreme-level eliminating with weighting distribution.

    PubMed

    Teh, V; Sim, K S; Wong, E K

    2016-11-01

    According to statistics from the World Health Organization (WHO), stroke is one of the major causes of death globally. Computed tomography (CT) scanning is one of the main medical diagnosis systems used for the diagnosis of ischemic stroke. CT scans provide brain images in Digital Imaging and Communications in Medicine (DICOM) format. The presentation of CT brain images relies mainly on the window setting (window center and window width), which converts an image from DICOM format into normal grayscale format. Nevertheless, the ordinary window parameters cannot deliver proper contrast on CT brain images for ischemic stroke detection. In this paper, a new proposed method, namely gamma correction extreme-level eliminating with weighting distribution (GCELEWD), is implemented to improve the contrast of CT brain images. GCELEWD is capable of highlighting the hypodense region for diagnosis of ischemic stroke. The performance of this new proposed technique, GCELEWD, is compared with four existing contrast enhancement techniques, namely brightness preserving bi-histogram equalization (BBHE), dualistic sub-image histogram equalization (DSIHE), extreme-level eliminating histogram equalization (ELEHE), and adaptive gamma correction with weighting distribution (AGCWD). GCELEWD shows better visualization for ischemic stroke detection and higher values with the image quality assessment (IQA) module. SCANNING 38:842-856, 2016. © 2016 Wiley Periodicals, Inc.
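
    For context, the adaptive gamma correction with weighting distribution (AGCWD) baseline that GCELEWD extends can be sketched as below; the extreme-level eliminating step is not reproduced, and alpha and the test image are illustrative assumptions.

```python
# AGCWD-style contrast enhancement: weighted histogram -> per-level gamma -> look-up table.
import numpy as np

def agc_weighting_distribution(img_u8, alpha=0.5):
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    pdf = hist / hist.sum()
    pdf_w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()            # weighted cumulative distribution
    gamma = 1.0 - cdf_w                               # per-intensity gamma exponent
    levels = np.arange(256) / 255.0
    lut = 255.0 * levels ** gamma                     # transformation look-up table
    return lut[img_u8].astype(np.uint8)

# Usage on a synthetic low-contrast image
img = (40 + 30 * np.random.rand(256, 256)).astype(np.uint8)
enhanced = agc_weighting_distribution(img)
```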

  20. Wiltech Component Cleaning and Refurbishment Facility CFC Elimination Plan at NASA Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Williamson, Steve; Aman, Bob; Aurigema, Andrew; Melendez, Orlando

    1999-01-01

    The Wiltech Component Cleaning & Refurbishment Facility (WT-CCRF) at NASA Kennedy Space Center performs precision cleaning on approximately 200,000 metallic and non-metallic components every year. WT-CCRF has developed a CFC elimination plan consisting of aqueous cleaning and verification and an economical dual-solvent strategy for the alternative solvent solution. Aqueous verification methodologies were implemented two years ago on a variety of Ground Support Equipment (GSE) components and sampling equipment. Today, 50% of the current workload is verified using aqueous methods and 90% of the total workload is degreased aqueously using Zonyl and Brulin surfactants in ultrasonic baths. An additional estimated 20% solvent savings could be achieved if the proposed expanded use of aqueous methods is approved. Aqueous cleaning has been shown to be effective, environmentally friendly and economical (i.e., cost of materials, equipment, facilities and labor).

  1. Refraction error correction for deformation measurement by digital image correlation at elevated temperature

    NASA Astrophysics Data System (ADS)

    Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji

    2017-03-01

    An effective correction model is proposed to eliminate the refraction error effect caused by an optical window of a furnace in digital image correlation (DIC) deformation measurement under high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model by the correlation calculation results under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of DIC method by decreasing the refraction errors from measured full-field displacements under high-temperature environment.

  2. Quantum chemical modeling of the inhibition mechanism of monoamine oxidase by oxazolidinone and analogous heterocyclic compounds.

    PubMed

    Erdem, Safiye Sağ; Özpınar, Gül Altınbaş; Boz, Ümüt

    2014-02-01

    Monoamine oxidase (MAO, EC 1.4.3.4) is responsible from the oxidation of a variety of amine neurotransmitters. MAO inhibitors are used for the treatment of depression or Parkinson's disease. They also inhibit the catabolism of dietary amines. According to one hypothesis, inactivation results from the formation of a covalent adduct to a cysteine residue in the enzyme. If the adduct is stable enough, the enzyme is inhibited for a long time. After a while, enzyme can turn to its active form as a result of adduct breakdown by β-elimination. In this study, the proposed inactivation mechanism was modeled and tested by quantum chemical calculations. Eight heterocyclic methylthioamine derivatives were selected to represent the proposed covalent adducts. Activation energies related to their β-elimination reactions were calculated using ab initio and density functional theory methods. Calculated activation energies were in good agreement with the relative stabilities of the hypothetical adducts predicted in the literature by enzyme inactivation measurements.

  3. Projection-based estimation and nonuniformity correction of sensitivity profiles in phased-array surface coils.

    PubMed

    Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook

    2007-03-01

    To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.
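
    A rough sketch of the projection-based idea follows: each projection of the coil image is smoothed with a low-order polynomial and a smooth sensitivity estimate is reconstructed by filtered back-projection. The use of scikit-image's radon/iradon, the polynomial degree and the synthetic image are assumptions, and the paper's specific nonlinear curve model is not reproduced.

```python
# Projection smoothing followed by filtered back-projection (sketch only).
import numpy as np
from skimage.transform import radon, iradon

def estimate_sensitivity(coil_image, n_angles=90, poly_degree=4):
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(coil_image, theta=theta)                  # one projection view per column
    t = np.arange(sino.shape[0])
    smooth = np.empty_like(sino)
    for k in range(sino.shape[1]):                         # fit each projection view
        coeffs = np.polyfit(t, sino[:, k], poly_degree)
        smooth[:, k] = np.polyval(coeffs, t)
    return iradon(smooth, theta=theta)                     # low-frequency profile estimate

# Usage: surface-coil-like image = anatomy modulated by a smooth sensitivity falloff
yy, xx = np.mgrid[0:128, 0:128]
anatomy = ((xx // 16 + yy // 16) % 2).astype(float) + 0.5
sensitivity = np.exp(-((xx - 20) ** 2 + (yy - 64) ** 2) / 3000.0)
profile = estimate_sensitivity(anatomy * sensitivity)
```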

  4. Checkerboard II: An Analysis of Tax Effort, Equalization and Extraordinary Needs Aids

    ERIC Educational Resources Information Center

    Widerquist, Karl

    2001-01-01

    A proposal in the New York State Assembly in 2000 considered eliminating Tax Equalization Aid to school districts in order to fund the elimination of aid caps, called Transition Adjustment. In response to that proposal, this report examines the equalizing or disequalizing effects of three types of New York state aid to school…

  5. An Efficient Seam Elimination Method for UAV Images Based on Wallis Dodging and Gaussian Distance Weight Enhancement

    PubMed Central

    Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang

    2016-01-01

    The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing conforms to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed of massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference of brightness between the two matched images, and the parameters in the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the theory of the First Law of Geography, which can share the partial dislocation in the seam to the whole overlap region with an effect of smooth transition. This method was validated at a study site located in Hanwang (Sichuan, China), which was a seriously damaged area in the 12 May 2008 Wenchuan Earthquake. Then, a performance comparison between WD-GDWE and five other classical seam elimination algorithms was conducted in terms of efficiency and effectiveness. Results showed that WD-GDWE is not only efficient, but also effective. This method is promising for advancing applications in the UAV industry, especially in emergency situations. PMID:27171091

  6. An Efficient Seam Elimination Method for UAV Images Based on Wallis Dodging and Gaussian Distance Weight Enhancement.

    PubMed

    Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang

    2016-05-10

    The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing conforms to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed of massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference of brightness between the two matched images, and the parameters in the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the theory of the First Law of Geography, which can share the partial dislocation in the seam to the whole overlap region with an effect of smooth transition. This method was validated at a study site located in Hanwang (Sichuan, China), which was a seriously damaged area in the 12 May 2008 Wenchuan Earthquake. Then, a performance comparison between WD-GDWE and five other classical seam elimination algorithms was conducted in terms of efficiency and effectiveness. Results showed that WD-GDWE is not only efficient, but also effective. This method is promising for advancing applications in the UAV industry, especially in emergency situations.

  7. Online Doppler Effect Elimination Based on Unequal Time Interval Sampling for Wayside Acoustic Bearing Fault Detecting System

    PubMed Central

    Ouyang, Kesai; Lu, Siliang; Zhang, Shangbin; Zhang, Haibin; He, Qingbo; Kong, Fanrang

    2015-01-01

    The railway occupies a fairly important position in transportation due to its high speed and strong transportation capability. As a consequence, it is a key issue to guarantee continuous running and transportation safety of trains. Meanwhile, time consumption of the diagnosis procedure is of extreme importance for the detecting system. However, most of the current adopted techniques in the wayside acoustic defective bearing detector system (ADBD) are offline strategies, which means that the signal is analyzed after the sampling process. This would result in unavoidable time latency. Besides, the acquired acoustic signal would be corrupted by the Doppler effect because of high relative speed between the train and the data acquisition system (DAS). Thus, it is difficult to effectively diagnose the bearing defects immediately. In this paper, a new strategy called online Doppler effect elimination (ODEE) is proposed to remove the Doppler distortion online by the introduced unequal interval sampling scheme. The steps of proposed strategy are as follows: The essential parameters are acquired in advance. Then, the introduced unequal time interval sampling strategy is used to restore the Doppler distortion signal, and the amplitude of the signal is demodulated as well. Thus, the restored Doppler-free signal is obtained online. The proposed ODEE method has been employed in simulation analysis. Ultimately, the ODEE method is implemented in the embedded system for fault diagnosis of the train bearing. The results are in good accordance with the bearing defects, which verifies the good performance of the proposed strategy. PMID:26343657

  8. Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise.

    PubMed

    Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan

    2018-03-09

    This paper deals with joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar with the coexistence of unknown mutual coupling and spatial colored noise by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is firstly formulated for capturing the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed for eliminating the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained by using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to achieve the DODs and DOAs. Compared with the existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance, and the estimated DODs and DOAs are paired automatically. Results from numerical experiments are presented to verify the effectiveness of the proposed method.

  9. Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise

    PubMed Central

    Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan

    2018-01-01

    This paper deals with joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar with the coexistence of unknown mutual coupling and spatial colored noise by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is firstly formulated for capturing the multidimensional nature of the received data. Then, taking advantage of the temporally uncorrelated characteristic of colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed for eliminating the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained by using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to achieve the DODs and DOAs. Compared with the existing HOSVD-based subspace methods, the proposed method provides superior angle estimation performance, and the estimated DODs and DOAs are paired automatically. Results from numerical experiments are presented to verify the effectiveness of the proposed method. PMID:29522499

  10. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    NASA Astrophysics Data System (ADS)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering the conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in poor conditions.
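
    The feature-point extraction step above relies on standard OpenCV building blocks (thresholding, contour extraction, convex hull and least-squares ellipse fitting). The sketch below strings those calls together for an 8-bit grayscale infrared frame with OpenCV 4.x; it is only a rough stand-in for the paper's interference-point elimination and pose pipeline, and the area threshold is an illustrative assumption.

      import cv2
      import numpy as np

      def detect_drogue_markers(ir_image, min_area=5.0):
          # Otsu threshold the IR frame and extract bright blob centroids.
          _, binary = cv2.threshold(ir_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          # OpenCV 4.x: findContours returns (contours, hierarchy).
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          centers = []
          for c in contours:
              if cv2.contourArea(c) >= min_area:
                  m = cv2.moments(c)
                  centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
          pts = np.array(centers, dtype=np.float32)
          if len(pts) < 5:
              return pts, None, None
          hull = cv2.convexHull(pts)          # candidate outer ring; interior points are suspects
          ellipse = cv2.fitEllipse(pts)       # direct least-squares ellipse fit to the marker ring
          return pts, hull, ellipse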

  11. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method that aims to stitch images with overlapping areas more seamlessly has been proposed. Considering that the traditional gradient-domain optimal seam method gives poor color-difference measurement and that fusion algorithms are time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method exhibits better performance in eliminating the stitching seam than the traditional gradient-domain optimal seam method, and higher efficiency than the multi-band blending algorithm.

  12. Fast computation of the multivariable stability margin for real interrelated uncertain parameters

    NASA Technical Reports Server (NTRS)

    Sideris, Athanasios; Sanchez Pena, Ricardo S.

    1988-01-01

    A novel algorithm for computing the multivariable stability margin for checking the robust stability of feedback systems with real parametric uncertainty is proposed. This method eliminates the need for the frequency search involved in another given algorithm by reducing it to checking a finite number of conditions. These conditions have a special structure, which allows a significant improvement on the speed of computations.

  13. 75 FR 80870 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Order Granting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-23

    ... Proposed Rule Change To Eliminate the Validated Cross Trade Entry Functionality December 16, 2010. Pursuant... eliminate the Validated Cross Trade Entry Functionality for Exchange-registered Institutional Brokers. The... Brokers (``Institutional Brokers'') by eliminating the ability of an Institutional Broker to execute...

  14. 77 FR 32155 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-31

    ... Company, Into and With NYSE Group, Inc. (``NYSE Group''), Thereby Eliminating Archipelago Holdings From...''), an intermediate holding company, into and with NYSE Group, Inc. (``NYSE Group''), thereby eliminating... company, into and with NYSE Group, thereby eliminating Archipelago Holdings from the ownership structure...

  15. 77 FR 12637 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-01

    ... Equities Definition of Approved Person To Exclude Foreign Affiliates, Eliminating the Application Process..., eliminate the application process for approved persons, and make related technical and conforming changes..., eliminate the application process for approved persons, and make related technical and conforming changes...

  16. 75 FR 26832 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-12

    ..., Inc. to Clarify, Eliminate, Revise, or Delete Certain Out-Dated or Obsolete Rules May 6, 2010... its rules by clarifying existing provisions, eliminating superfluous provisions, and revising or... rules in order to clarify existing provisions, eliminate superfluous provisions, delete certain out...

  17. 77 FR 20869 - Self-Regulatory Organizations; NYSE Amex LLC; Order Approving a Proposed Rule Change Amending the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-06

    ... Equities Definition of Approved Person To Exclude Foreign Affiliates, Eliminating the Application Process... Equities definition of approved person to exclude foreign affiliates, eliminate the application process for... amend the NYSE Amex Equities definition of ``approved person'' to exclude foreign affiliates, eliminate...

  18. 76 FR 44656 - Proposed Collection; Comment Request for Revenue Procedure(s)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-26

    ... disaffililiation and procedure to eliminate impediments to e-filing consolidated returns and reduce reporting... Eliminate Impediments to E-Filing Consolidated Returns and Reduce Reporting Requirements. OMB Number: 1545... eliminate impediments to the electronic filing of Federal income tax returns (e- filing) and to reduce the...

  19. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity caused by economic limitations, acquisition environmental constraints or bad trace elimination can decrease the performance of subsequent multi-channel algorithms, such as surface-related multiple elimination (SRME), though some algorithms can partly overcome the irregularity defects. Therefore, accurate interpolation to provide the necessary complete data is a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D explorations. For accurate and efficient interpolation, the curvelet transform- (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize their original signal with a high accuracy, but are at least half the size, which can help provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when CT is performed on the PFK domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method can achieve a better interpolation result, and it can be easily extended into higher dimensions.
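
    The POCS loop itself is compact; the sketch below shows its structure with a 2-D FFT and a linearly decaying threshold standing in for the complex-valued curvelet transform in the principal frequency wavenumber domain, so it is a simplified analogue of the proposed scheme rather than a reimplementation.

      import numpy as np

      def pocs_interpolation(data, mask, n_iter=50, p_max=0.99, p_min=0.01):
          # data: 2-D gather with zeros at missing traces; mask: 1 at observed samples, 0 elsewhere.
          recon = data.copy()
          for k in range(n_iter):
              coeffs = np.fft.fft2(recon)
              # Linearly decreasing threshold, a common POCS schedule.
              thresh = (p_max + (p_min - p_max) * k / (n_iter - 1)) * np.abs(coeffs).max()
              coeffs[np.abs(coeffs) < thresh] = 0.0
              denoised = np.real(np.fft.ifft2(coeffs))
              # Projection onto the data-consistency set: re-insert the observed samples.
              recon = mask * data + (1.0 - mask) * denoised
          return recon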

  20. A robust fingerprint matching algorithm based on compatibility of star structures

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Feng, Jufu

    2009-10-01

    In fingerprint verification or identification systems, most minutiae-based matching algorithms suffer from the problems of non-linear distortion and missing or fake minutiae. Local structures such as the triangle or k-nearest structure are widely used to reduce the impact of non-linear distortion, but still suffer from missing and fake minutiae. In our proposed method, a star structure is used to represent the local structure. A star structure contains a variable number of minutiae and is thus more robust to missing and fake minutiae. Our method consists of four steps: 1) Constructing star structures at the minutia level; 2) Computing a similarity score for each structure pair and eliminating impostor matched pairs with low scores. As it is generally assumed that there is only linear distortion in a local area, the similarity is defined by rotation and shifting. 3) Voting for the remaining matched pairs according to the compatibility between them, and eliminating impostor matched pairs that gain few votes. The concept of compatibility was first introduced by Yansong Feng [4]; the original definition is based only on triangles. We define the compatibility for star structures to adapt it to our proposed algorithm. 4) Computing the matching score based on the number of matched structures and their voting scores. The score also reflects the fact that matches in denser minutia areas should receive higher scores. Experiments on FVC 2004 show both the effectiveness and efficiency of our method.

  1. Fast magnetic resonance imaging based on high degree total variation

    NASA Astrophysics Data System (ADS)

    Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng

    2018-04-01

    In order to eliminate the artifacts and "staircase effect" of total variation in compressive sensing MRI, a high-degree total variation model is proposed for dynamic MRI reconstruction. The high-degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iterative weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm. The results show that the high-degree total variation method has a better reconstruction effect than total variation and total generalized variation, obtaining a higher reconstruction SNR and better structural similarity.

  2. 77 FR 29721 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-18

    ...''). The Exchange believes that, because SPX, SPXPM, and SPY options are ultimately derivative of the same... NYSE Amex Options Rule 904 To Eliminate Position Limits for Options on the SPDR[supreg] S&P 500[supreg... Change The Exchange proposes to amend Commentary .07 to NYSE Amex Options Rule 904 to eliminate position...

  3. 78 FR 34024 - Endangered Fish and Wildlife; Proposed Rule To Eliminate the Expiration Date Contained in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... expire December 9, 2013, unless the sunset clause is removed. NMFS seeks public comment on the Proposed Rule to eliminate the sunset clause and on metrics for assessing the long term costs and benefits of... additional cost to the affected public. Notwithstanding any other provision of the law, no person is required...

  4. 78 FR 51255 - Self-Regulatory Organizations; BATS Y-Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-20

    ... Rule Change To Eliminate Rule Related to CYCLE Routing August 14, 2013. Pursuant to Section 19(b)(1) of... the CYCLE Routing option, effective as of September 3, 2013. The text of the proposed rule change is... proposing to eliminate Rule 11.13(a)(3)(A), which is the provision that authorizes the CYCLE Routing option...

  5. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose an informative DOM node extraction method from a Web page for preprocessing of Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser, and the learning set consists of hundreds of Web pages and the annotations of informative DOM nodes of those Web pages. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information of the learning set more efficiently in comparison to the existing method that uses the same learning set. In experiments, we evaluate methods obtained by combining an informative DOM node extraction method (either the proposed method or an existing method) with the existing noise elimination methods: Heur, which removes advertisements and link-lists by heuristics, and CE, which removes the DOM nodes that also appear in other Web pages of the Web site to which the target Web page belongs. Experimental results show that 1) LM outperforms other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms other combination methods.

  6. Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing

    PubMed Central

    Yan, Leyang; Zhang, Hui; Ye, Peiqing

    2017-01-01

    Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method. PMID:28383505
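
    A bare-bones EKF predict/update step in the spirit of the method above is sketched below. The state (fundamental amplitude A, third-harmonic ratio r, electrical angle theta, angular rate w) and the quadrature measurement model are assumptions chosen for illustration and may differ from the paper's exact formulation; once the filter converges, the harmonic term A*r*sin(3*theta) can be computed and removed from the raw Hall signal.

      import numpy as np

      def ekf_step(x, P, z, dt, Q, R):
          # Assumed measurement model: z1 = A*(sin th + r*sin 3th), z2 = A*(cos th + r*cos 3th).
          A, r, th, w = x
          # Predict: constant amplitude/ratio/rate, angle advances by w*dt.
          F = np.eye(4); F[2, 3] = dt
          x_pred = np.array([A, r, th + w * dt, w])
          P = F @ P @ F.T + Q
          A, r, th, w = x_pred
          s1, c1, s3, c3 = np.sin(th), np.cos(th), np.sin(3 * th), np.cos(3 * th)
          h = np.array([A * (s1 + r * s3), A * (c1 + r * c3)])
          H = np.array([[s1 + r * s3, A * s3,  A * (c1 + 3 * r * c3), 0.0],
                        [c1 + r * c3, A * c3, -A * (s1 + 3 * r * s3), 0.0]])
          # Update with the two Hall measurements z = [z1, z2].
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - h)
          P_new = (np.eye(4) - K @ H) @ P
          return x_new, P_new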

  7. Effective quadrature formula in solving linear integro-differential equations of order two

    NASA Astrophysics Data System (ADS)

    Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.

    2017-08-01

    In this note, we approximately solve the general form of Fredholm-Volterra integro-differential equations (IDEs) of order 2 with boundary conditions and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and the identity between multiple and single integrals; then truncated Legendre series are used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and the Gaussian elimination method is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and outperforms the others in many cases. The general theory of existence of the solution is also discussed.
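
    The quadrature-plus-elimination step can be illustrated on a simpler textbook case: a Fredholm integral equation of the second kind discretized at the Gauss-Legendre nodes (the Legendre roots) and solved by Gaussian elimination through np.linalg.solve. The kernel and right-hand side below are a standard test problem with exact solution u(x) = x, not the integro-differential equation treated in the paper.

      import numpy as np

      def nystrom_solve(kernel, f, n_nodes=16):
          # Solve u(x) = f(x) + int_{-1}^{1} K(x, t) u(t) dt by collocation at the
          # Gauss-Legendre nodes and Gaussian elimination of the resulting linear system.
          t, w = np.polynomial.legendre.leggauss(n_nodes)
          A = np.eye(n_nodes) - kernel(t[:, None], t[None, :]) * w[None, :]
          return t, np.linalg.solve(A, f(t))

      # K(x, t) = x*t and f(x) = x/3 give the exact solution u(x) = x.
      t, u = nystrom_solve(lambda x, tt: x * tt, lambda x: x / 3.0)
      print(np.max(np.abs(u - t)))   # close to machine precision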

  8. Improvement of Automated Identification of the Heart Wall in Echocardiography by Suppressing Clutter Component

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2013-07-01

    For the facilitation of analysis and elimination of the operator dependence in estimating the myocardial function in echocardiography, we have previously developed a method for automated identification of the heart wall. However, there are misclassified regions because the magnitude-squared coherence (MSC) function of echo signals, which is one of the features in the previous method, is sensitively affected by the clutter components such as multiple reflection and off-axis echo from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we proposed a method to suppress the effect of the clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using our proposed method in the longitudinal axis view of the heart.

  9. Fault diagnosis of motor bearing with speed fluctuation via angular resampling of transient sound signals

    NASA Astrophysics Data System (ADS)

    Lu, Siliang; Wang, Xiaoxian; He, Qingbo; Liu, Fang; Liu, Yongbin

    2016-12-01

    Transient signal analysis (TSA) has been proven an effective tool for motor bearing fault diagnosis, but has yet to be applied in processing bearing fault signals with variable rotating speed. In this study, a new TSA-based angular resampling (TSAAR) method is proposed for fault diagnosis under speed fluctuation condition via sound signal analysis. By applying the TSAAR method, the frequency smearing phenomenon is eliminated and the fault characteristic frequency is exposed in the envelope spectrum for bearing fault recognition. The TSAAR method can accurately estimate the phase information of the fault-induced impulses using neither complicated time-frequency analysis techniques nor external speed sensors, and hence it provides a simple, flexible, and data-driven approach that realizes variable-speed motor bearing fault diagnosis. The effectiveness and efficiency of the proposed TSAAR method are verified through a series of simulated and experimental case studies.
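
    The core of any angular-resampling scheme is interpolation of the time-domain signal at equal shaft-angle increments once the instantaneous phase is known. The generic computed-order-tracking sketch below illustrates that step; it is not the paper's TSAAR algorithm, and it assumes the instantaneous rotating frequency is already available at every sample.

      import numpy as np

      def angular_resample(signal, fs, inst_freq_hz, samples_per_rev=64):
          # Integrate the instantaneous frequency to get the shaft phase, then
          # resample the signal at uniform phase increments to remove frequency smearing.
          phase = 2.0 * np.pi * np.cumsum(inst_freq_hz) / fs
          n_rev = phase[-1] / (2.0 * np.pi)
          uniform_phase = np.linspace(0.0, phase[-1], int(n_rev * samples_per_rev))
          return np.interp(uniform_phase, phase, signal)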

  10. Small target detection based on difference accumulation and Gaussian curvature under complex conditions

    NASA Astrophysics Data System (ADS)

    Zhang, He; Niu, Yanxiong; Zhang, Hao

    2017-12-01

    Small target detection is a significant subject in infrared search and track and other photoelectric imaging systems. The small target is imaged under complex conditions, which involve clouds, the horizon and bright regions. In this paper, a novel small target detection method is proposed based on difference accumulation, clustering and Gaussian curvature. Difference accumulation varies across regions. Therefore, after obtaining difference accumulations, clustering is applied to determine whether a pixel belongs to the heterogeneous region, and the heterogeneous region is eliminated. Then Gaussian curvature is used to separate the target from the homogeneous region. Experiments are conducted for verification, along with comparisons to several other methods. The experimental results demonstrate that our method has an advantage of 1-2 orders of magnitude in SCRG and BSF over the others. Given that the false alarm rate is 1, the detection probability can be approximately 0.9 using the proposed method.

  11. Epileptic Seizure Detection Based on Time-Frequency Images of EEG Signals using Gaussian Mixture Model and Gray Level Co-Occurrence Matrix Features.

    PubMed

    Li, Yang; Cui, Weigang; Luo, Meilin; Li, Ke; Wang, Lina

    2018-01-25

    The electroencephalogram (EEG) signal analysis is a valuable tool in the evaluation of neurological disorders, which is commonly used for the diagnosis of epileptic seizures. This paper presents a novel automatic EEG signal classification method for epileptic seizure detection. The proposed method first employs a continuous wavelet transform (CWT) method for obtaining the time-frequency images (TFI) of EEG signals. The processed EEG signals are then decomposed into five sub-band frequency components of clinical interest since these sub-band frequency components indicate much better discriminative characteristics. Both Gaussian Mixture Model (GMM) features and Gray Level Co-occurrence Matrix (GLCM) descriptors are then extracted from these sub-band TFI. Additionally, in order to improve classification accuracy, a compact feature selection method by combining the ReliefF and the support vector machine-based recursive feature elimination (RFE-SVM) algorithm is adopted to select the most discriminative feature subset, which is an input to the SVM with the radial basis function (RBF) for classifying epileptic seizure EEG signals. The experimental results from a publicly available benchmark database demonstrate that the proposed approach provides better classification accuracy than the recently proposed methods in the literature, indicating the effectiveness of the proposed method in the detection of epileptic seizures.
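
    The feature-selection and classification stage can be sketched with scikit-learn: recursive feature elimination driven by a linear SVM (RFE needs a coefficient-based estimator) followed by an RBF-kernel SVM on the selected subset. Random arrays stand in for the GMM/GLCM features and labels, and the ReliefF pre-ranking step of the paper is omitted, so this is only an illustrative approximation of the pipeline.

      import numpy as np
      from sklearn.feature_selection import RFE
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(200, 40)), rng.integers(0, 2, size=200)   # placeholder features/labels

      selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10, step=1)
      clf = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(X, y)
      print(clf.score(X, y))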

  12. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features to assess the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted for different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality and consistency. The proposed system extracts basic features, and generates next level features which are applicable for various types of capture sensors. The system then uses the Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method can perform better than previous methods in terms of accuracy. In the meanwhile, the proposed method has an ability to eliminate residue images from the optical and capacitive sensors, and the coarse images from thermal sensors. PMID:22163632

  13. The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition.

    PubMed

    Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo

    2016-09-01

    The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), the ICA-based MEMD method was proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing the ICA of EOG-related signals, the EOG-linked independent components were distinguished and rejected. Finally, the clean EEG signals were reconstructed by implementing the inverse transform of ICA and MEMD. The results of simulated and real data suggested that the proposed method could successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. By comparing with other existing techniques, the proposed method achieved much improvement in terms of the increase in signal-to-noise ratio and the decrease in mean square error after removing EOAs.
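
    The ICA half of the procedure can be sketched with scikit-learn's FastICA: decompose the multichannel EEG, zero the independent components that correlate strongly with an EOG reference channel, and reconstruct. The MEMD pre-decomposition is omitted here, and selecting components by correlation with a reference channel is an assumption of this sketch rather than the paper's criterion.

      import numpy as np
      from sklearn.decomposition import FastICA

      def remove_eog_components(eeg, eog_reference, corr_threshold=0.7):
          # eeg: array of shape (n_samples, n_channels); eog_reference: (n_samples,).
          ica = FastICA(n_components=eeg.shape[1], random_state=0, max_iter=1000)
          sources = ica.fit_transform(eeg)
          for k in range(sources.shape[1]):
              r = np.corrcoef(sources[:, k], eog_reference)[0, 1]
              if abs(r) > corr_threshold:
                  sources[:, k] = 0.0          # reject the EOG-linked component
          return ica.inverse_transform(sources)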

  14. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
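
    The select-then-visualize structure of FSS-t-SNE can be sketched in a few lines with scikit-learn; the ANOVA F-score used here is only a stand-in for the paper's feature subset score criterion, so the snippet shows the workflow rather than the exact scoring rule.

      import numpy as np
      from sklearn.feature_selection import f_classif
      from sklearn.manifold import TSNE

      def fss_tsne(X, y, n_keep=10, perplexity=30.0):
          # Score features against the class labels, keep the top subset, then embed in 2-D.
          scores, _ = f_classif(X, y)
          keep = np.argsort(scores)[-n_keep:]
          embedding = TSNE(n_components=2, perplexity=perplexity, random_state=0)
          return embedding.fit_transform(X[:, keep]), keep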

  15. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  16. A Coarse Alignment Method Based on Digital Filters and Reconstructed Observation Vectors

    PubMed Central

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Wang, Zhicheng

    2017-01-01

    In this paper, a coarse alignment method based on apparent gravitational motion is proposed. Due to the interference of the complex situations, the true observation vectors, which are calculated by the apparent gravity, are contaminated. The sources of the interference are analyzed in detail, and then a low-pass digital filter is designed in this paper for eliminating the high-frequency noise of the measurement observation vectors. To extract the effective observation vectors from the inertial sensors’ outputs, a parameter recognition and vector reconstruction method are designed, where an adaptive Kalman filter is employed to estimate the unknown parameters. Furthermore, a robust filter, which is based on Huber’s M-estimation theory, is developed for addressing the outliers of the measurement observation vectors due to the maneuver of the vehicle. A comprehensive experiment, which contains a simulation test and physical test, is designed to verify the performance of the proposed method, and the results show that the proposed method is equivalent to the popular apparent velocity method in swaying mode, but it is superior to the current methods while in moving mode when the strapdown inertial navigation system (SINS) is under entirely self-contained conditions. PMID:28353682

  17. Analysis and control of the dynamical response of a higher order drifting oscillator

    PubMed Central

    Páez Chávez, Joseph; Pavlovskaia, Ekaterina; Wiercigroch, Marian

    2018-01-01

    This paper studies a position feedback control strategy for controlling a higher order drifting oscillator which could be used in modelling vibro-impact drilling. Special attention is given to two control issues, eliminating bistability and suppressing chaos, which may cause inefficient and unstable drilling. Numerical continuation methods implemented via the continuation platform COCO are adopted to investigate the dynamical response of the system. Our analyses show that the proposed controller is capable of eliminating coexisting attractors and mitigating chaotic behaviour of the system, providing that its feedback control gain is chosen properly. Our investigations also reveal that, when the slider’s property modelling the drilled formation changes, the rate of penetration for the controlled drilling can be significantly improved. PMID:29507508

  18. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  19. Analysis and control of the dynamical response of a higher order drifting oscillator

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Páez Chávez, Joseph; Pavlovskaia, Ekaterina; Wiercigroch, Marian

    2018-02-01

    This paper studies a position feedback control strategy for controlling a higher order drifting oscillator which could be used in modelling vibro-impact drilling. Special attention is given to two control issues, eliminating bistability and suppressing chaos, which may cause inefficient and unstable drilling. Numerical continuation methods implemented via the continuation platform COCO are adopted to investigate the dynamical response of the system. Our analyses show that the proposed controller is capable of eliminating coexisting attractors and mitigating chaotic behaviour of the system, providing that its feedback control gain is chosen properly. Our investigations also reveal that, when the slider's property modelling the drilled formation changes, the rate of penetration for the controlled drilling can be significantly improved.

  20. [New International Classification of Chronic Pancreatitis (M-ANNHEIM multifactor classification system, 2007): principles, merits, and demerits].

    PubMed

    Tsimmerman, Ia S

    2008-01-01

    The new International Classification of Chronic Pancreatitis (designated as M-ANNHEIM) proposed by a group of German specialists in late 2007 is reviewed. All its sections are subjected to analysis (risk group categories, clinical stages and phases, variants of clinical course, diagnostic criteria for "established" and "suspected" pancreatitis, instrumental methods and functional tests used in the diagnosis, evaluation of the severity of the disease using a scoring system, stages of elimination of pain syndrome). The new classification is compared with the earlier classification proposed by the author. Its merits and demerits are discussed.

  1. Blind motion image deblurring using nonconvex higher-order total variation model

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo

    2016-09-01

    We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of the blind motion image deblurring, which can effectively eliminate the staircase effect of the deblurred image; meanwhile, we employ an image sparse prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use L1 norm and H1 norm as the blur kernel regularization term, considering the sparsity and smoothing of the motion blur kernel. Third, because it is difficult to solve the numerically computational complexity problem of the proposed model owing to the intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximating scheme in the outer iteration, and a split Bregman algorithm in the inner iteration. And we also discuss the convergence of the proposed binary iterative strategy. Last, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms the previous representative methods in both quality of visual perception and quantitative measurement.

  2. A New Proposal to Redefine Kilogram by Measuring the Planck Constant Based on Inertial Mass

    NASA Astrophysics Data System (ADS)

    Liu, Yongmeng; Wang, Dawei

    2018-04-01

    A novel method to measure the Planck constant based on inertial mass is proposed here, in contrast to the conventional Kibble balance experiment, which is based on gravitational mass. The kilogram unit is linked to the Planck constant by calculating the difference of the parameters, i.e., resistance, voltage, velocity and time, measured in a two-mode experiment: an unloaded-mass mode and a loaded-mass mode. In principle, all parameters measured in this experiment can reach a high accuracy, as in the Kibble balance experiment. This method has the advantage that some systematic errors can be eliminated in the difference calculation of the measurements. In addition, this method is insensitive to air buoyancy, and the alignment work in this experiment is easy. Finally, the initial design of the apparatus is presented.

  3. Superpixel-based segmentation of glottal area from videolaryngoscopy images

    NASA Astrophysics Data System (ADS)

    Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail

    2017-11-01

    Segmentation of the glottal area with high accuracy is one of the major challenges for the development of systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local variances of pixels caused by bleedings, blood vessels, and light reflections from mucosa. Then, the glottal area was detected by exploiting a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained from both patients with pathologic vocal folds and healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and dice coefficients of the hybrid method were calculated as 82%, 93%, and 82%, respectively, which are superior to those of state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for developing a computer-aided diagnosis system that can be used in clinical routines.

  4. Quantifying noise in optical tweezers by allan variance.

    PubMed

    Czerwinski, Fabian; Richardson, Andrew C; Oddershede, Lene B

    2009-07-20

    Much effort is put into minimizing noise in optical tweezers experiments because noise and drift can mask fundamental behaviours of, e.g., single molecule assays. Various initiatives have been taken to reduce or eliminate noise, but it has been difficult to quantify their effect. We propose to use Allan variance as a simple and efficient method to quantify noise in optical tweezers setups. We apply the method to determine the optimal measurement time, frequency, and detection scheme, and quantify the effect of acoustic noise in the lab. The method can also be used on-the-fly for determining optimal parameters of running experiments.
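
    Allan variance itself takes only a few lines to compute for a recorded position trace, which is part of its appeal; the sketch below implements the standard non-overlapping estimator for a time series sampled at a fixed rate.

      import numpy as np

      def allan_variance(x, fs, m_list=None):
          # Returns averaging times tau and the non-overlapping Allan variance of x (sampled at fs Hz).
          n = len(x)
          if m_list is None:
              m_list = np.unique(np.logspace(0, np.log10(n // 3), 30).astype(int))
          taus, avars = [], []
          for m in m_list:
              blocks = x[: (n // m) * m].reshape(-1, m).mean(axis=1)   # consecutive block means
              if len(blocks) < 2:
                  continue
              taus.append(m / fs)
              avars.append(0.5 * np.mean(np.diff(blocks) ** 2))
          return np.array(taus), np.array(avars)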

  5. [Treatment of metaphyseal fractures of shin bones by the method of blocking osteosynthesis].

    PubMed

    Neverov, V A; Khromov, A A; Cherniaev, S N; Egorov, K S; Shebarshov, A L

    2008-01-01

    The proposed method of reposition and polyaxial stabilization of fragments for intramedullary osteosynthesis of fractures of long tubular bones allows blocking osteosynthesis to be used successfully in the treatment of complex metaphyseal fractures of shin bones. It results in strong fixation of the fragments, makes it possible to eliminate residual deformities after introduction of the nail, and prevents their development in the future under loading. The method provides early functioning of the adjacent joints, early axial loading, a shorter period of disability, and the absence of external immobilization.

  6. Partial and total actuator faults accommodation for input-affine nonlinear process plants.

    PubMed

    Mihankhah, Amin; Salmasi, Farzad R; Salahshoor, Karim

    2013-05-01

    In this paper, a new fault-tolerant control system is proposed for input-affine nonlinear plants based on the Model Reference Adaptive System (MRAS) structure. The proposed method has the capability to accommodate both partial and total actuator failures along with bounded external disturbances. In this methodology, the conventional MRAS control law is modified by augmenting two compensating terms. One of these terms is added to eliminate the nonlinear dynamics, while the other compensates for the adverse effects of total actuator faults and external disturbances. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed method. Moreover, the control structure has good robustness against parameter variations. The performance of this scheme is evaluated using a CSTR system and the results were satisfactory. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Design of an adaptive super-twisting decoupled terminal sliding mode control scheme for a class of fourth-order systems.

    PubMed

    Ashtiani Haghighi, Donya; Mobayen, Saleh

    2018-04-01

    This paper proposes an adaptive super-twisting decoupled terminal sliding mode control technique for a class of fourth-order systems. The adaptive-tuning law eliminates the requirement of the knowledge about the upper bounds of external perturbations. Using the proposed control procedure, the state variables of cart-pole system are converged to decoupled terminal sliding surfaces and their equilibrium points in the finite time. Moreover, via the super-twisting algorithm, the chattering phenomenon is avoided without affecting the control performance. The numerical results demonstrate the high stabilization accuracy and lower performance indices values of the suggested method over the other ones. The simulation results on the cart-pole system as well as experimental validations demonstrate that the proposed control technique exhibits a reasonable performance in comparison with the other methods. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
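
    The underlying super-twisting law is simple to state and simulate. The sketch below drives a scalar sliding variable with a bounded matched disturbance to a neighborhood of zero using the generic algorithm; it is not the paper's full adaptive decoupled terminal controller for the cart-pole system, and the gains and disturbance are illustrative.

      import numpy as np

      def simulate_super_twisting(k1=4.0, k2=2.0, dt=1e-3, t_end=5.0):
          # Super-twisting: u = -k1*sqrt(|s|)*sign(s) + v, dv/dt = -k2*sign(s), ds/dt = u + d(t).
          n = int(t_end / dt)
          s, v = 1.0, 0.0
          hist = np.empty(n)
          for i in range(n):
              d = 0.3 * np.sin(np.pi * i * dt)              # bounded disturbance with |d'| < k2
              u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
              v += -k2 * np.sign(s) * dt
              s += (u + d) * dt
              hist[i] = s
          return hist

      print(abs(simulate_super_twisting()[-1]))   # |s| settles close to zero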

  8. Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts

    PubMed Central

    Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.

    2013-01-01

    To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080

  9. 78 FR 21650 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-11

    ... NYSE Group, Inc., Thereby Eliminating NYSE Arca Holdings, Inc. From the Ownership Structure of the... and with NYSE Group, Inc. (``NYSE Group''), thereby eliminating NYSE Arca Holdings from the ownership... rule changes.\\5\\ According to the Exchange, the reason for the Merger is to eliminate an unnecessary...

  10. 76 FR 408 - Self-Regulatory Organizations; Fixed Income Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-04

    ... Eliminate Certain Cash Adjustments Currently Processed by the MBSD December 28, 2010. I. Introduction On..., 2010), 75 FR 70328. II. Description FICC is eliminating the cash adjustments that are currently... from an initial $50,000 per million to the current amount of $100 per million. MBSD is eliminating this...

  11. Modal testing with Asher's method using a Fourier analyzer and curve fitting

    NASA Technical Reports Server (NTRS)

    Gold, R. R.; Hallauer, W. L., Jr.

    1979-01-01

    An unusual application of the method proposed by Asher (1958) for structural dynamic and modal testing is discussed. Asher's method has the capability, using the admittance matrix and multiple-shaker sinusoidal excitation, of separating structural modes having indefinitely close natural frequencies. The present application uses Asher's method in conjunction with a modern Fourier analyzer system but eliminates the necessity of exciting the test structure simultaneously with several shakers. Evaluation of this approach with numerically simulated data demonstrated its effectiveness; the parameters of two modes having almost identical natural frequencies were accurately identified. Laboratory evaluation of this approach was inconclusive because of poor experimental input data.

  12. International Roughness Index (IRI) measurement using Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Zhang, Wenjin; Wang, Ming L.

    2018-03-01

    International Roughness Index (IRI) is an important metric for measuring the condition of roadways. This index is usually used to justify the maintenance priority and scheduling for roadways. Various inspection methods and algorithms are used to assess this index through the use of road profiles. This study proposes to calculate IRI values using the Hilbert-Huang Transform (HHT) algorithm. In particular, road profile data are provided by a surface radar attached to a vehicle driving at highway speed. The Hilbert-Huang transform is used in this study because of its superior properties for nonstationary and nonlinear data. Empirical mode decomposition (EMD) processes the raw data into a set of intrinsic mode functions (IMFs), representing various dominating frequencies. These frequencies represent noises from the body of the vehicle, the sensor location, the excitation induced by the natural frequency of the vehicle, etc. IRI calculation can be achieved by eliminating noises that are not associated with the road profile, including the vehicle inertia effect. The resulting IRI values compare favorably to the field IRI values; the filtered IMFs capture the main characteristics of the road profile while eliminating noises from the vehicle and the vehicle inertia effect. Therefore, HHT is an effective method for road profile analysis and for IRI measurement. Furthermore, the application of the HHT method has the potential to eliminate the use of accelerometers attached to the vehicle as part of the displacement measurement used to offset the inertia effect.
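
    A minimal sketch of the IMF-selection idea, assuming the third-party PyEMD package (published on PyPI as EMD-signal): decompose the measured profile into IMFs and rebuild it from a band of mid-frequency IMFs, discarding the highest-frequency IMFs (sensor and vehicle-body noise) and the low-frequency residue (trend and inertia effect). The retained index range is purely illustrative; in practice it would be chosen from the IMF spectra.

      import numpy as np
      from PyEMD import EMD   # third-party package, assumed available

      def profile_from_imfs(raw_profile, keep=slice(2, 6)):
          # Empirical mode decomposition, then reconstruction from the selected IMFs only.
          imfs = EMD().emd(np.asarray(raw_profile, dtype=float))
          return imfs[keep].sum(axis=0)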

  13. Toward a Theory of Assurance Case Confidence

    DTIC Science & Technology

    2012-09-01

    The framework is based on the notion of eliminative induction—the principle, first put forward by Francis Bacon, that confidence in the truth of a hypothesis (or claim) increases as reasons for doubting it are eliminated. As first proposed by Bacon [Schum 2001] and extended by L. Jonathan Cohen [Cohen 1970, 1977, 1989], eliminative induction provides the basis for assessing confidence in an assurance case claim.

  14. A Method for Automatic Extracting Intracranial Region in MR Brain Image

    NASA Astrophysics Data System (ADS)

    Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro

    It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia. However, it is difficult to use only the temporal lobe region for this purpose. From the standpoint of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from the MR brain image. The method is able to eliminate the cranium region with the Laplacian histogram method and the brainstem with feature points related to observations given by a medical specialist. In order to examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, the percentage of the temporal lobe in the intracranial region over the course of disease progression was in agreement with the visual standards of temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method was suitable for estimating the grade of Alzheimer-type dementia.

  15. A Tensor-Based Subspace Approach for Bistatic MIMO Radar in Spatial Colored Noise

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang

    2014-01-01

    In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method. PMID:24573313

  16. A Temperature-Based Bioimpedance Correction for Water Loss Estimation During Sports.

    PubMed

    Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Mester, Joachim; Eskofier, Bjoern M

    2016-11-01

    The amount of total body water (TBW) can be estimated based on bioimpedance measurements of the human body. In sports, TBW estimations are of importance because mild water losses can impair muscular strength and aerobic endurance. Severe water losses can even be life threatening. TBW estimations based on bioimpedance, however, fail during sports because the increased body temperature corrupts bioimpedance measurements. Therefore, this paper proposes a machine learning method that eliminates the effects of increased temperature on bioimpedance and, consequently, reveals the changes in bioimpedance that are due to TBW loss. This is facilitated by utilizing changes in skin and core temperature. The method was evaluated in a study in which bioimpedance, temperature, and TBW loss were recorded every 15 min during a 2-h running workout. The evaluation demonstrated that the proposed method is able to reduce the error of TBW loss estimation by up to 71%, compared to the state of the art. In the future, the proposed method in combination with portable bioimpedance devices might facilitate the development of wearable systems for continuous and noninvasive TBW loss monitoring during sports.

  17. A tensor-based subspace approach for bistatic MIMO radar in spatial colored noise.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Wang, Junxiang

    2014-02-25

    In this paper, a new tensor-based subspace approach is proposed to estimate the direction of departure (DOD) and the direction of arrival (DOA) for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise. Firstly, the received signals can be packed into a third-order measurement tensor by exploiting the inherent structure of the matched filter. Then, the measurement tensor can be divided into two sub-tensors, and a cross-covariance tensor is formulated to eliminate the spatial colored noise. Finally, the signal subspace is constructed by utilizing the higher-order singular value decomposition (HOSVD) of the cross-covariance tensor, and the DOD and DOA can be obtained through the estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, which are paired automatically. Since the multidimensional inherent structure and the cross-covariance tensor technique are used, the proposed method provides better angle estimation performance than Chen's method, the ESPRIT algorithm and the multi-SVD method. Simulation results confirm the effectiveness and the advantage of the proposed method.

  18. Transmitted wavefront testing with large dynamic range based on computer-aided deflectometry

    NASA Astrophysics Data System (ADS)

    Wang, Daodang; Xu, Ping; Gong, Zhidong; Xie, Zhongmin; Liang, Rongguang; Xu, Xinke; Kong, Ming; Zhao, Jun

    2018-06-01

    The transmitted wavefront testing technique is required for the performance evaluation of transmission optics and transparent glass, in which the achievable dynamic range is a key issue. A computer-aided deflectometric testing method with fringe projection is proposed for the accurate testing of transmitted wavefronts with a large dynamic range. Ray tracing of the modeled testing system is carried out to achieve the virtual ‘null’ testing of transmitted wavefront aberrations. The ray aberration is obtained from the ray tracing result and measured slope, with which the test wavefront aberration can be reconstructed. To eliminate testing system modeling errors, a system geometry calibration based on computer-aided reverse optimization is applied to realize accurate testing. Both numerical simulation and experiments have been carried out to demonstrate the feasibility and high accuracy of the proposed testing method. The proposed testing method can achieve a large dynamic range compared with the interferometric method, providing a simple, low-cost and accurate way for the testing of transmitted wavefronts from various kinds of optics and a large number of industrial transmission elements.

  19. A Combined Methodology to Eliminate Artifacts in Multichannel Electrogastrogram Based on Independent Component Analysis and Ensemble Empirical Mode Decomposition.

    PubMed

    Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K

    2018-06-01

    Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.

  20. Recursive feature selection with significant variables of support vectors.

    PubMed

    Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh

    2012-01-01

    The development of DNA microarrays allows researchers to screen thousands of genes simultaneously and helps determine high- and low-expression level genes in normal and disease tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold to select genes. However, the parameter setting may not be compatible with the selected classification algorithms. In this paper, we propose a new gene selection method (SVM-t) based on the use of t-statistics embedded in a support vector machine. We compared the performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM and capable of attaining good classification performance when the variations of informative and noninformative genes are different. In the analysis of two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
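
    For reference, the SVM-RFE baseline that the authors benchmark against can be reproduced with scikit-learn's recursive feature elimination wrapped around a linear SVM; the t-statistic embedding of the proposed SVM-t method is not part of this sketch, and the toy data below stand in for a real microarray matrix.

    ```python
    # Sketch of the SVM-RFE baseline (not the proposed SVM-t method) using scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    # Toy stand-in for a microarray matrix: 60 samples x 500 "genes".
    X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                               random_state=0)

    svm = SVC(kernel="linear")                      # linear kernel exposes feature weights
    rfe = RFE(estimator=svm, n_features_to_select=20, step=0.1)
    rfe.fit(X, y)

    selected_genes = rfe.get_support(indices=True)  # indices of the retained features
    print(selected_genes)
    ```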

  1. Nuclear energy waste-space transportation and removal

    NASA Technical Reports Server (NTRS)

    Burns, R. E.

    1975-01-01

    A method for utilizing the decay heat of actinide wastes to power an electric thrust vehicle is proposed. The vehicle, launched by shuttle to earth orbit and to earth escape by a tug, obtains electrical power from the actinide waste heat by thermionic converters. The heavy gamma ray and neutron shielding which is necessary as a safety feature is removed in orbit and returned to earth for reuse. The problems associated with safety are dealt with in depth. A method for eliminating fission wastes via chemical propulsion is briefly discussed.

  2. Reasoning with case histories of process knowledge for efficient process development

    NASA Technical Reports Server (NTRS)

    Bharwani, Seraj S.; Walls, Joe T.; Jackson, Michael E.

    1988-01-01

    The significance of compiling case histories of empirical process knowledge and the role of such histories in improving the efficiency of manufacturing process development is discussed in this paper. Methods of representing important investigations as cases and using the information from such cases to eliminate redundancy of empirical investigations in analogous process development situations are also discussed. A system is proposed that uses such methods to capture the problem-solving framework of the application domain. A conceptual design of the system is presented and discussed.

  3. The epidemiological characteristics of measles and difficulties of measles elimination in Hang Zhou, China

    PubMed Central

    Liu, Shijun; Xu, Erping; Zhang, Xiaoping; Liu, Yan; Du, Jian; Wang, Jun; Che, Xinren; Gu, Wenwen

    2013-01-01

    Objective: Following the national proclamation of measles elimination by 2012, many activities to control incidence were carried out in Hangzhou. However, the incidence did not decrease to a low level and a gap to the elimination target remained. The present study aimed to describe the epidemiological characteristics of measles and to propose reasonable methods for reaching the target in Hangzhou.  Method: Cases were collected through the National Notifiable Diseases Surveillance System (NNDSS) from 2004 to 2011. Descriptive epidemiology was employed to analyze the characteristics of measles.  Results: A total of 4712 confirmed cases were recorded in the NNDSS, with an average incidence rate of 7.87 per 100,000 people from 2004 to 2011. Individuals living in urban districts had a higher risk of measles than those in counties. Infants aged <1 year showed the highest incidence rate, at 239.35/100,000, and the age-specific incidence rate declined with age group but rose again in adults. 52.20% of cases occurred in the floating population, whose measles vaccination status differed significantly from that of local cases (χ2 = 51.65, p < 0.001). February to June was the epidemic period, with 81.88% of cases reported in clusters.  Conclusion: The descriptive characteristics of measles suggest that infant and adult individuals, the floating population, and urban residence may be related to the gap to the elimination target. More effort is needed to ensure that the susceptible population receives qualified measles vaccination.  PMID:23732896

  4. Automatic correction of echo-planar imaging (EPI) ghosting artifacts in real-time interactive cardiac MRI using sensitivity encoding.

    PubMed

    Kim, Yoon-Chul; Nielsen, Jon-Fredrik; Nayak, Krishna S

    2008-01-01

    To develop a method that automatically corrects ghosting artifacts due to echo-misalignment in interleaved gradient-echo echo-planar imaging (EPI) in arbitrary oblique or double-oblique scan planes. An automatic ghosting correction technique was developed based on an alternating EPI acquisition and the phased-array ghost elimination (PAGE) reconstruction method. The direction of k-space traversal is alternated at every temporal frame, enabling lower temporal-resolution ghost-free coil sensitivity maps to be dynamically estimated. The proposed method was compared with conventional one-dimensional (1D) phase correction in axial, oblique, and double-oblique scan planes in phantom and cardiac in vivo studies. The proposed method was also used in conjunction with two-fold acceleration. The proposed method with nonaccelerated acquisition provided excellent suppression of ghosting artifacts in all scan planes, and was substantially more effective than conventional 1D phase correction in oblique and double-oblique scan planes. The feasibility of real-time reconstruction using the proposed technique was demonstrated in a scan protocol with 3.1-mm spatial and 60-msec temporal resolution. The proposed technique with nonaccelerated acquisition provides excellent ghost suppression in arbitrary scan orientations without a calibration scan, and can be useful for real-time interactive imaging, in which scan planes are frequently changed with arbitrary oblique orientations.

  5. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving far fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point position coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed.

  6. A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples.

    PubMed

    Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun

    2011-07-07

    In the present work, a baseline-correction method based on peak-to-derivative baseline measurement was proposed for the elimination of complex matrix interference that was mainly caused by unknown components and/or background in the analysis of derivative spectra. This novel method is particularly applicable when the matrix interfering components show a broad spectral band, which is common in practical analysis. The derivative baseline was established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method was demonstrated through both theoretical simulation and practical application. Firstly, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of the interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method was remarkably better than other conventional methods such as peak-to-zero, tangent, and peak-to-peak measurements. Then the proposed baseline-correction method was applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. Satisfactory results were obtained by using this new method to analyze a certified reference material (coconut oil, BCR(®)-458) with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.
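
    A rough numerical sketch of the peak-to-derivative-baseline idea follows, assuming second-derivative spectra computed with a Savitzky-Golay filter; the choice of the two crossing points that define the derivative baseline is only illustrative (the indices i1 and i2 are hypothetical inputs, not the paper's automatic procedure).

    ```python
    # Sketch: second-derivative spectrum and a straight "derivative baseline"
    # drawn between two reference points (hypothetical indices i1, i2) obtained
    # from the standard addition spectra. Window/order values are placeholders.
    import numpy as np
    from scipy.signal import savgol_filter

    def derivative_baseline_height(spectrum, peak_idx, i1, i2,
                                   window=15, polyorder=3):
        d2 = savgol_filter(spectrum, window, polyorder, deriv=2)
        # Linear baseline through the two crossing points of the SAM spectra.
        slope = (d2[i2] - d2[i1]) / (i2 - i1)
        baseline_at_peak = d2[i1] + slope * (peak_idx - i1)
        return d2[peak_idx] - baseline_at_peak   # peak-to-derivative-baseline signal
    ```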

  7. Personalized cumulative UV tracking on mobiles & wearables.

    PubMed

    Dey, S; Sahoo, S; Agrawal, H; Mondal, A; Bhowmik, T; Tiwari, V N

    2017-07-01

    Maintaining a balanced Ultra Violet (UV) exposure level is vital for healthy living, as an excess UV dose can lead to critical diseases such as skin cancer, while its absence can cause vitamin D deficiency, which has recently been linked to the onset of cardiac abnormalities. Here, we propose a personalized cumulative UV dose (CUVD) estimation system for smartwatch and smartphone devices with the following novel features: (a) sensor-orientation-invariant measurement of UV exposure using a bootstrap resampling technique, (b) estimation of UV exposure using only a light intensity (lux) sensor, and (c) optimal UV exposure dose estimation. The proposed method eliminates the need for a dedicated UV sensor, thus widening the user base of the solution, and renders it unobtrusive by removing the critical requirement of orienting the device toward the sun. The system is implemented on the Android mobile platform and validated on 1200 minutes of lux and UV index (UVI) data collected across several days covering morning to evening time frames. The results show good final UVI estimation accuracy. We believe the proposed solution will enable future wearable and smartphone users to obtain a seamless personalized UV exposure dose across the day, paving the way for simple yet useful recommendations, such as the right skin protective measures, to reduce the risk factors of long-term UV-exposure-related diseases like skin cancer and cardiac abnormality.

  8. Image Mosaic Method Based on SIFT Features of Line Segment

    PubMed Central

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and similar differences between the two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Secondly, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Experiments on four pairs of images show that our method is robust to differences in resolution, lighting, rotation, and scaling. PMID:24511326
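
    For orientation, the conventional point-based SIFT plus RANSAC mosaic pipeline (not the authors' directed-line-segment descriptor) can be sketched with OpenCV as below; the Lowe ratio and reprojection threshold are typical defaults, not values from the paper.

    ```python
    # Sketch of a conventional SIFT + RANSAC mosaic (point features, not the
    # line-segment descriptors proposed in the paper). Requires opencv-python >= 4.4.
    import cv2
    import numpy as np

    def stitch_pair(img1, img2):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)

        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC drops wrong pairs

        h, w = img2.shape[:2]
        return cv2.warpPerspective(img1, H, (w * 2, h))  # naive canvas size
    ```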

  9. Merge measuring mesh for complex surface parts

    NASA Astrophysics Data System (ADS)

    Ye, Jianhua; Gao, Chenghui; Zeng, Shoujin; Xu, Mingsan

    2018-04-01

    Because most parts self-occlude and scanner range is limited, it is difficult to scan an entire part in a single pass, so multiple measurement meshes must be merged to model the part. In this paper, a new merging method is presented. First, a grid voxelization method is used to eliminate most of the non-overlapping regions, and an overlap-triangle retrieval method based on mesh topology is proposed to improve efficiency. Then, to remove overlap triangles with large deviation, deletion based on overlap distance is discussed. Finally, the paper puts forward a new method for merging meshes by registering and combining mesh boundary points. Experimental analysis shows that the suggested methods are effective.

  10. A Sequential Linear Quadratic Approach for Constrained Nonlinear Optimal Control with Adaptive Time Discretization and Application to Higher Elevation Mars Landing Problem

    NASA Astrophysics Data System (ADS)

    Sandhu, Amit

    A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm builds on the approach proposed in [1] with the objective of eliminating the need for a large number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows the formation of a desirable control profile without utilizing many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher elevation Mars landing.

  11. Rapid measurement and compensation method of eccentricity in automatic profile measurement of the ICF capsule.

    PubMed

    Li, Shaobai; Wang, Yun; Wang, Qi; Ma, Xianxian; Wang, Longxiao; Zhao, Weiqian; Zhang, Xusheng

    2018-05-10

    In this paper, we propose a new measurement and compensation method for the eccentricity of the inertial confinement fusion (ICF) capsule, which combines computer vision and the laser differential confocal method to align the capsule in rotation measurement. This technique measures the eccentricity of the capsule by obtaining the sub-pixel profile with a moment-based algorithm, then performs the preliminary alignment by the two-dimensional adjustment. Next, we use the laser differential confocal sensor to measure the height data of the equatorial surface of the capsule as it rotates, and finally obtain and compensate for the remaining eccentricity. This method is a non-contact, automatic, rapid, high-precision measurement and compensation technique of eccentricity for the capsule. Theoretical analyses and preliminary experiments indicate that the maximum measurement range of eccentricity of this proposed method is 1.8 mm for a capsule with a diameter of 1 mm, and it can reduce the eccentricity to less than 0.5 μm within 30 s.

  12. A data-driven approach for denoising GNSS position time series

    NASA Astrophysics Data System (ADS)

    Li, Yanyan; Xu, Caijun; Yi, Lei; Fang, Rongxin

    2017-12-01

    Global navigation satellite system (GNSS) datasets suffer from common mode error (CME) and other unmodeled errors. To decrease the noise level in GNSS positioning, we propose a new data-driven adaptive multiscale denoising method in this paper. Both synthetic and real-world long-term GNSS datasets were employed to assess the performance of the proposed method, and its results were compared with those of stacking filtering, principal component analysis (PCA) and the recently developed multiscale multiway PCA. It is found that the proposed method can significantly eliminate the high-frequency white noise and remove the low-frequency CME. Furthermore, the proposed method is more precise for denoising GNSS signals than the other denoising methods. For example, in the real-world example, our method reduces the mean standard deviation of the north, east and vertical components from 1.54 to 0.26, 1.64 to 0.21 and 4.80 to 0.72 mm, respectively. Noise analysis indicates that for the original signals, a combination of power-law plus white noise model can be identified as the best noise model. For the filtered time series using our method, the generalized Gauss-Markov model is the best noise model with the spectral indices close to - 3, indicating that flicker walk noise can be identified. Moreover, the common mode error in the unfiltered time series is significantly reduced by the proposed method. After filtering with our method, a combination of power-law plus white noise model is the best noise model for the CMEs in the study region.
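
    The PCA benchmark mentioned in the comparison (stacking/PCA removal of the common mode error) can be sketched with plain NumPy as below; the proposed adaptive multiscale method itself is not reproduced here.

    ```python
    # Sketch of PCA-based common mode error (CME) removal, used here only as the
    # benchmark against which the proposed data-driven method is compared.
    import numpy as np

    def remove_cme_pca(residuals, n_modes=1):
        """residuals: (n_epochs, n_stations) detrended position residuals (mm)."""
        X = residuals - residuals.mean(axis=0)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        cme = U[:, :n_modes] @ np.diag(s[:n_modes]) @ Vt[:n_modes]  # leading mode(s)
        return X - cme, cme
    ```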

  13. A nonlinear cointegration approach with applications to structural health monitoring

    NASA Astrophysics Data System (ADS)

    Shi, H.; Worden, K.; Cross, E. J.

    2016-09-01

    One major obstacle to the implementation of structural health monitoring (SHM) is the effect of operational and environmental variabilities, which may corrupt the signal of structural degradation. Recently, an approach inspired from the community of econometrics, called cointegration, has been employed to eliminate the adverse influence from operational and environmental changes and still maintain sensitivity to structural damage. However, the linear nature of cointegration may limit its application when confronting nonlinear relations between system responses. This paper proposes a nonlinear cointegration method based on Gaussian process regression (GPR); the method is constructed under the Engle-Granger framework, and tests for unit root processes are conducted both before and after the GPR is applied. The proposed approach is examined with real engineering data from the monitoring of the Z24 Bridge.
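
    A hedged sketch of an Engle-Granger-style residual test with a GPR first stage is given below, using scikit-learn's GaussianProcessRegressor and the ADF unit-root test from statsmodels; the kernel choice and settings are assumptions, not the paper's configuration.

    ```python
    # Sketch: nonlinear "cointegration" residual in the Engle-Granger spirit.
    # Stage 1 regresses one response on another with a Gaussian process; stage 2
    # applies an ADF unit-root test to the residual. Kernel and settings are guesses.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from statsmodels.tsa.stattools import adfuller

    def gpr_cointegration_residual(x, y):
        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gpr.fit(x.reshape(-1, 1), y)
        residual = y - gpr.predict(x.reshape(-1, 1))
        adf_stat, p_value = adfuller(residual)[:2]  # stationary residual -> cointegrated
        return residual, adf_stat, p_value
    ```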

  14. Modal analysis of wave propagation in dispersive media

    NASA Astrophysics Data System (ADS)

    Abdelrahman, M. Ismail; Gralak, B.

    2018-01-01

    Surveys on wave propagation in dispersive media have been limited since the pioneering work of Sommerfeld [Ann. Phys. 349, 177 (1914), 10.1002/andp.19143491002] by the presence of branches in the integral expression of the wave function. In this article a method is proposed to eliminate these critical branches and hence to establish a modal expansion of the time-dependent wave function. The different components of the transient waves are physically interpreted as the contributions of distinct sets of modes and characterized accordingly. Then, the modal expansion is used to derive a modified analytical expression of the Sommerfeld precursor improving significantly the description of the amplitude and the oscillating period up to the arrival of the Brillouin precursor. The proposed method and results apply to all waves governed by the Helmholtz equations.

  15. Local facet approximation for image stitching

    NASA Astrophysics Data System (ADS)

    Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun

    2018-01-01

    Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.

  16. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet, the one that yields a segmentation with the largest area within the cell. We study different wavelet families and conclude that the wavelet db1 is the best choice, and it can serve for further work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
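
    A minimal Python sketch of the pipeline described above follows, assuming PyWavelets for the db1 decomposition and SciPy for the morphological clean-up; the threshold scale and structuring element are placeholders rather than values from the study.

    ```python
    # Sketch: wavelet (db1) shrinkage denoising followed by simple thresholding and
    # binary morphology. Threshold values and structure sizes are placeholders.
    import numpy as np
    import pywt
    from scipy import ndimage

    def segment_cells(image, sigma_scale=3.0, intensity_thresh=0.5):
        # Wavelet shrinkage with the db1 wavelet highlighted by the study.
        coeffs = pywt.wavedec2(image, "db1", level=2)
        approx, details = coeffs[0], coeffs[1:]
        denoised_details = [tuple(pywt.threshold(d, sigma_scale * np.std(d), mode="soft")
                                  for d in level) for level in details]
        denoised = pywt.waverec2([approx] + denoised_details, "db1")

        # Simple segmentation + morphological opening to remove small artifacts.
        mask = denoised > intensity_thresh * denoised.max()
        return ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    ```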

  17. Laser optical method of visualizing cutaneous blood vessels and its applications in biometry and photomedicine

    NASA Astrophysics Data System (ADS)

    Asimov, M. M.; Asimov, R. M.; Rubinov, A. N.

    2011-05-01

    We propose and examine a new approach to visualizing a local network of cutaneous blood vessels using laser optical methods for applications in biometry and photomedicine. Various optical schemes of the formation of biometrical information on the architecture of blood vessels of skin tissue are analyzed. We developed an optical model of the interaction of the laser radiation with the biological tissue and a mathematical algorithm of processing of measurement results. We show that, in medicine, the visualization of blood vessels makes it possible to calculate and determine regions of disturbance of blood microcirculation and to control tissue hypoxia, as well as to maintain the local concentration of oxygen at a level necessary for the normal cellular metabolism. We propose noninvasive optical methods for modern photomedicine and biometry for diagnostics and elimination of tissue hypoxia and for personal identification and verification via the pattern of cutaneous blood vessels.

  18. Robust Foregrounds Removal for 21-cm Experiments

    NASA Astrophysics Data System (ADS)

    Mertens, F.; Ghosh, A.; Koopmans, L. V. E.

    2018-05-01

    Direct detection of the Epoch of Reionization via the redshifted 21-cm line will have unprecedented implications for the study of structure formation in the early Universe. To fulfill this promise, current and future 21-cm experiments will need to detect the weak 21-cm signal over foregrounds several orders of magnitude brighter. This requires accurate modeling of the galactic and extragalactic emission and of its contaminants due to instrument chromaticity, the ionosphere and imperfect calibration. To address this complex modeling problem, we propose a new method based on Gaussian Process Regression (GPR) which is able to cleanly separate the cosmological signal from most of the foreground contaminants. We also propose a new imaging method based on a maximum likelihood framework which solves the interferometric equation directly on the sphere. Using this method, chromatic effects causing the so-called "wedge" are effectively eliminated (i.e. deconvolved) in the cylindrical (k⊥, k∥) power spectrum.
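
    A simplified per-line-of-sight sketch of GPR-based foreground subtraction along frequency, using scikit-learn, is given below; the actual analysis works on gridded visibilities with carefully chosen covariance models, so the length scales here are arbitrary illustration values rather than the paper's priors.

    ```python
    # Sketch: model the spectrally smooth foreground along frequency with a long
    # length-scale GP and keep the residual as the (noisy) 21-cm candidate signal.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def subtract_foregrounds(freqs_mhz, spectrum):
        kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-4)  # illustrative
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(freqs_mhz.reshape(-1, 1), spectrum)
        smooth_foreground = gpr.predict(freqs_mhz.reshape(-1, 1))
        return spectrum - smooth_foreground
    ```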

  19. An interference-based optical authentication scheme using two phase-only masks with different diffraction distances

    NASA Astrophysics Data System (ADS)

    Lu, Dajiang; He, Wenqi; Liao, Meihua; Peng, Xiang

    2017-02-01

    A new method to eliminate the security risk of the well-known interference-based optical cryptosystem is proposed. In this method, which is suitable for security authentication applications, two phase-only masks are separately placed at different distances from the output plane, where a certification image (public image) can be obtained. To further increase the security and flexibility of this authentication system, we employ one more validation image (secret image), which can be observed at another output plane, for confirming the identity of the user. Only when the two correct masks are properly placed at their respective positions can the two significant images be obtained. Besides, even if the legal users exchange their masks (keys), the authentication process will fail and the authentication results will not reveal any information. Numerical simulations are performed to demonstrate the validity and security of the proposed method.

  20. Fault Diagnosis of Rolling Bearing Based on Fast Nonlocal Means and Envelop Spectrum

    PubMed Central

    Lv, Yong; Zhu, Qinglin; Yuan, Rui

    2015-01-01

    The nonlocal means (NL-Means) method, which has been widely used in the field of image processing in recent years, effectively overcomes the limitations of the neighborhood filter and eliminates the artifact and edge problems caused by traditional image denoising methods. Although NL-Means is very popular in the field of 2D image signal processing, it has not received enough attention in the field of 1D signal processing. This paper proposes a novel approach that diagnoses the fault of a rolling bearing based on fast NL-Means and the envelop spectrum. The parameters of the rolling bearing signals are optimized in the proposed method, which is the key contribution of this paper. This approach is applied to the fault diagnosis of rolling bearings, and the results demonstrate its efficiency at detecting rolling bearing failures. PMID:25585105

  1. Isotropic stochastic rotation dynamics

    NASA Astrophysics Data System (ADS)

    Mühlbauer, Sebastian; Strobl, Severin; Pöschel, Thorsten

    2017-12-01

    Stochastic rotation dynamics (SRD) is a widely used method for the mesoscopic modeling of complex fluids, such as colloidal suspensions or multiphase flows. In this method, however, the underlying Cartesian grid defining the coarse-grained interaction volumes induces anisotropy. We propose an isotropic, lattice-free variant of stochastic rotation dynamics, termed iSRD. Instead of Cartesian grid cells, we employ randomly distributed spherical interaction volumes. This eliminates the requirement of a grid shift, which is essential in standard SRD to maintain Galilean invariance. We derive analytical expressions for the viscosity and the diffusion coefficient in relation to the model parameters, which show excellent agreement with the results obtained in iSRD simulations. The proposed algorithm is particularly suitable to model systems bound by walls of complex shape, where the domain cannot be meshed uniformly. The presented approach is not limited to SRD but is applicable to any other mesoscopic method, where particles interact within certain coarse-grained volumes.

  2. The use of fatigue tests in the manufacture of automotive steel wheels.

    NASA Astrophysics Data System (ADS)

    Drozyner, P.; Rychlik, A.

    2016-08-01

    Production for the automotive industry must be particularly sensitive to the safety and reliability of manufactured components. One such element is the wheel rim, whose durability significantly affects the safety of transport. Customer complaints regarding this element are particularly painful for the manufacturer because they are almost always associated with an accident or near-accident. The authors propose an original, comprehensive method of quality control at selected stages of rim production: material supply, production, and pre-shipment inspection. Tests with the proposed method are carried out on an originally designed inertial fatigue machine, which allows bending fatigue tests in the frequency range of 0 to 50 Hz at controlled increments of vibration amplitude. The method has been positively verified in a rim factory in Poland, where its implementation resulted in an almost complete elimination of complaints caused by manufacturing and material errors.

  3. DSM-5: proposed changes to depressive disorders.

    PubMed

    Wakefield, Jerome C

    2012-03-01

    The Diagnostic and Statistical Manual of Mental Disorders (DSM) is currently undergoing a revision that will lead to a fifth edition in 2013. Proposed changes for DSM-5 include the creation of several new categories of depressive disorder. Some nosologists have expressed concern that the proposed changes could yield many 'false-positive diagnoses' in which normal distress is mislabeled as a mental disorder. Such confusion of normal distress and mental disorder undermines the interpretability of clinical trials and etiological research, causes inefficient allocation of resources, and incurs risks of unnecessary treatment. To evaluate these concerns, I critically examine five proposed DSM-5 expansions in the scope of depressive and grief disorders: (1) a new mixed anxiety/depression category; (2) a new premenstrual dysphoric disorder category; (3) elimination of the major depression bereavement exclusion; (4) elimination of the adjustment disorder bereavement exclusion, thus allowing the diagnosis of subsyndromal depressive symptoms during bereavement as adjustment disorders; and (5) a new category of adjustment disorder related to bereavement for diagnosing pathological non-depressive grief. I examine each proposal's face validity and conceptual coherence as well as empirical support where relevant, with special attention to potential implications for false-positive diagnoses. I conclude that mixed anxiety/depression and premenstrual dysphoric disorder are needed categories, but are too broadly drawn and will yield substantial false positives; that the elimination of the bereavement exclusion is not supported by the evidence; and that the proposed elimination of the adjustment-disorder bereavement exclusion, as well as the new category of grief-related adjustment disorder, are inconsistent with recent grief research, which suggests that these proposals would massively pathologize normal grief responses.

  4. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

    Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition result in road lane images. Aiming at the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected domain theory is proposed. First, by analyzing features like grey level distribution and the illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and convert them into binary images separately. It then uses connected domain theory to amend the outcome of segmentation, remove noise and fill the interior zone of lane lines. Experiments have proved that this method can eliminate the influence of illumination and lane line abrasion, removing noise thoroughly while maintaining high segmentation precision.
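
    The adaptive-threshold plus connected-component step can be sketched with OpenCV as below; the window size, offset and minimum-area limit are placeholders rather than the paper's tuned values.

    ```python
    # Sketch: adaptive thresholding per image section, then connected-component
    # filtering to keep only sufficiently large, bright lane-line regions.
    # Parameter values (block size, offset, minimum area) are placeholders.
    import cv2
    import numpy as np

    def lane_line_mask(gray, block_size=51, offset=-10, min_area=500):
        binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, block_size, offset)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        mask = np.zeros_like(binary)
        for i in range(1, n):                                  # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                mask[labels == i] = 255
        return mask
    ```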

  5. Phased array ghost elimination (PAGE) for segmented SSFP imaging with interrupted steady-state.

    PubMed

    Kellman, Peter; Guttman, Michael A; Herzka, Daniel A; McVeigh, Elliot R

    2002-12-01

    Steady-state free precession (SSFP) has recently proven to be valuable for cardiac imaging due to its high signal-to-noise ratio and blood-myocardium contrast. Data acquired using ECG-triggered, segmented sequences during the approach to steady-state, or return to steady-state after interruption, may have ghost artifacts due to periodic k-space distortion. Schemes involving several preparatory RF pulses have been proposed to restore steady-state, but these consume imaging time during early systole. Alternatively, the phased-array ghost elimination (PAGE) method may be used to remove ghost artifacts from the first several frames. PAGE was demonstrated for cardiac cine SSFP imaging with interrupted steady-state using a simple alpha/2 magnetization preparation and storage scheme and a spatial tagging preparation.

  6. Development of a Charge-Implicit ReaxFF Potential for Hydrocarbon Systems.

    PubMed

    Kański, Michał; Maciążek, Dawid; Postawa, Zbigniew; Ashraf, Chowdhury M; van Duin, Adri C T; Garrison, Barbara J

    2018-01-18

    Molecular dynamics (MD) simulations continue to make important contributions to understanding chemical and physical processes. Concomitant with the growth of MD simulations is the need to have interaction potentials that both represent the chemistry of the system and are computationally efficient. We propose a modification to the ReaxFF potential for carbon and hydrogen that eliminates the time-consuming charge equilibration, eliminates the acknowledged flaws of the electronegativity equalization method, includes an expanded training set for condensed phases, has a repulsive wall for simulations of energetic particle bombardment, and is compatible with the LAMMPS code. This charge-implicit ReaxFF potential is five times faster than the conventional ReaxFF potential for a simulation of keV particle bombardment with a sample size of over 800 000 atoms.

  7. Solution of system of multidimensional differential equations in X for identification of gold nanoparticles on fibers with elimination of uncertainty

    NASA Astrophysics Data System (ADS)

    Dobrovolskaya, T. A.; Emelyanov, V. M.; Emelyanov, V. V.

    2018-05-01

    Results are presented for the compilation and solution of a system of multidimensional differential correlation equations of distribution ellipses for the identification of colloidal gold nanoparticles on polyester fibers, using multidimensional correlation components of Raman polarization spectra. The proposed method increases the accuracy and speed of identification of nanoparticles on polyester fibers by taking into account the longitudinal and transverse polarization of the laser radiation over the entire spectral range, analyzing two peaks simultaneously, one along X (transverse to the fibers) and one along Y (along the fibers). When solving the system using a nonlinear quadratic and differential equation with respect to X, an uncertainty arises, which is eliminated by the numerical addition Δ = +0.02985

  8. A Modified Empirical Wavelet Transform for Acoustic Emission Signal Decomposition in Structural Health Monitoring.

    PubMed

    Dong, Shaopeng; Yuan, Mei; Wang, Qiusheng; Liang, Zhiling

    2018-05-21

    The acoustic emission (AE) method is useful for structural health monitoring (SHM) of composite structures due to its high sensitivity and real-time capability. The main challenge, however, is how to classify the AE data into different failure mechanisms because the detected signals are affected by various factors. Empirical wavelet transform (EWT) is a solution for analyzing the multi-component signals and has been used to process the AE data. In order to solve the spectrum separation problem of the AE signals, this paper proposes a novel modified separation method based on local window maxima (LWM) algorithm. It searches the local maxima of the Fourier spectrum in a proper window, and automatically determines the boundaries of spectrum segmentations, which helps to eliminate the impact of noise interference or frequency dispersion in the detected signal and obtain the meaningful empirical modes that are more related to the damage characteristics. Additionally, both simulation signal and AE signal from the composite structures are used to verify the effectiveness of the proposed method. Finally, the experimental results indicate that the proposed method performs better than the original EWT method in identifying different damage mechanisms of composite structures.
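
    A rough sketch of the local-window-maxima boundary search, as read from the abstract, is shown below: find spectral maxima inside a sliding window and place segment boundaries at the spectral minima between neighbouring maxima. The window length and the boundary rule are assumptions, not the authors' exact algorithm.

    ```python
    # Sketch of a local-window-maxima (LWM) style boundary search on the Fourier
    # magnitude spectrum. Window length and boundary placement are assumptions.
    import numpy as np

    def lwm_boundaries(signal, fs, window_bins=32):
        mag = np.abs(np.fft.rfft(signal))
        peaks = [i for i in range(1, len(mag) - 1)
                 if mag[i] == mag[max(0, i - window_bins):i + window_bins + 1].max()
                 and mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]
        # Put each boundary at the spectral minimum between consecutive peaks.
        bounds = [np.argmin(mag[a:b]) + a for a, b in zip(peaks[:-1], peaks[1:])]
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return freqs[bounds]
    ```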

  9. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade the model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually such interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light scattering effects and chemical light absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was performed over the full spectral range using the previously obtained parameters for the calibration set and test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and performance comparable to other state-of-the-art methods.
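
    A toy version of the additive/multiplicative correction is sketched below, assuming the IDR indices are already known and a reference spectrum (e.g., the mean spectrum of the calibration set) is available; the automatic two-step IDR search itself is not reproduced.

    ```python
    # Sketch: estimate the additive (a) and multiplicative (b) interference
    # parameters from an interference dominant region (IDR) against a reference
    # spectrum, then correct the full spectrum. The IDR search is not implemented.
    import numpy as np

    def idr_correct(spectrum, reference, idr_slice):
        b, a = np.polyfit(reference[idr_slice], spectrum[idr_slice], 1)  # spectrum ~ a + b*ref
        return (spectrum - a) / b
    ```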

  10. A Modified Empirical Wavelet Transform for Acoustic Emission Signal Decomposition in Structural Health Monitoring

    PubMed Central

    Dong, Shaopeng; Yuan, Mei; Wang, Qiusheng; Liang, Zhiling

    2018-01-01

    The acoustic emission (AE) method is useful for structural health monitoring (SHM) of composite structures due to its high sensitivity and real-time capability. The main challenge, however, is how to classify the AE data into different failure mechanisms because the detected signals are affected by various factors. Empirical wavelet transform (EWT) is a solution for analyzing the multi-component signals and has been used to process the AE data. In order to solve the spectrum separation problem of the AE signals, this paper proposes a novel modified separation method based on local window maxima (LWM) algorithm. It searches the local maxima of the Fourier spectrum in a proper window, and automatically determines the boundaries of spectrum segmentations, which helps to eliminate the impact of noise interference or frequency dispersion in the detected signal and obtain the meaningful empirical modes that are more related to the damage characteristics. Additionally, both simulation signal and AE signal from the composite structures are used to verify the effectiveness of the proposed method. Finally, the experimental results indicate that the proposed method performs better than the original EWT method in identifying different damage mechanisms of composite structures. PMID:29883411

  11. Electrohydrodynamic assisted droplet alignment for lens fabrication by droplet evaporation

    NASA Astrophysics Data System (ADS)

    Wang, Guangxu; Deng, Jia; Guo, Xing

    2018-04-01

    Lens fabrication by droplet evaporation has attracted a lot of attention since the fabrication approach is simple and moldless. Droplet position accuracy is a critical parameter in this approach, and thus it is of great importance to use accurate methods to realize droplet position alignment. In this paper, we propose an electrohydrodynamic (EHD) assisted droplet alignment method. An electrostatic force was induced at the interface between materials to overcome the surface tension and gravity. The deviation of the droplet position from the center region was eliminated and alignment was successfully realized. We demonstrated the capability of the proposed method theoretically and experimentally. First, we built a simulation model coupling the three-phase flow formulations and the EHD equations to study the three-phase flow process in an electric field. Results show that it is the uneven electric field distribution that leads to the relative movement of the droplet. Then, we conducted experiments to verify the method. Experimental results are consistent with the numerical simulation results. Moreover, we successfully fabricated a crater lens after applying the proposed method. A light emitting diode module packaged with the fabricated crater lens shows a significant adjustment of the light intensity distribution compared with a spherical cap lens.

  12. 76 FR 23349 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-26

    ... Proposed Rule Change To Eliminate the Expire Time April 20, 2011. Pursuant to Section 19(b)(1) of the... Quotes and Orders, to eliminate the "Time in Force" designation called "Expire Time." This change is... eliminate the Expire Time. Currently, Chapter VI, Section 1(g) provides that the term "Time in Force...

  13. 78 FR 73911 - Self-Regulatory Organizations; BOX Options Exchange LLC; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-09

    ... 3120 To Extend the Pilot Program That Eliminated the Position Limits for Options on SPDR S&P 500 ETF... extend the pilot program that eliminated the position limits for options on SPDR S&P 500 ETF ("SPY...- regulatory organizations ("SROs") have adopted similar rules eliminating position limits on SPY and market...

  14. 77 FR 16307 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-20

    ... Change To Eliminate the 100MB Connectivity Option and Fee March 14, 2012. Pursuant to Section 19(b)(1) of... Exchange proposes to eliminate 100MB connectivity between the Exchange and co-located servers, as well as..., Section X(b) to eliminate 100MB connectivity between the Exchange and co-located servers, as well as...

  15. Global effects of accelerated tariff liberalization in the forest products sector to 2010.

    Treesearch

    Shushuai Zhu; Joseph Buongiorno; David J. Brooks

    2002-01-01

    This study projects the effects of tariff elimination on the world forest products sector. Projections were done for two scenarios: (1) progressive tariff elimination according to the schedule agreed to under the current General Agreement on Tariffs and Trade (GATT) and (2) complete elimination of tariffs on wood products as proposed within the Asia-Pacific Economic Cooperation (APEC)...

  16. 77 FR 56895 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-14

    ... Transaction Fees To Eliminate the Step-Up Rate for Non-Floor Broker Transactions September 10, 2012. Pursuant... its Price List to eliminate the step-up rate for non-Floor broker transactions. The text of the... its Price List to eliminate the step-up rate for non-Floor broker transactions. The Exchange proposes...

  17. Marine environmental protection: An application of the nanometer photo catalyst method on decomposition of benzene.

    PubMed

    Lin, Mu-Chien; Kao, Jui-Chung

    2016-04-15

    Bioremediation is currently extensively employed in the elimination of coastal oil pollution, but it is not very effective as the process takes several months to degrade oil. Among the components of oil, benzene degradation is difficult due to its stable characteristics. This paper describes an experimental study on the decomposition of benzene by titanium dioxide (TiO2) nanometer photocatalysis. The photocatalyst is illuminated with 360-nm ultraviolet light for generation of peroxide ions. This results in complete decomposition of benzene, thus yielding CO2 and H2O. In this study, a nonwoven fabric is coated with the photocatalyst and benzene. Using the Double-Shot Py-GC system on the residual component, complete decomposition of the benzene was verified after 4 h of exposure to ultraviolet light. The method proposed in this study can be directly applied to the elimination of marine oil pollution. Further studies will be conducted on coastal oil pollution in situ.

  18. Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi

    2013-03-01

    Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications. However, the automated detection of architectural distortion remains challenging with respect to sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure that selects filter parameters depending on the thickness of the gland structure. As for post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis. Moreover, background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index followed by binarization and labeling. False positives in the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database of Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.

  19. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    PubMed

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-03-24

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
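
    A bare-bones sketch of the WiFi fingerprint KNN step is shown below, without the floor-map topology constraint, the P-O fusion, or the adaptive-fading EKF; the RSSI database and reference-point coordinates are hypothetical inputs.

    ```python
    # Sketch: plain KNN fingerprint positioning. The topology constraint, P-O
    # measurements and the adaptive-fading EKF of the paper are omitted.
    import numpy as np

    def knn_position(rssi_query, rssi_db, rp_coords, k=4):
        """rssi_db: (n_reference_points, n_aps); rp_coords: (n_reference_points, 2)."""
        d = np.linalg.norm(rssi_db - rssi_query, axis=1)   # signal-space distance
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-6)                      # inverse-distance weights
        return (rp_coords[nearest] * w[:, None]).sum(axis=0) / w.sum()
    ```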

  20. A Planning Approach of Engineering Characteristics Based on QFD-TRIZ Integrated

    NASA Astrophysics Data System (ADS)

    Liu, Shang; Shi, Dongyan; Zhang, Ying

    Traditional QFD planning method compromises contradictions between engineering characteristics to achieve higher customer satisfaction. However, this compromise trade-off can not eliminate the contradictions existing among the engineering characteristics which limited the overall customer satisfaction. QFD (Quality function deployment) integrated with TRIZ (the Russian acronym of the Theory of Inventive Problem Solving) becomes hot research recently for TRIZ can be used to solve contradictions between engineering characteristics which construct the roof of HOQ (House of quality). But, the traditional QFD planning approach is not suitable for QFD integrated with TRIZ for that TRIZ requires emphasizing the contradictions between engineering characteristics at problem definition stage instead of compromising trade-off. So, a new planning approach based on QFD / TRIZ integration is proposed in this paper, which based on the consideration of the correlation matrix of engineering characteristics and customer satisfaction on the basis of cost. The proposed approach suggests that TRIZ should be applied to solve contradictions at the first step, and the correlation matrix of engineering characteristics should be amended at the second step, and at next step IFR (ideal final result) must be validated, then planning execute. An example is used to illustrate the proposed approach. The application indicated that higher customer satisfaction can be met and the contradictions between the characteristic parameters are eliminated.

  1. Enhancement of the Comb Filtering Selectivity Using Iterative Moving Average for Periodic Waveform and Harmonic Elimination

    PubMed Central

    Wu, Yan; Aarts, Ronald M.

    2018-01-01

    A recurring problem regarding the use of conventional comb filter approaches for elimination of periodic waveforms is the degree of selectivity achieved by the filtering process. Some applications, such as the gradient artefact correction in EEG recordings during coregistered EEG-fMRI, require a highly selective comb filtering that provides effective attenuation in the stopbands and gain close to unity in the pass-bands. In this paper, we present a novel comb filtering implementation whereby the iterative filtering application of FIR moving average-based approaches is exploited in order to enhance the comb filtering selectivity. Our results indicate that the proposed approach can be used to effectively approximate the FIR moving average filter characteristics to those of an ideal filter. A cascaded implementation using the proposed approach shows to further increase the attenuation in the filter stopbands. Moreover, broadening of the bandwidth of the comb filtering stopbands around −3 dB according to the fundamental frequency of the stopband can be achieved by the novel method, which constitutes an important characteristic to account for broadening of the harmonic gradient artefact spectral lines. In parallel, the proposed filtering implementation can also be used to design a novel notch filtering approach with enhanced selectivity as well. PMID:29599955
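
    The underlying building block, a length-P moving average whose spectral nulls sit at the fundamental and all harmonics of a P-sample periodic artefact, can be sketched as below; repeating the filter is only a crude stand-in for the selectivity-enhancing iteration the paper describes, and the integer sample period is an assumption.

    ```python
    # Sketch: FIR moving-average comb filtering of a P-sample periodic artefact.
    # A length-P moving average has nulls at k*fs/P (fundamental and harmonics).
    # Repeating the filter n_iter times only hints at the paper's iterative scheme.
    import numpy as np
    from scipy.signal import lfilter

    def ma_comb(x, period_samples, n_iter=1):
        b = np.ones(period_samples) / period_samples
        y = x
        for _ in range(n_iter):
            y = lfilter(b, [1.0], y)
        return y   # periodic component and its harmonics are attenuated
    ```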

  2. Open-Source Photometric System for Enzymatic Nitrate Quantification

    PubMed Central

    Wittbrodt, B. T.; Squires, D. A.; Walbeck, J.; Campbell, E.; Campbell, W. H.; Pearce, J. M.

    2015-01-01

    Nitrate, the most oxidized form of nitrogen, is regulated to protect people and animals from harmful levels, as there is a large overabundance due to anthropogenic factors. Widespread field testing for nitrate could begin to address the nitrate pollution problem; however, the Cadmium Reduction Method, the leading certified method to detect and quantify nitrate, demands the use of a toxic heavy metal. An alternative, the recently proposed Environmental Protection Agency Nitrate Reductase Nitrate-Nitrogen Analysis Method, eliminates this problem but requires an expensive proprietary spectrophotometer. The development of an inexpensive, portable, handheld photometer would greatly expedite field nitrate analysis to combat pollution. To accomplish this goal, a methodology is presented for the design, development, and technical validation of an improved open-source water testing platform capable of performing the Nitrate Reductase Nitrate-Nitrogen Analysis Method. This approach is evaluated for its potential to i) eliminate the need for toxic chemicals in water testing for nitrate and nitrite, ii) reduce the cost of equipment needed to perform this water quality measurement, and iii) make the method easier to carry out in the field. The device performs as well as commercial proprietary systems for less than 15% of their materials cost. This allows for greater access to the technology and to the new, safer nitrate testing technique. PMID:26244342

  4. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
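
    Because the low-order operator is only available through its action (two transport sweeps plus a transmission iteration), a matrix-free conjugate gradient is the natural solver. The generic sketch below takes that action as a callable; it is standard CG, not the TSA-specific code.

        import numpy as np

        def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=200):
            x = np.zeros_like(b)
            r = b - apply_A(x)          # initial residual
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x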

  5. Unbalanced voltage control of virtual synchronous generator in isolated micro-grid

    NASA Astrophysics Data System (ADS)

    Cao, Y. Z.; Wang, H. N.; Chen, B.

    2017-06-01

    Virtual synchronous generator (VSG) control is recommended to stabilize the voltage and frequency in an isolated micro-grid. However, common VSG control is challenged by widely used unbalanced loads, and the resulting unbalanced voltage problem worsens the power quality of the micro-grid. In this paper, the mathematical model of the VSG was presented. Based on an analysis of the positive- and negative-sequence equivalent circuits of the VSG, an approach was proposed to eliminate the negative-sequence voltage of the VSG under unbalanced loads. A delay cancellation method and a PI controller were utilized to identify and suppress the negative-sequence voltages. Simulation results verify the feasibility of the proposed control strategy.

  6. Guiding properties of asymmetric hybrid plasmonic waveguides on dielectric substrates

    PubMed Central

    2014-01-01

    We propose an asymmetric hybrid plasmonic waveguide, placed on a substrate for practical applications, obtained by introducing an asymmetry into a symmetric hybrid plasmonic waveguide. The guiding properties of the asymmetric hybrid plasmonic waveguide are investigated using the finite element method. The results show that, with proper waveguide sizes, the proposed waveguide can eliminate the influence of the substrate on its guiding properties and restore its broken symmetric mode. We obtained a maximum propagation length of 2.49 × 10³ μm, approximately equal to that of the symmetric hybrid plasmonic waveguide embedded in air cladding with comparable nanoscale confinement. PMID:24406096

  7. EFFECTS OF LASER RADIATION ON MATTER. LASER PLASMA: Feasibility of investigation of optical breakdown statistics using multifrequency lasers

    NASA Astrophysics Data System (ADS)

    Ulanov, S. F.

    1990-06-01

    A method proposed for investigating the statistics of bulk optical breakdown relies on multifrequency lasers, which eliminates the influence of the laser radiation intensity statistics. The method is based on preliminary recording of the peak intensity statistics of multifrequency laser radiation pulses at the caustic using the optical breakdown threshold of K8 glass. The probability density distribution function was obtained at the focus for the peak intensities of the radiation pulses of a multifrequency laser. This method may be used to study the self-interaction under conditions of bulk optical breakdown of transparent dielectrics.

  8. Terrain mapping and control of unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kang, Yeonsik

    In this thesis, methods for terrain mapping and control of unmanned aerial vehicles (UAVs) are proposed. First, a robust obstacle detection and tracking algorithm is introduced to eliminate the clutter noise uncorrelated with the real obstacle. This is an important problem since most types of sensor measurements are vulnerable to noise. In order to eliminate such noise, a Kalman filter-based interacting multiple model (IMM) algorithm is employed to effectively detect obstacles and estimate their positions precisely. Using the outcome of the IMM-based obstacle detection algorithm, a new method of building a probabilistic occupancy grid map is proposed based on Bayes rule in probability theory. Since the proposed map update law uses the outputs of the IMM-based obstacle detection algorithm, simultaneous tracking of moving targets and mapping of stationary obstacles are possible. This can be helpful especially in a noisy outdoor environment where different types of obstacles exist. Another feature of the algorithm is its capability to eliminate clutter noise as well as measurement noise. The proposed algorithm is simulated in Matlab using realistic sensor models. The results show close agreement with the layout of real obstacles. An efficient method called "quadtree" is used to process massive geographical information in a convenient manner. The algorithm is evaluated in a realistic simulation environment called RIPTIDE, which the NASA Ames Research Center developed to assess the performance of complicated software for UAVs. Supposing that a UAV is equipped with the abovementioned obstacle detection and mapping algorithm, the control problem of a small fixed-wing UAV is studied. A Nonlinear Model Predictive Control (NMPC) is designed as a high level controller for the fixed-wing UAV using a kinematic model of the UAV. The kinematic model is employed because of the assumption that there exist low level controls on the UAV. The UAV dynamics are nonlinear with input constraints, which is the main challenge explored in this thesis. The control objective of the NMPC is to track a desired line, and an analysis of the designed NMPC's stability follows to find the conditions that assure stability. Then, the control objective is extended to track adjoined multiple line segments with obstacle avoidance capability. In simulation, the performance of the NMPC is superb with fast convergence and small overshoot. The computation time is not a burden for a fixed-wing UAV controller with a Pentium level on-board computer that provides a reasonable control update rate.
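
    The Bayes-rule map update described above can be illustrated with a standard log-odds occupancy grid. The inverse sensor probabilities below are illustrative assumptions, not the thesis' IMM-based model.

        import numpy as np

        def logodds(p):
            return np.log(p / (1.0 - p))

        def update_grid(grid_logodds, hit_cells, miss_cells, p_hit=0.7, p_miss=0.35):
            """grid_logodds: 2-D array; hit/miss cells given as index-array tuples."""
            grid_logodds[hit_cells] += logodds(p_hit)    # evidence for occupancy
            grid_logodds[miss_cells] += logodds(p_miss)  # evidence for free space
            return grid_logodds

        def occupancy_probability(grid_logodds):
            return 1.0 - 1.0 / (1.0 + np.exp(grid_logodds))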

  9. Shadow Areas Robust Matching Among Image Sequence in Planetary Landing

    NASA Astrophysics Data System (ADS)

    Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin

    2017-01-01

    In this paper, an approach for robustly matching shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins by detecting shadow areas, which are extracted using Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a descriptor called Multiple Angles-SIFT (MA-SIFT), derived from SIFT, is proposed; the descriptor can extract more features of an area. Finally, to eliminate the influence of outliers, an improved RANSAC method based on the Skinner Operation Condition is proposed to extract inliers. A series of experiments was conducted to test the performance of the proposed approach, and the results show that the approach can maintain matching accuracy at a high level even when the differences among the images are obvious and no attitude measurements are supplied.

  10. Rapid optimization method of the strong stray light elimination for extremely weak light signal detection.

    PubMed

    Wang, Geng; Xing, Fei; Wei, Minsong; You, Zheng

    2017-10-16

    Strong stray light severely interferes with the detection of weak and small optical signals and is difficult to suppress. In this paper, a miniaturized baffle with angled vanes was proposed and a rapid optimization model for strong-light elimination was built, which suppresses stray light better than conventional vanes and can optimize the positions of the vanes efficiently and accurately. Furthermore, a light energy distribution model was built based on the light projection at a specific angle, and light propagation models of the vanes and sidewalls were built based on Lambert scattering; both serve as the basis of a stray light calculation method. Moreover, the Monte Carlo method was employed to realize the Point Source Transmittance (PST) simulation. The simulation results are consistent with the calculations based on our models, and the PST can be improved by 2-3 times at small incident angles for a baffle designed by the new method. The simulation results were also verified by laboratory tests, and the new model, with its derived analytical expressions, can reduce the simulation time significantly.

  11. Machine Learning Toolkit for Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-03-31

    Support Vector Machines (SVM) is a popular machine learning technique, which has been applied to a wide range of domains such as science, finance, and social networks for supervised learning. MaTEx undertakes the challenge of designing a scalable parallel SVM training algorithm for large scale systems, which includes commodity multi-core machines, tightly connected supercomputers and cloud computing systems. Several techniques are proposed for improved speed and memory space usage, including adaptive and aggressive elimination of samples for faster convergence, and sparse format representation of data samples. Several heuristics, ranging from earliest-possible to lazy elimination of non-contributing samples, are considered in MaTEx. In many cases, where an early sample elimination might result in a false positive, low overhead mechanisms for reconstruction of key data structures are proposed. The proposed algorithm and heuristics are implemented and evaluated on various publicly available datasets.

  12. 75 FR 10542 - Self-Regulatory Organizations; National Securities Clearing Corporation; Order Approving Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-08

    ... Eliminate Guarantee of Payment in Connection With the Envelope Settlement Service March 1, 2010. I... being eliminated. The change to Addendum K is to delete the provision whereby NSCC provided a guarantee...

  13. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
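
    A simplified sketch of the drift-correction recipe described above: local minima form an initial baseline vector, minima sitting on peaks are iteratively treated as outliers and replaced by a smooth fit, and the converged baseline is expanded by linear interpolation. The polynomial order and outlier threshold are illustrative assumptions.

        import numpy as np

        def correct_drift(y, n_iter=50, k=3.0):
            # 1. local minima of the chromatogram
            idx = np.where((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:]))[0] + 1
            base = y[idx].astype(float)

            # 2. iteratively replace outliers (minima belonging to peaks) by a smooth fit
            for _ in range(n_iter):
                fit = np.polyval(np.polyfit(idx, base, 3), idx)
                resid = base - fit
                outliers = resid > k * np.std(resid)
                if not outliers.any():
                    break
                base[outliers] = fit[outliers]

            # 3. expand the baseline to every point by linear interpolation
            baseline = np.interp(np.arange(len(y)), idx, base)
            return y - baseline, baseline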

  14. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order-cumulants-based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, and thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.

  15. Analytical Eco-Scale for Assessing the Greenness of a Developed RP-HPLC Method Used for Simultaneous Analysis of Combined Antihypertensive Medications.

    PubMed

    Mohamed, Heba M; Lamie, Nesrine T

    2016-09-01

    In the past few decades the analytical community has been focused on eliminating or reducing the usage of hazardous chemicals and solvents, in different analytical methodologies, that have been ascertained to be extremely dangerous to human health and environment. In this context, environmentally friendly, green, or clean practices have been implemented in different research areas. This study presents a greener alternative of conventional RP-HPLC methods for the simultaneous determination and quantitative analysis of a pharmaceutical ternary mixture composed of telmisartan, hydrochlorothiazide, and amlodipine besylate, using an ecofriendly mobile phase and short run time with the least amount of waste production. This solvent-replacement approach was feasible without compromising method performance criteria, such as separation efficiency, peak symmetry, and chromatographic retention. The greenness profile of the proposed method was assessed and compared with reported conventional methods using the analytical Eco-Scale as an assessment tool. The proposed method was found to be greener in terms of usage of hazardous chemicals and solvents, energy consumption, and production of waste. The proposed method can be safely used for the routine analysis of the studied pharmaceutical ternary mixture with a minimal detrimental impact on human health and the environment.

  16. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

    Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability. This tends to damp out the small scale structures in the flow. Recently some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions which do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is done based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region and a comparison is made to experimental data. In the CFD a fully turbulent boundary layer is observed downstream.

  17. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    PubMed

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on the recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.

  18. Real-time evaluation of polyphenol oxidase (PPO) activity in lychee pericarp based on weighted combination of spectral data and image features as determined by fuzzy neural network.

    PubMed

    Yang, Yi-Chao; Sun, Da-Wen; Wang, Nan-Nan; Xie, Anguo

    2015-07-01

    A novel method of using hyperspectral imaging technique with the weighted combination of spectral data and image features by fuzzy neural network (FNN) was proposed for real-time prediction of polyphenol oxidase (PPO) activity in lychee pericarp. Lychee images were obtained by a hyperspectral reflectance imaging system operating in the range of 400-1000 nm. A support vector machine-recursive feature elimination (SVM-RFE) algorithm was applied to eliminating variables with no or little information for the prediction from all bands, resulting in a reduced set of optimal wavelengths. Spectral information at the optimal wavelengths and image color features were then used respectively to develop calibration models for the prediction of PPO in pericarp during storage, and the results of two models were compared. In order to improve the prediction accuracy, a decision strategy was developed based on weighted combination of spectral data and image features, in which the weights were determined by FNN for a better estimation of PPO activity. The results showed that the combined decision model was the best among all of the calibration models, with high R² values of 0.9117 and 0.9072 and low RMSEs of 0.45% and 0.459% for calibration and prediction, respectively. These results demonstrate that the proposed weighted combined decision method has great potential for improving model performance. The proposed technique could be used for a better prediction of other internal and external quality attributes of fruits. Copyright © 2015 Elsevier B.V. All rights reserved.
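
    The SVM-RFE wavelength-selection step can be sketched with scikit-learn as below, where X is the (samples x wavelengths) spectral matrix and y a class label derived from PPO activity; the linear kernel, the number of retained bands and the variable names are assumptions for illustration only.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE

        def select_wavelengths(X, y, wavelengths, n_keep=20):
            svm = SVC(kernel="linear")        # linear kernel exposes weights for ranking
            rfe = RFE(estimator=svm, n_features_to_select=n_keep, step=1)
            rfe.fit(X, y)
            return np.asarray(wavelengths)[rfe.support_]   # retained optimal wavelengths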

  19. Magnetic nanoparticle thermometry independent of Brownian relaxation

    NASA Astrophysics Data System (ADS)

    Zhong, Jing; Schilling, Meinhard; Ludwig, Frank

    2018-01-01

    An improved method of magnetic nanoparticle (MNP) thermometry is proposed. The phase lag ϕ of the fundamental f0 harmonic is measured to eliminate the influence of Brownian relaxation on the ratio of the 3f0 to f0 harmonic amplitudes applying a phenomenological model, thus allowing measurements in high-frequency ac magnetic fields. The model is verified by simulations of the Fokker-Planck equation. An MNP spectrometer is calibrated for the measurements of the phase lag ϕ and the amplitudes of the 3f0 and f0 harmonics. Calibration curves of the harmonic ratio and tanϕ are measured by varying the frequency (from 10 Hz to 1840 Hz) of ac magnetic fields with different amplitudes (from 3.60 mT to 4.00 mT) at a known temperature. A phenomenological model is employed to fit the calibration curves. Afterwards, the improved method is proposed to iteratively compensate the measured harmonic ratio with tanϕ, and consequently calculate temperature applying the static Langevin function. Experimental results on SHP-25 MNPs show that the proposed method significantly improves the systematic error to 2 K at maximum with a relative accuracy of about 0.63%. This demonstrates the feasibility of the proposed method for MNP thermometry with SHP-25 MNPs even if the MNP signal is affected by Brownian relaxation.

  20. Parallelized Stochastic Cutoff Method for Long-Range Interacting Systems

    NASA Astrophysics Data System (ADS)

    Endo, Eishin; Toga, Yuta; Sasaki, Munetaka

    2015-07-01

    We present a method of parallelizing the stochastic cutoff (SCO) method, which is a Monte-Carlo method for long-range interacting systems. After interactions are eliminated by the SCO method, we subdivide a lattice into noninteracting interpenetrating sublattices. This subdivision enables us to parallelize the Monte-Carlo calculation in the SCO method. Such subdivision is found by numerically solving the vertex coloring of a graph created by the SCO method. We use an algorithm proposed by Kuhn and Wattenhofer to solve the vertex coloring by parallel computation. This method was applied to a two-dimensional magnetic dipolar system on an L × L square lattice to examine its parallelization efficiency. The result showed that, in the case of L = 2304, the speed of computation increased by a factor of about 10² through parallel computation with 288 processors.
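
    The sublattice subdivision can be pictured as a graph colouring: sites that still interact after the stochastic cutoff share an edge, and each colour class is a noninteracting sublattice that can be updated in parallel. The sketch below uses networkx's sequential greedy colouring as a stand-in for the Kuhn-Wattenhofer parallel algorithm used in the paper.

        import networkx as nx

        def noninteracting_sublattices(n_sites, interacting_pairs):
            G = nx.Graph()
            G.add_nodes_from(range(n_sites))
            G.add_edges_from(interacting_pairs)       # bonds surviving the stochastic cutoff
            coloring = nx.greedy_color(G, strategy="largest_first")
            sublattices = {}
            for site, color in coloring.items():
                sublattices.setdefault(color, []).append(site)
            return list(sublattices.values())         # each group can be swept in parallel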

  1. Data-driven adaptive fractional order PI control for PMSM servo system with measurement noise and data dropouts.

    PubMed

    Xie, Yuanlong; Tang, Xiaoqi; Song, Bao; Zhou, Xiangdong; Guo, Yixuan

    2018-04-01

    In this paper, data-driven adaptive fractional order proportional integral (AFOPI) control is presented for a permanent magnet synchronous motor (PMSM) servo system perturbed by measurement noise and data dropouts. The proposed method directly exploits the closed-loop process data for the AFOPI controller design under unknown noise distribution and data missing probability. Firstly, the proposed method constructs the AFOPI controller tuning problem as a parameter identification problem using the modified lp-norm virtual reference feedback tuning (VRFT). Then, iteratively reweighted least squares is integrated into the lp-norm VRFT to give a consistent compensation solution for the AFOPI controller. The measurement noise and data dropouts are estimated and eliminated by feedback compensation periodically, so that the AFOPI controller is updated online to accommodate the time-varying operating conditions. Moreover, the convergence and stability are guaranteed by mathematical analysis. Finally, the effectiveness of the proposed method is demonstrated both in simulations and in experiments implemented on a practical PMSM servo system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
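
    The lp-norm identification step can be illustrated with plain iteratively reweighted least squares: a parameter vector minimizing the lp-norm of the residual is found by repeatedly solving a weighted least-squares problem. The regressor Phi and target y would come from the virtual-reference construction; p, the iteration count and the damping constant eps are assumptions.

        import numpy as np

        def irls_lp(Phi, y, p=1.2, n_iter=30, eps=1e-6):
            theta = np.linalg.lstsq(Phi, y, rcond=None)[0]           # ordinary LS start
            for _ in range(n_iter):
                resid = y - Phi @ theta
                w = np.power(np.abs(resid) + eps, (p - 2.0) / 2.0)   # sqrt of IRLS weights
                theta = np.linalg.lstsq(Phi * w[:, None], y * w, rcond=None)[0]
            return theta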

  2. Viscoacoustic anisotropic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Qu, Yingming; Li, Zhenchun; Huang, Jianping; Li, Jinli

    2017-01-01

    A viscoacoustic vertical transverse isotropic (VTI) quasi-differential wave equation, which accounts for both the viscosity and the anisotropy of media, is proposed for wavefield simulation in this study. The finite difference method is used to solve the equations; the attenuation terms are solved in the wavenumber domain, and all remaining terms in the time-space domain. To stabilize the adjoint wavefield, robust regularization operators are applied to the wave equation to eliminate the high-frequency component of the numerical noise produced during the backward propagation of the viscoacoustic wavefield. Based on these strategies, we derive the corresponding gradient formula and implement a viscoacoustic VTI full waveform inversion (FWI). Numerical tests verify that our proposed viscoacoustic VTI FWI can produce accurate and stable inversion results for viscoacoustic VTI data sets. In addition, we test our method's sensitivity to velocity, Q, and anisotropic parameters. The results show that the sensitivity to velocity is much higher than that to Q and the anisotropic parameters. As such, our proposed method can produce acceptable inversion results as long as the Q and anisotropic parameters are within predefined thresholds.

  3. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from a perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and another existing method to show the effectiveness of the proposed model.

  5. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction and thereby achieve high-precision autonomous navigation. Firstly, exploiting the stable character of Earth ultraviolet radiance and atmospheric radiative transfer modeling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast, improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract Earth ultraviolet limb features accurately. The Earth's centroid location in the simulated images is then estimated via least squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is applied to realize the autonomous navigation. Experimental results indicate that the proposed method can achieve sub-pixel Earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
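
    The least squares estimation of the Earth centroid from part of the limb can be sketched with a Kasa-style circle fit to the extracted limb-edge pixels; the edge-detection stage is assumed to have been done already, and this generic fit is not necessarily the paper's exact formulation.

        import numpy as np

        def fit_circle(xs, ys):
            # Solve  x^2 + y^2 = 2a*x + 2b*y + c  in the least-squares sense
            A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
            rhs = xs ** 2 + ys ** 2
            a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
            radius = np.sqrt(c + a ** 2 + b ** 2)
            return (a, b), radius      # centre = estimated Earth centroid, limb radius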

  6. Hot-spot selection and evaluation methods for whole slice images of meningiomas and oligodendrogliomas.

    PubMed

    Swiderska, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Slodkowska, Janina

    2015-01-01

    The paper presents a combined method for automatic hot-spot area selection in whole slide images, based on a penalty factor, to support the pathomorphological diagnostic procedure. The studied slides represent meningioma and oligodendroglioma tumors stained with the Ki-67/MIB-1 immunohistochemical reaction, which allows the tumor proliferation index to be determined and gives an indication for medical treatment and prognosis. A combined method based on mathematical morphology, thresholding, texture analysis and classification is proposed and verified. The presented algorithm includes building a specimen map, eliminating hemorrhages from it, and two methods for detecting hot-spot fields with respect to an introduced penalty factor. Furthermore, we propose a localization concordance measure to evaluate the localization of the hot spots selected by the algorithms with respect to the expert's results. The influence of the penalty factor is presented and discussed; the best results were obtained for a penalty factor value of 0.2. These results confirm the effectiveness of the applied approach.

  7. Method of gear fault diagnosis based on EEMD and improved Elman neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng

    2017-05-01

    Gear faults such as cracks and wear are usually difficult to diagnose because the fault information is weak, so a gear fault diagnosis method based on the fusion of EEMD and an improved Elman neural network is proposed. A number of IMF components are obtained by decomposing the denoised fault signals with EEMD, and the pseudo IMF components are eliminated using the correlation coefficient method to obtain the effective IMF components. The energy characteristic value of each effective component is calculated as the input feature of the Elman neural network; the improved Elman neural network extends the standard network by adding a feedback factor. Fault data for a normal gear, broken teeth, a cracked gear and a worn gear were collected in the field and analyzed with the proposed diagnostic method. The results show that, compared with the standard Elman neural network, the improved Elman neural network achieves higher diagnostic efficiency.
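
    The IMF screening and feature step can be sketched as follows: IMFs whose correlation with the original signal falls below a threshold are treated as pseudo-components and dropped, and the normalized energies of the remaining IMFs form the feature vector fed to the network. The EEMD decomposition itself is assumed to be available as an array of IMFs, and the threshold value is an illustrative assumption.

        import numpy as np

        def select_and_featurise(signal, imfs, corr_threshold=0.1):
            keep = []
            for imf in imfs:
                r = np.corrcoef(signal, imf)[0, 1]      # correlation with the raw signal
                if abs(r) >= corr_threshold:
                    keep.append(imf)                    # effective IMF component
            energies = np.array([np.sum(imf ** 2) for imf in keep])
            return keep, energies / energies.sum()      # energy ratios as network inputs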

  8. Curvelet-domain multiple matching method combined with cubic B-spline function

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large number of surface-related multiples present in marine data would seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination methods are based on data-driven theory. However, the elimination effect is often unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a small number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function to these basis points to reconstruct the matching array; third, build a constrained solving equation based on the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, use the BFGS algorithm to iterate and realize fast solution of the sparsely constrained multiple matching algorithm. Moreover, a soft-threshold method is used to further improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
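
    The spline step can be sketched as expanding a small number of unknowns at basis points into a full-length matching-coefficient array, so the optimizer only iterates over the few basis values. SciPy's CubicSpline is used here as a stand-in for the cubic B-spline of the paper; the basis-point layout and function names are assumptions.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def expand_coefficients(basis_values, n_samples):
            basis_pos = np.linspace(0, n_samples - 1, num=len(basis_values))
            spline = CubicSpline(basis_pos, basis_values)
            return spline(np.arange(n_samples))          # dense matching coefficients

        def matched_multiple(predicted_multiple, basis_values):
            # element-wise shaping of the predicted multiple by the expanded coefficients
            coeffs = expand_coefficients(basis_values, len(predicted_multiple))
            return coeffs * predicted_multiple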

  9. Method to suppress DDFS spurious signals in a frequency-hopping synthesizer with DDFS-driven PLL architecture.

    PubMed

    Kwon, Kun-Sup; Yoon, Won-Sang

    2010-01-01

    In this paper we propose a method for removing, from the synthesizer output, spurious signals due to quasi-amplitude modulation and the superposition effect in a frequency-hopping synthesizer with direct digital frequency synthesizer (DDFS)-driven phase-locked loop (PLL) architecture, which has the advantages of high frequency resolution, fast transition time, and small size. These spurious signals depend on the normalized frequency of the DDFS and can be dominant if they occur within the PLL loop bandwidth. We show that such signals can be eliminated by purposefully creating frequency errors in the developed synthesizer.

  10. Laser-induced volatilization and ionization of microparticles

    NASA Technical Reports Server (NTRS)

    Sinha, M. P.

    1984-01-01

    A method for the laser vaporization and ionization of individual micron-size particles is presented whereby a particle is ionized by a laser pulse while in flight in the beam. Ionization in the beam offers a real-time analytical capability and eliminates any possible substrate-sample interferences during an analysis. An experimental arrangement using a high-energy Nd-YAG laser is described, and results are presented for ions generated from potassium biphthalate particles (1.96 micron in diameter). The method proposed here is useful for the chemical analysis of aerosol particles by mass spectrometry and for other spectroscopic and chemical kinetic studies.

  11. Note: Work function change measurement via improved Anderson method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabik, A., E-mail: sabik@ifd.uni.wroc.pl; Gołek, F.; Antczak, G.

    We propose a modification to the Anderson method of work function change (Δϕ) measurement. In this technique, the kinetic energy of the probing electrons is already low enough for non-destructive investigation of delicate molecular systems. In our implementation, however, all electrodes, including the filament of the electron gun, are polarized positively. As a consequence, electron bombardment of any element of the experimental system is eliminated. Our modification improves the cleanliness of the ultra-high vacuum system. As an illustration of the solution's capabilities, we present the Δϕ of the Ag(100) surface induced by cobalt phthalocyanine layers.

  12. Privacy-preserving restricted boltzmann machine.

    PubMed

    Li, Yu; Zhang, Yuan; Ji, Yue

    2014-01-01

    With the arrival of the big data era, it is predicted that distributed data mining will lead to an information technology revolution. To motivate different institutes to collaborate with each other, the crucial issue is to eliminate their concerns regarding data privacy. In this paper, we propose a privacy-preserving method for training a restricted Boltzmann machine (RBM). The RBM can be trained without the institutes revealing their private data to each other when using our privacy-preserving method. We provide a correctness and efficiency analysis of our algorithms. A comparative experiment shows that the accuracy is very close to that of the original RBM model.

  13. Evaluation of the morphology structure of meibomian glands based on mask dodging method

    NASA Astrophysics Data System (ADS)

    Yan, Huangping; Zuo, Yingbo; Chen, Yisha; Chen, Yanping

    2016-10-01

    Low contrast and non-uniform illumination of infrared (IR) meibography images make the detection of meibomian glands challenging. An improved Mask dodging algorithm is proposed. To overcome the low contrast resulting from the traditional Mask dodging method, a scale factor is used to enhance the image after subtracting the background image from the original one. Meibomian glands are detected and the ratio of the meibomian gland area to the measurement area is calculated. The results show that the improved Mask algorithm has a good dodging effect, eliminating non-uniform illumination and effectively improving the contrast of meibography images.
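
    A small sketch of the dodging idea: a heavily blurred copy of the IR image serves as the background (mask), it is subtracted, and the residual is rescaled by a gain factor to restore contrast. The blur width, the scale factor and the offset are illustrative assumptions rather than the paper's parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def mask_dodge(image, sigma=50, scale=2.0):
            img = image.astype(float)
            background = gaussian_filter(img, sigma=sigma)   # low-frequency illumination
            residual = img - background                      # remove non-uniform illumination
            enhanced = scale * residual + background.mean()  # boost contrast around a flat level
            return np.clip(enhanced, 0, 255)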

  14. Terpenes as green solvents for extraction of oil from microalgae.

    PubMed

    Dejoye Tanzi, Celine; Abert Vian, Maryline; Ginies, Christian; Elmaataoui, Mohamed; Chemat, Farid

    2012-07-09

    Herein is described a green and original alternative procedure for the extraction of oil from microalgae. Extractions were carried out using terpenes obtained from renewable feedstocks as alternative solvents instead of hazardous petroleum solvents such as n-hexane. The described method is achieved in two steps using Soxhlet extraction followed by the elimination of the solvent from the medium using Clevenger distillation in the second step. Oils extracted from microalgae were compared in terms of qualitative and quantitative determination. No significant difference was obtained between each extract, allowing us to conclude that the proposed method is green, clean and efficient.

  16. Three-dimensional morphological analysis of intracranial aneurysms: a fully automated method for aneurysm sac isolation and quantification.

    PubMed

    Larrabide, Ignacio; Cruz Villa-Uriol, Maria; Cárdenes, Rubén; Pozo, Jose Maria; Macho, Juan; San Roman, Luis; Blasco, Jordi; Vivas, Elio; Marzo, Alberto; Hose, D Rod; Frangi, Alejandro F

    2011-05-01

    Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection in intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improving the three-dimensional quantification of aneurysmal morphology, to automating the analysis, and hence to reducing the inherent intra- and interobserver variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy. This methodology is based on the analysis of the vasculature skeleton's topology and the subsequent application of concepts from deformable cylinders. These are expanded inside the parent vessel to identify different regions and discriminate the aneurysm sac from the parent vessel wall. The method renders as output the surface representation of the isolated aneurysm sac, which can then be quantified automatically. The proposed method provides the means for identifying the aneurysm neck in a deterministic way. The results obtained by the method were assessed in two ways: they were compared to manual measurements obtained by three independent clinicians, as normally done during diagnosis, and to automated measurements from aneurysms manually isolated by three independent operators, nonclinicians who are experts in vascular image analysis. All the measurements were obtained using in-house tools. The results were qualitatively and quantitatively compared for a set of saccular intracranial aneurysms (n = 26). Measurements performed on a synthetic phantom showed that the automated measurements obtained from manually isolated aneurysms were the most accurate. The differences between the measurements obtained by the clinicians and the manually isolated sacs were statistically significant (neck width: p <0.001, sac height: p = 0.002). When comparing clinicians' measurements to automatically isolated sacs, only the differences for the neck width were significant (neck width: p <0.001, sac height: p = 0.95). However, correlation and agreement between the measurements obtained from manually and automatically isolated aneurysms were found (neck width: p = 0.43, sac height: p = 0.95). The proposed method allows the automated isolation of intracranial aneurysms, eliminating the interobserver variability. On average, the computational cost of the automated method (2 min 36 s) was similar to the time required by a manual operator (measurement by clinicians: 2 min 51 s, manual isolation: 2 min 21 s) while eliminating human interaction. The automated measurements are independent of the viewing angle, eliminating any bias or difference between observer criteria. Finally, the qualitative assessment of the results showed acceptable agreement between manually and automatically isolated aneurysms.

  17. Non-input analysis for incomplete trapping irreversible tracer with PET.

    PubMed

    Ohya, Tomoyuki; Kikuchi, Tatsuya; Fukumura, Toshimitsu; Zhang, Ming-Rong; Irie, Toshiaki

    2013-07-01

    When using metabolic trapping type tracers, the tracers are not always trapped in the target tissue; i.e., some are completely trapped in the target, but others can be eliminated from the target tissue at a measurable rate. The tracers that can be eliminated are termed 'incomplete trapping irreversible tracers'. These incomplete trapping irreversible tracers may be clinically useful when the tracer β-value, the ratio of the tracer (metabolite) elimination rate to the tracer efflux rate, is under approximately 0.1. In this study, we propose a non-input analysis for incomplete trapping irreversible tracers based on the shape analysis (Shape), a non-input analysis used for irreversible tracers. A Monte Carlo simulation study based on experimental monkey data with two actual PET tracers (a complete trapping irreversible tracer [(11)C]MP4A and an incomplete trapping irreversible tracer [(18)F]FEP-4MA) was performed to examine the effects of the environmental error and the tracer elimination rate on the estimation of the k3-parameter (corresponds to metabolic rate) using Shape (original) and modified Shape (M-Shape) analysis. The simulation results were also compared with the experimental results obtained with the two PET tracers. When the tracer β-value was over 0.03, the M-Shape method was superior to the Shape method for the estimation of the k3-parameter. The simulation results were also in reasonable agreement with the experimental ones. M-Shape can be used as the non-input analysis of incomplete trapping irreversible tracers for PET study. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Polarization Smoothing Generalized MUSIC Algorithm with Polarization Sensitive Array for Low Angle Estimation.

    PubMed

    Tan, Jun; Nie, Zaiping

    2018-05-12

    Direction of Arrival (DOA) estimation of low-altitude targets is difficult due to the multipath coherent interference from the ground reflection image of the targets, especially for very high frequency (VHF) radars, which have antennae that are severely restricted in terms of aperture and height. The polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing and generalized MUSIC algorithm for polarization sensitive arrays (PSAs), was proposed to solve this problem in this paper. Firstly, the polarization smoothing pre-processing was exploited to eliminate the coherence between the direct and the specular signals. Secondly, we constructed the generalized MUSIC algorithm for low angle estimation. Finally, based on the geometry information of the symmetry multipath model, the proposed algorithm was introduced to convert the two-dimensional searching into one-dimensional searching, thus reducing the computational burden. Numerical results were provided to verify the effectiveness of the proposed method, showing that the proposed algorithm has significantly improved angle estimation performance in the low-angle area compared with the available methods, especially when the grazing angle is near zero.

  19. Automated segmentation and isolation of touching cell nuclei in cytopathology smear images of pleural effusion using distance transform watershed method

    NASA Astrophysics Data System (ADS)

    Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko

    2017-06-01

    The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation and nuclei heterogeneity, such as poor contrast, inconsistent staining, cell variation and cell overlap. In this paper, we propose a watershed-based method that is capable of segmenting the nuclei of a variety of cells from cytology pleural fluid smear images. Firstly, the original image is converted to grayscale and enhanced by adjusting and equalizing the intensity using histogram equalization. Next, the cell nuclei are segmented by Otsu thresholding into a binary image. Undesirable artifacts are eliminated using morphological operations. Finally, the distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method was tested on 25 Papanicolaou (Pap) stained pleural fluid images, and its accuracy is 92%. The method is relatively simple, and the results are very promising.
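
    A hedged sketch of the pipeline with scikit-image: grayscale conversion and histogram equalization, Otsu thresholding, small-object removal, then a distance-transform watershed to split touching nuclei. Parameter values and the assumption that nuclei are darker than the background are illustrative, not taken from the paper.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.color import rgb2gray
        from skimage.exposure import equalize_hist
        from skimage.filters import threshold_otsu
        from skimage.morphology import remove_small_objects
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def segment_nuclei(rgb_image, min_size=64, min_distance=10):
            gray = equalize_hist(rgb2gray(rgb_image))
            binary = gray < threshold_otsu(gray)             # nuclei assumed darker than background
            binary = remove_small_objects(binary, min_size=min_size)

            distance = ndi.distance_transform_edt(binary)
            coords = peak_local_max(distance, min_distance=min_distance, labels=binary)
            markers = np.zeros(distance.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

            return watershed(-distance, markers, mask=binary)   # one label per nucleus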

  20. A Novel Complex-Coefficient In-Band Interference Suppression Algorithm for Cognitive Ultra-Wide Band Wireless Sensors Networks.

    PubMed

    Xiong, Hailiang; Zhang, Wensheng; Xu, Hongji; Du, Zhengfeng; Tang, Huaibin; Li, Jing

    2017-05-25

    With the rapid development of wireless communication systems and electronic techniques, the limited frequency spectrum resources are shared with various wireless devices, leading to a crowded and challenging coexistence circumstance. Cognitive radio (CR) and ultra-wide band (UWB), as sophisticated wireless techniques, have been considered as significant solutions to solve the harmonious coexistence issues. UWB wireless sensors can share the spectrum with primary user (PU) systems without harmful interference. The in-band interference of UWB systems should be considered because such interference can severely affect the transmissions of UWB wireless systems. In order to solve the in-band interference issues for UWB wireless sensor networks (WSN), a novel in-band narrow band interferences (NBIs) elimination scheme is proposed in this paper. The proposed narrow band interferences suppression scheme is based on a novel complex-coefficient adaptive notch filter unit with a single constrained zero-pole pair. Moreover, in order to reduce the computation complexity of the proposed scheme, an adaptive complex-coefficient iterative method based on two-order Taylor series is designed. To cope with multiple narrow band interferences, a linear cascaded high order adaptive filter and a cyclic cascaded high order matrix adaptive filter (CCHOMAF) interference suppression algorithm based on the basic adaptive notch filter unit are also presented. The theoretical analysis and numerical simulation results indicate that the proposed CCHOMAF algorithm can achieve better performance in terms of average bit error rate for UWB WSNs. The proposed in-band NBIs elimination scheme can significantly improve the reception performance of low-cost and low-power UWB wireless systems.

  2. A possibility of avoiding surface roughness due to insects

    NASA Technical Reports Server (NTRS)

    Wortmann, F. X.

    1984-01-01

    Discussion of a method for eliminating turbulence caused by the formation of insect roughness upon the leading edges and fuselage, particularly in aircraft using BLC. The proposed technique foresees the use of elastic surfaces on which insect roughness cannot form. The operational characteristics of highly elastic rubber surface fastened to the wing leading edges and fuselage edges are examined. Some preliminary test results are presented. The technique is seen to be advantageous primarily for short-haul operations.

  3. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration that relies heavily on a high precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact which is fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented by either measuring the total length of the artefact with a higher-precision CMM or calibrating the single point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
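
    As a small illustration of the final step mentioned above (building the error compensation curve by spline interpolation), the hedged sketch below fits a cubic spline through hypothetical per-position error estimates and applies it as a correction; the positions, error values and unit conversion are invented for the example and do not come from the paper.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical single-point errors (micrometres) determined at arranged positions
    # along the axis (millimetres), e.g. from the artefact equation set.
    positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
    errors    = np.array([0.0,   1.2,   2.1,   1.8,   0.9,  -0.4])

    compensation = CubicSpline(positions, errors)

    def compensate(reading_mm):
        """Subtract the interpolated scale error from a raw CMM reading."""
        return reading_mm - compensation(reading_mm) * 1e-3   # convert um to mm

    print(compensate(250.0))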

  4. A new method for spatial structure detection of complex inner cavities based on 3D γ-photon imaging

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Liu, Jiao; Chen, Hao

    2018-05-01

    This paper presents a new three-dimensional (3D) imaging method for detecting the spatial structure of a complex inner cavity based on positron annihilation and γ-photon detection. This method first marks carrier solution by a certain radionuclide and injects it into the inner cavity where positrons are generated. Subsequently, γ-photons are released from positron annihilation, and the γ-photon detector ring is used for recording the γ-photons. Finally, the two-dimensional (2D) image slices of the inner cavity are constructed by the ordered-subset expectation maximization scheme and the 2D image slices are merged to the 3D image of the inner cavity. To eliminate the artifact in the reconstructed image due to the scattered γ-photons, a novel angle-traversal model is proposed for γ-photon single-scattering correction, in which the path of the single scattered γ-photon is analyzed from a spatial geometry perspective. Two experiments are conducted to verify the effectiveness of the proposed correction model and the advantage of the proposed testing method in detecting the spatial structure of the inner cavity, including the distribution of gas-liquid multi-phase mixture inside the inner cavity. The above two experiments indicate the potential of the proposed method as a new tool for accurately delineating the inner structures of industrial complex parts.

  5. Tracking Multiple Video Targets with an Improved GM-PHD Tracker

    PubMed Central

    Zhou, Xiaolong; Yu, Hui; Liu, Honghai; Li, Youfu

    2015-01-01

    Tracking multiple moving targets from a video plays an important role in many vision-based robotic applications. In this paper, we propose an improved Gaussian mixture probability hypothesis density (GM-PHD) tracker with weight penalization to effectively and accurately track multiple moving targets from a video. First, an entropy-based birth intensity estimation method is incorporated to eliminate the false positives caused by noisy video data. Then, a weight-penalized method with multi-feature fusion is proposed to accurately track the targets in close movement. For targets without occlusion, a weight matrix that contains all updated weights between the predicted target states and the measurements is constructed, and a simple, but effective method based on total weight and predicted target state is proposed to search the ambiguous weights in the weight matrix. The ambiguous weights are then penalized according to the fused target features that include spatial-colour appearance, histogram of oriented gradient and target area and further re-normalized to form a new weight matrix. With this new weight matrix, the tracker can correctly track the targets in close movement without occlusion. For targets with occlusion, a robust game-theoretical method is used. Finally, the experiments conducted on various video scenarios validate the effectiveness of the proposed penalization method and show the superior performance of our tracker over the state of the art. PMID:26633422

  6. Reversed Phase Column HPLC-ICP-MS Conditions for Arsenic Speciation Analysis of Rice Flour.

    PubMed

    Narukawa, Tomohiro; Matsumoto, Eri; Nishimura, Tsutomu; Hioki, Akiharu

    2015-01-01

    New measurement conditions for arsenic speciation analysis of rice flour were developed using HPLC-ICP-MS equipped with a reversed phase ODS column. Eight arsenic species, namely, arsenite [As(III)], arsenate [As(V)], monomethylarsonic acid (MMAA), dimethylarsinic acid (DMAA), trimethylarsine oxide (TMAO), tetramethylarsonium (TeMA), arsenobetaine (AsB) and arsenocholine (AsC), were separated and determined under the proposed conditions. In particular, As(III) and MMAA and DMAA and AsB were completely separated using a newly proposed eluent containing ammonium dihydrogen phosphate. Importantly, the sensitivity changes, in particular those of As(V) and As(III) caused by coexisting elements and by complex matrix composition, which had been problematical in previously reported methods, were eliminated. The new eluent can be applied to C8, C18 and C30 ODS columns with the same effectiveness and with excellent repeatability. The proposed analytical method was successfully applied to extracts of rice flour certified reference materials.

  7. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm can easily converge to a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global Minmax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.
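
    For readers unfamiliar with the underlying search, the sketch below shows the incremental global k-means skeleton that the record builds on: one center is added at a time, and every data point is tried as the candidate position for the new center. The MinMax weighting and the singleton-cluster elimination that the paper adds are not reproduced; names and toy data are illustrative.

    import numpy as np

    def lloyd(X, centers, iters=50):
        """Plain k-means (Lloyd) refinement from given initial centers."""
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(len(centers))])
            if np.allclose(new, centers):
                break
            centers = new
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return centers, d.min(1).sum()            # centers and clustering error (SSE)

    def global_kmeans(X, K):
        """Incremental global k-means: add one center at a time, trying every data
        point as the candidate position for the new center and keeping the best run."""
        centers = X.mean(0, keepdims=True)
        for k in range(2, K + 1):
            best = None
            for cand in X:
                c, sse = lloyd(X, np.vstack([centers, cand]))
                if best is None or sse < best[1]:
                    best = (c, sse)
            centers = best[0]
        return centers

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ((0, 0), (3, 0), (0, 3))])
    print(global_kmeans(X, 3))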

  8. Nonlinear analysis of switched semi-active controlled systems

    NASA Astrophysics Data System (ADS)

    Eslaminasab, Nima; Vahid A., Orang; Golnaraghi, Farid

    2011-02-01

    Semi-active systems improve suspension performance of the vehicles more effectively than conventional passive systems by simultaneously improving ride comfort and road handling. Also, because of size, weight, price and performance advantages, they have gained more interest over the active as well as passive systems. Probably the most neglected aspect of the semi-active on-off control systems and strategies is the effects of the added nonlinearities of those systems, which are introduced and analysed in this paper. To do so, numerical techniques, analytical method of averaging and experimental analysis are deployed. In this paper, a new method to analyse, calculate and compare the performances of the semi-active controlled systems is proposed; further, a new controller based on the observations of actual test data is proposed to eliminate the adverse effects of added nonlinearities. The significance of the proposed new system is the simplicity of the algorithm and ease of implementation. In fact, this new semi-active control strategy could be easily adopted and used with most of the existing semi-active control systems.

  9. Pollen Image Recognition Based on DGDB-LBP Descriptor

    NASA Astrophysics Data System (ADS)

    Han, L. P.; Xie, Y. H.

    2018-01-01

    In this paper, we propose DGDB-LBP, a local binary pattern descriptor based on pixel blocks in the dominant gradient direction. Differing from traditional LBP and its variants, DGDB-LBP encodes by comparing the main gradient magnitude of each block rather than a single pixel value or the average of pixel blocks; in doing so, it reduces the influence of noise on pollen images and eliminates redundant, non-informative features. In order to fully describe the texture features of pollen images and analyze them at multiple scales, we propose a new sampling strategy that uses three types of operators to extract radial, angular and multiple texture features at different scales. Considering that pollen images exhibit some degree of rotation under the microscope, we propose an adaptive encoding direction, which is determined by the texture distribution of the local region. Experimental results on the Pollenmonitor dataset show that the average correct recognition rate of our method is superior to that of other recent pollen recognition methods.
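
    For context, the sketch below computes the classic 8-neighbour LBP code that DGDB-LBP departs from; the paper's block-based, dominant-gradient-direction encoding and multi-scale operators are not reproduced here, and the toy patch is invented.

    import numpy as np

    def lbp_3x3(img):
        """Classic 8-neighbour local binary pattern on a 2-D grayscale array.
        Each pixel is encoded by thresholding its 8 neighbours against the centre."""
        img = np.asarray(img, dtype=float)
        c = img[1:-1, 1:-1]
        # neighbour offsets in a fixed (clockwise) order
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offs):
            nb = img[1 + dy : img.shape[0] - 1 + dy, 1 + dx : img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(np.uint8) << bit
        return code

    rng = np.random.default_rng(2)
    patch = rng.integers(0, 256, (8, 8))
    hist = np.bincount(lbp_3x3(patch).ravel(), minlength=256)  # texture histogram
    print(hist.sum())   # (8-2)*(8-2) = 36 coded pixels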

  10. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are sure not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  11. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the general expression of the nonlinear autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Test statistics are chosen to assess the contributions of both the newly introduced and the existing variables to the model, and these are used to decide which model variables to retain or eliminate. The optimal model is thus obtained through goodness-of-fit measurement or significance testing. Simulation and classic time-series data experiments show that the proposed method is simple, reliable and applicable to practical engineering.
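
    A rough, assumption-laden sketch of the idea follows: candidate linear and nonlinear (product-of-lags) terms are built from a time series, and a greedy forward stepwise search keeps the terms that most reduce the residual sum of squares. The paper's GNAR formulation, partial-autocorrelation initialization and formal significance tests are not reproduced; the thresholds, lag depth and toy process are illustrative.

    import numpy as np
    from itertools import combinations_with_replacement

    def build_terms(y, max_lag=3):
        """Candidate regressors for a nonlinear AR model: lags and pairwise products."""
        n = len(y)
        lags = {f"y[t-{k}]": y[max_lag - k : n - k] for k in range(1, max_lag + 1)}
        terms = dict(lags)
        for (a, va), (b, vb) in combinations_with_replacement(lags.items(), 2):
            terms[f"{a}*{b}"] = va * vb
        return y[max_lag:], terms

    def forward_stepwise(target, terms, tol=1e-3):
        """Greedily add the term giving the largest drop in residual sum of squares."""
        chosen, X = [], np.ones((len(target), 1))          # start with an intercept
        rss = ((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2).sum()
        while True:
            best_name, best_rss = None, rss
            for name, col in terms.items():
                if name in chosen:
                    continue
                Xc = np.column_stack([X, col])
                r = ((target - Xc @ np.linalg.lstsq(Xc, target, rcond=None)[0]) ** 2).sum()
                if r < best_rss:
                    best_name, best_rss = name, r
            if best_name is None or rss - best_rss < tol * rss:
                break
            chosen.append(best_name)
            X = np.column_stack([X, terms[best_name]])
            rss = best_rss
        return chosen

    rng = np.random.default_rng(3)
    y = np.zeros(500)
    for t in range(2, 500):                                # toy nonlinear AR process
        y[t] = 0.5 * y[t-1] - 0.4 * y[t-1] * y[t-2] + 0.5 * rng.standard_normal()
    target, terms = build_terms(y)
    print(forward_stepwise(target, terms))   # typically picks y[t-1] and y[t-1]*y[t-2]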

  12. A robust recognition and accurate locating method for circular coded diagonal target

    NASA Astrophysics Data System (ADS)

    Bao, Yunna; Shang, Yang; Sun, Xiaoliang; Zhou, Jiexin

    2017-10-01

    As a category of special control points which can be automatically identified, artificial coded targets have been widely developed in the fields of computer vision, photogrammetry, augmented reality, etc. In this paper, a new circular coded target designed by RockeTech technology Corp. Ltd is analyzed and studied, which is called the circular coded diagonal target (CCDT). A novel detection and recognition method with good robustness is proposed in the paper, and implemented on Visual Studio. In this algorithm, first, the ellipse features of the center circle are used for rough positioning. Then, according to the characteristics of the center diagonal target, a circular frequency filter is designed to choose the correct center circle and eliminate non-target noise. The precise positioning of the coded target is done by the correlation coefficient fitting extreme value method. Finally, the coded target recognition is achieved by decoding the binary sequence in the outer ring of the extracted target. To test the proposed algorithm, this paper has carried out simulation experiments and real experiments. The results show that the CCDT recognition and accurate locating method proposed in this paper can robustly recognize and accurately locate the targets in complex and noisy backgrounds.

  13. Applying simulation model to uniform field space charge distribution measurements by the PEA method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Salama, M.M.A.

    1996-12-31

    Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, the surface charge cannot be represented well by this method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty a simulation model is therefore proposed. This paper shows their attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation for the paper, the charge distribution originated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.

  14. Relativistic calculation of nuclear magnetic shielding using normalized elimination of the small component

    NASA Astrophysics Data System (ADS)

    Kudo, K.; Maeda, H.; Kawakubo, T.; Ootani, Y.; Funaki, M.; Fukui, H.

    2006-06-01

    The normalized elimination of the small component (NESC) theory, recently proposed by Filatov and Cremer [J. Chem. Phys. 122, 064104 (2005)], is extended to include magnetic interactions and applied to the calculation of the nuclear magnetic shielding in HX (X =F,Cl,Br,I) systems. The NESC calculations are performed at the levels of the zeroth-order regular approximation (ZORA) and the second-order regular approximation (SORA). The calculations show that the NESC-ZORA results are very close to the NESC-SORA results, except for the shielding of the I nucleus. Both the NESC-ZORA and NESC-SORA calculations yield very similar results to the previously reported values obtained using the relativistic infinite-order two-component coupled Hartree-Fock method. The difference between NESC-ZORA and NESC-SORA results is significant for the shieldings of iodine.

  15. A frequency-domain approach to improve ANNs generalization quality via proper initialization.

    PubMed

    Chaari, Majdi; Fekih, Afef; Seibi, Abdennour C; Hmida, Jalel Ben

    2018-08-01

    The ability to train a network without memorizing the input/output data, thereby allowing a good predictive performance when applied to unseen data, is paramount in ANN applications. In this paper, we propose a frequency-domain approach to evaluate the network initialization in terms of quality of training, i.e., generalization capabilities. As an alternative to the conventional time-domain methods, the proposed approach eliminates the approximate nature of network validation using an excess of unseen data. The benefits of the proposed approach are demonstrated using two numerical examples, where two trained networks performed similarly on the training and the validation data sets, yet they revealed a significant difference in prediction accuracy when tested using a different data set. This observation is of utmost importance in modeling applications requiring a high degree of accuracy. The efficiency of the proposed approach is further demonstrated on a real-world problem, where unlike other initialization methods, a more conclusive assessment of generalization is achieved. On the practical front, subtle methodological and implementational facets are addressed to ensure reproducibility and pinpoint the limitations of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. An AST-ELM Method for Eliminating the Influence of Charging Phenomenon on ECT.

    PubMed

    Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao

    2017-12-09

    Electrical capacitance tomography (ECT) is a promising imaging technology for permittivity distributions in multiphase flow. To reduce the effect of the charging phenomenon on ECT measurement, an improved extreme learning machine method combined with adaptive soft-thresholding (AST-ELM) is presented and studied for image reconstruction. This method provides a nonlinear mapping model between the capacitance values and medium distributions by using machine learning rather than an electromagnetic-sensitivity mechanism. Both simulation and experimental tests are carried out to validate the performance of the presented method, and reconstructed images are evaluated by relative error and correlation coefficient. The results illustrate that the image reconstruction accuracy of the proposed AST-ELM method is greatly improved compared with that of conventional methods under the condition of a charging object.

  17. An AST-ELM Method for Eliminating the Influence of Charging Phenomenon on ECT

    PubMed Central

    Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao

    2017-01-01

    Electrical capacitance tomography (ECT) is a promising imaging technology for permittivity distributions in multiphase flow. To reduce the effect of the charging phenomenon on ECT measurement, an improved extreme learning machine method combined with adaptive soft-thresholding (AST-ELM) is presented and studied for image reconstruction. This method provides a nonlinear mapping model between the capacitance values and medium distributions by using machine learning rather than an electromagnetic-sensitivity mechanism. Both simulation and experimental tests are carried out to validate the performance of the presented method, and reconstructed images are evaluated by relative error and correlation coefficient. The results illustrate that the image reconstruction accuracy of the proposed AST-ELM method is greatly improved compared with that of conventional methods under the condition of a charging object. PMID:29232850

  18. Local-learning-based neuron selection for grasping gesture prediction in motor brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang

    2013-04-01

    Objective. The high-dimensional neural recordings bring computational challenges to movement decoding in motor brain machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activities relate to the execution of a certain movement task. This paper proposes to use a local-learning-based method to perform neuron selection for the gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced to the objective function for sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 neurons out of 70 selected by the proposed method could achieve over 95% of the full recording's decoding accuracy of gesture predictions, no matter which different decoding methods are used (support vector machine and K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method can better eliminate the irrelevant neurons with near-zero weights and provides the important neuron subset with the best decoding performance in statistics. The weights of important neurons converge usually within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance along a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. The proposed algorithm effectively ascertains the neuronal importance without assuming any coding model and provides a high performance with different decoding models. It shows better robustness of identifying the important neurons with noisy signals presented. The low demand of computational resources which, reflected by the fast convergence, indicates the feasibility of the method applied in portable BMI systems. The ascertainment of the important neurons helps to inspect neural patterns visually associated with the movement task. The elimination of irrelevant neurons greatly reduces the computational burden of mBMI systems and maintains the performance with better robustness.

  19. Ultrasensitive determination of cadmium in seawater by hollow fiber supported liquid membrane extraction coupled with graphite furnace atomic absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Peng, Jin-feng; Liu, Rui; Liu, Jing-fu; He, Bin; Hu, Xia-lin; Jiang, Gui-bin

    2007-05-01

    A new procedure, based on hollow fiber supported liquid membrane preconcentration coupled with graphite furnace atomic absorption spectrometry (GFAAS) detection, was developed for the determination of trace Cd in seawater samples. With 1-octanol that contained a mixture of dithizone (carrier) and oleic acid immobilized in the pores of the polypropylene hollow fiber as a liquid membrane, Cd was selectively extracted from water samples into 0.05 M HNO 3 that filled the lumen of the hollow fiber as a stripping solution. The main extraction related parameters were optimized, and the effects of salinity and some coexisting interferants were also evaluated. Under the optimum extraction conditions, an enrichment factor of 387 was obtained for a 100-mL sample solution. In combination with graphite furnace atomic absorption spectrometry, a very low detection limit (0.8 ng L - 1 ) and a relative standard deviation (2.5% at 50 ng L - 1 level) were achieved. Five seawater samples were analyzed by the proposed method without dilution, with detected Cd concentration in the range of 56.4-264.8 ng L - 1 and the relative spiked recoveries over 89%. For comparison, these samples were also analyzed by the Inductively Coupled Plasma Mass Spectrometry (ICP-MS) method after a 10-fold dilution for matrix effect elimination. Statistical analysis with a one-way ANOVA shows no significant differences (at 0.05 level) between the results obtained by the proposed and ICP-MS methods. Additionally, analysis of certified reference materials (GBW (E) 080040) shows good agreement with the certified value. These results indicate that this present method is very sensitive and reliable, and can effectively eliminate complex matrix interferences in seawater samples.

  20. An optimized strain demodulation method for PZT driven fiber Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Sheng, Wenjuan; Peng, G. D.; Liu, Yang; Yang, Ning

    2015-08-01

    An optimized strain demodulation method based on a piezoelectric transducer (PZT) driven fiber Fabry-Perot (FFP) filter is proposed and experimentally demonstrated. Using a parallel processing mode to drive the PZT continuously, the hysteresis effect is eliminated and the system demodulation rate is increased. Furthermore, an AC-DC compensation method is developed to address the intrinsic nonlinear relationship between the displacement and the driving voltage of the PZT. The experimental results show that the actual demodulation rate is improved from 15 Hz to 30 Hz, the random error of the strain measurement is decreased by 95%, and the deviation between the compensated test values and the theoretical values is less than 1 pm/με.

  1. Possible roles of mechanical cell elimination intrinsic to growing tissues from the perspective of tissue growth efficiency and homeostasis.

    PubMed

    Lee, Sang-Woo; Morishita, Yoshihiro

    2017-07-01

    Cell competition is a phenomenon originally described as the competition between cell populations with different genetic backgrounds; losing cells with lower fitness are eliminated. With the progress in identification of related molecules, some reports described the relevance of cell mechanics during elimination. Furthermore, recent live imaging studies have shown that even in tissues composed of genetically identical cells, a non-negligible number of cells are eliminated during growth. Thus, mechanical cell elimination (MCE) as a consequence of mechanical cellular interactions is an unavoidable event in growing tissues and a commonly observed phenomenon. Here, we studied MCE in a genetically-homogeneous tissue from the perspective of tissue growth efficiency and homeostasis. First, we propose two quantitative measures, cell and tissue fitness, to evaluate cellular competitiveness and tissue growth efficiency, respectively. By mechanical tissue simulation in a pure population where all cells have the same mechanical traits, we clarified the dependence of cell elimination rate or cell fitness on different mechanical/growth parameters. In particular, we found that geometrical (specifically, cell size) and mechanical (stress magnitude) heterogeneities are common determinants of the elimination rate. Based on these results, we propose possible mechanical feedback mechanisms that could improve tissue growth efficiency and density/stress homeostasis. Moreover, when cells with different mechanical traits are mixed (e.g., in the presence of phenotypic variation), we show that MCE could drive a drastic shift in cell trait distribution, thereby improving tissue growth efficiency through the selection of cellular traits, i.e. intra-tissue "evolution". Along with the improvement of growth efficiency, cell density, stress state, and phenotype (mechanical traits) were also shown to be homogenized through growth. More theoretically, we propose a mathematical model that approximates cell competition dynamics, by which the time evolution of tissue fitness and cellular trait distribution can be predicted without directly simulating a cell-based mechanical model.

  2. Possible roles of mechanical cell elimination intrinsic to growing tissues from the perspective of tissue growth efficiency and homeostasis

    PubMed Central

    2017-01-01

    Cell competition is a phenomenon originally described as the competition between cell populations with different genetic backgrounds; losing cells with lower fitness are eliminated. With the progress in identification of related molecules, some reports described the relevance of cell mechanics during elimination. Furthermore, recent live imaging studies have shown that even in tissues composed of genetically identical cells, a non-negligible number of cells are eliminated during growth. Thus, mechanical cell elimination (MCE) as a consequence of mechanical cellular interactions is an unavoidable event in growing tissues and a commonly observed phenomenon. Here, we studied MCE in a genetically-homogeneous tissue from the perspective of tissue growth efficiency and homeostasis. First, we propose two quantitative measures, cell and tissue fitness, to evaluate cellular competitiveness and tissue growth efficiency, respectively. By mechanical tissue simulation in a pure population where all cells have the same mechanical traits, we clarified the dependence of cell elimination rate or cell fitness on different mechanical/growth parameters. In particular, we found that geometrical (specifically, cell size) and mechanical (stress magnitude) heterogeneities are common determinants of the elimination rate. Based on these results, we propose possible mechanical feedback mechanisms that could improve tissue growth efficiency and density/stress homeostasis. Moreover, when cells with different mechanical traits are mixed (e.g., in the presence of phenotypic variation), we show that MCE could drive a drastic shift in cell trait distribution, thereby improving tissue growth efficiency through the selection of cellular traits, i.e. intra-tissue “evolution”. Along with the improvement of growth efficiency, cell density, stress state, and phenotype (mechanical traits) were also shown to be homogenized through growth. More theoretically, we propose a mathematical model that approximates cell competition dynamics, by which the time evolution of tissue fitness and cellular trait distribution can be predicted without directly simulating a cell-based mechanical model. PMID:28704373

  3. Using Doppler Shifts of GPS Signals To Measure Angular Speed

    NASA Technical Reports Server (NTRS)

    Campbell, Charles E., Jr.

    2006-01-01

    A method has been proposed for extracting information on the rate of rotation of an aircraft, spacecraft, or other body from differential Doppler shifts of Global Positioning System (GPS) signals received by antennas mounted on the body. In principle, the method should be capable of yielding low-noise estimates of rates of rotation. The method could eliminate the need for gyroscopes to measure rates of rotation. The method is based on the fact that for a given signal of frequency ft transmitted by a given GPS satellite, the differential Doppler shift is attributable to the difference between those components of the instantaneous translational velocities of the antennas that lie along the line of sight from the antennas to the GPS satellite.
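
    The geometry can be checked numerically. For two antennas separated by a baseline b on a body rotating at angular velocity ω, the differential velocity is ω × b, so the differential Doppler along a line of sight û is (f_t/c)(ω × b)·û = (f_t/c) ω·(b × û); each satellite therefore contributes one linear equation in ω. A single baseline only senses the rotation components perpendicular to it, so the hedged sketch below uses two antenna pairs and recovers ω by least squares; all satellite directions and baselines are invented for the example.

    import numpy as np

    C = 299_792_458.0            # speed of light, m/s
    FT = 1_575_420_000.0         # GPS L1 carrier frequency, Hz

    def diff_doppler(omega, baseline, los):
        """Differential Doppler shift between two antennas separated by `baseline`
        on a body rotating at angular velocity `omega`, for unit line of sight `los`."""
        return FT / C * np.dot(np.cross(omega, baseline), los)

    # True body rotation rate (rad/s) and two antenna baselines in body axes (m).
    omega_true = np.array([0.02, -0.05, 0.10])
    baselines = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

    # Unit line-of-sight vectors to several GPS satellites (made up for the example).
    los = np.array([[0.6, 0.3, 0.74], [-0.2, 0.8, 0.57], [0.1, -0.5, 0.86], [0.7, -0.1, 0.71]])
    los /= np.linalg.norm(los, axis=1, keepdims=True)

    # (omega x b) . u = omega . (b x u): stack one linear equation per baseline/satellite.
    rows, meas = [], []
    for b in baselines:
        for u in los:
            rows.append(FT / C * np.cross(b, u))
            meas.append(diff_doppler(omega_true, b, u))
    omega_est, *_ = np.linalg.lstsq(np.array(rows), np.array(meas), rcond=None)
    print(omega_est)             # ~ [0.02, -0.05, 0.10]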

  4. Parasites and vectors carry no passport: how to fund cross-border and regional efforts to achieve malaria elimination

    PubMed Central

    2012-01-01

    Background Tremendous progress has been made in the last ten years in reducing morbidity and mortality caused by malaria, in part because of increases in global funding for malaria control and elimination. Today, many countries are striving for malaria elimination. However, a major challenge is the neglect of cross-border and regional initiatives in malaria control and elimination. This paper seeks to better understand Global Fund support for multi-country initiatives. Methods Documents and proposals were extracted and reviewed from two main sources, the Global Fund website and Aidspan.org. Documents and reports from the Global Fund Technical Review Panel, Board, and Secretariat documents such as guidelines and proposal templates were reviewed to establish the type of policies enacted and guidance provided from the Global Fund on multi-country initiatives and applications. From reviewing this information, the researchers created 29 variables according to eight dimensions to use in a review of Round 10 applications. All Round 10 multi-country applications (for HIV, malaria and tuberculosis) and all malaria multi-country applications (6) from Rounds 1 – 10 were extracted from the Global Fund website. A blind review was conducted of Round 10 applications using the 29 variables as a framework, followed by a review of four of the six successful malaria multi-country grant applications from Rounds 1 – 10. Findings During Rounds 3 – 10 of the Global Fund, only 5.8% of grants submitted were for multi-country initiatives. Out of 83 multi-country proposals submitted, 25.3% were approved by the Technical Review Panel (TRP) for funding, compared to 44.9% of single-country applications. The majority of approved multi-country applications were for HIV (76.2%), followed by malaria (19.0%), then tuberculosis (4.8%). TRP recommendations resulted in improvements to application forms, although guidance was generally vague. The in-depth review of Round 10 multi-country proposals showed that applicants described their projects in one of two ways: a regional ‘network approach’ by which benefits are derived from economies of scale or from enhanced opportunities for mutual support and learning or the development of common policies and approaches; or a ‘cross-border’ approach for enabling activities to be more effectively delivered towards border-crossing populations or vectors. In Round 10, only those with a ‘network approach’ were recommended for funding. The Global Fund has only ever approved six malaria multi-country applications. Four approved applications stated strong arguments for a multi-country initiative, combining both ‘cross-border’ and ‘network’ approaches. Conclusion With the cancellation of Round 11 and the proposal that the Global Fund adopt a more targeted and strategic approach to funding, the time is opportune for the Global Fund to develop a clear consensus about the key factors and criteria for funding malaria specific multi-country initiatives. This study found that currently there was a lack of guidance on the key features that a successful multi-country proposal needs to be approved and that applications directed towards the ‘network’ approach were most successful in Round 10. This type of multi-country proposal may favour other diseases such as HIV, whereas the need for malaria control and elimination is different, focusing on cross-border coordination and delivery of interventions to specific groups. 
The Global Fund should seek to address these issues and give better guidance to countries and regions and investigate disease-specific calls for multi-country and regional applications. PMID:23057734

  5. Research on infrared ship detection method in sea-sky background

    NASA Astrophysics Data System (ADS)

    Tang, Da; Sun, Gang; Wang, Ding-he; Niu, Zhao-dong; Chen, Zeng-ping

    2013-09-01

    An approach to infrared ship detection based on sea-sky-line (SSL) detection, ROI extraction and feature recognition is proposed in this paper. Firstly, considering that far ships are expected to be adjacent to the SSL, the SSL is detected to find potential target areas. A Radon transform is performed on the gradient image to choose candidate SSLs, and the detection result is given by fuzzy synthetic evaluation values. Secondly, in view of the recognizability condition that there should be enough difference between target and background in an infrared image, two gradient masks have been created and improved as practical guidelines for eliminating false alarms. Thirdly, ROIs near the SSL are extracted by using a multi-grade segmentation and fusion method after image sharpening, and unsuitable candidates are screened out according to the gradient masks and ROI shape. Finally, we segment the remaining ROIs by two-stage modified OTSU, and calculate a target confidence as a standard measuring the facticity of the target. Compared with other ship detection methods, the proposed method is suitable for bipolar targets, offers good practicability and accuracy, and achieves a satisfying detection speed. Detection experiments with 200 thousand frames show that the proposed method is widely applicable and powerful in resistance to interferences and noises, with a detection rate of above 95%, which satisfies engineering needs commendably.

  6. Automatic detection of Martian dark slope streaks by machine learning using HiRISE images

    NASA Astrophysics Data System (ADS)

    Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui

    2017-07-01

    Dark slope streaks (DSSs) on the Martian surface are one of the active geologic features that can be observed on Mars nowadays. The detection of DSS is a prerequisite for studying its appearance, morphology, and distribution to reveal its underlying geological mechanisms. In addition, increasingly massive amounts of Mars high resolution data are now available. Hence, an automatic detection method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method by combining interest region extraction and machine learning techniques. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions to calculate the Local Binary Pattern (LBP) feature and train a DSS classifier using the Adaboost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results utilizing 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of our proposed method is 92.4%, with a true positive rate of 79.1% and false positive rate of 3.7%, which in particular indicates great performance of the method at eliminating non-DSS regions.

  7. Elimination of initial stress-induced curvature in a micromachined bi-material composite-layered cantilever

    NASA Astrophysics Data System (ADS)

    Liu, Ruiwen; Jiao, Binbin; Kong, Yanmei; Li, Zhigang; Shang, Haiping; Lu, Dike; Gao, Chaoqun; Chen, Dapeng

    2013-09-01

    Micro-devices with a bi-material-cantilever (BMC) commonly suffer initial curvature due to the mismatch of residual stress. Traditional corrective methods to reduce the residual stress mismatch generally involve the development of different material deposition recipes. In this paper, a new method for reducing residual stress mismatch in a BMC is proposed based on various previously developed deposition recipes. An initial material film is deposited using two or more developed deposition recipes. This first film is designed to introduce a stepped stress gradient, which is then balanced by overlapping a second material film on the first and using appropriate deposition recipes to form a nearly stress-balanced structure. A theoretical model is proposed based on both the moment balance principle and total equal strain at the interface of two adjacent layers. Experimental results and analytical models suggest that the proposed method is effective in producing multi-layer micro cantilevers that display balanced residual stresses. The method provides a generic solution to the problem of mismatched initial stresses which universally exists in micro-electro-mechanical systems (MEMS) devices based on a BMC. Moreover, the method can be incorporated into a MEMS design automation package for efficient design of various multiple material layer devices from MEMS material library and developed deposition recipes.

  8. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Its powerful nondestructive characteristics are attracting more and more research into the study of computed tomography (CT) for dimensional metrology, which offers a practical alternative to the common measurement methods. However, the inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect in the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a punishment term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility.
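
    As a hedged illustration of the entropy-minimization idea (not the paper's exact exponential model, penalty term or optimizer), the sketch below hardens the sinogram of a toy disc phantom with a saturating response, then grid-searches the parameter of an inverse-exponential correction so that the gray-value entropy of the reconstruction is minimized. It assumes scikit-image is available for the radon/iradon transforms.

    import numpy as np
    from skimage.transform import radon, iradon   # assumes scikit-image is installed

    def gray_entropy(img, bins=128):
        """Shannon entropy of the gray-value histogram of a reconstruction."""
        p, _ = np.histogram(img, bins=bins)
        p = p[p > 0] / p.sum()
        return float(-(p * np.log(p)).sum())

    # Toy phantom: a uniform disc, and its ideal parallel-beam sinogram.
    size = 128
    yy, xx = np.mgrid[:size, :size]
    phantom = (((xx - size / 2) ** 2 + (yy - size / 2) ** 2) < (size / 3) ** 2).astype(float)
    theta = np.linspace(0.0, 180.0, 90, endpoint=False)
    p_ideal = radon(phantom, theta=theta)

    # Simulate beam hardening with a saturating (exponential) projection response.
    a_true = 60.0
    p_bh = a_true * (1.0 - np.exp(-p_ideal / a_true))

    # Correction family (inverse of the exponential response); the parameter is chosen
    # by minimising the gray-value entropy of the reconstruction.
    best = None
    for a_hat in np.linspace(50.0, 120.0, 15):
        p_corr = -a_hat * np.log(np.clip(1.0 - p_bh / a_hat, 1e-6, None))
        h = gray_entropy(iradon(p_corr, theta=theta))
        if best is None or h < best[0]:
            best = (h, a_hat)
    print("selected correction parameter:", best[1])   # ideally close to a_true = 60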

  9. 77 FR 51705 - Rescission of Quarterly Financial Reporting Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-27

    ... No. FMCSA-2012-0020] RIN-2126-AB48 Rescission of Quarterly Financial Reporting Requirements AGENCY...: FMCSA withdraws its June 27, 2012, direct final rule eliminating the quarterly financial reporting... future proposing the elimination of the quarterly financial reporting requirements for Form QFR and Form...

  10. 78 FR 5997 - Amendments to National Marine Sanctuary Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-28

    ...The National Oceanic and Atmospheric Administration (NOAA) proposes to amend the program regulations of the national marine sanctuaries. This rule would update and reorganize the existing regulations, eliminate redundancies across sanctuaries, eliminate outmoded regulations, adopt standard boundary descriptions, and consolidate general and permitting procedures.

  11. Relevant, irredundant feature selection and noisy example elimination.

    PubMed

    Lashkia, George V; Anthony, Laurence

    2004-04-01

    In many real-world situations, the method for computing the desired output from a set of inputs is unknown. One strategy for solving these types of problems is to learn the input-output functionality from examples in a training set. However, in many situations it is difficult to know what information is relevant to the task at hand. Subsequently, researchers have investigated ways to deal with the so-called problem of consistency of attributes, i.e., attributes that can distinguish examples from different classes. In this paper, we first prove that the notion of relevance of attributes is directly related to the consistency of attributes, and show how relevant, irredundant attributes can be selected. We then compare different relevant attribute selection algorithms, and show the superiority of algorithms that select irredundant attributes over those that select relevant attributes. We also show that searching for an "optimal" subset of attributes, which is considered to be the main purpose of attribute selection, is not the best way to improve the accuracy of classifiers. Employing sets of relevant, irredundant attributes improves classification accuracy in many more cases. Finally, we propose a new method for selecting relevant examples, which is based on filtering the so-called pattern frequency domain. By identifying examples that are nontypical in the determination of relevant, irredundant attributes, irrelevant examples can be eliminated prior to the learning process. Empirical results using artificial and real databases show the effectiveness of the proposed method in selecting relevant examples leading to improved performance even on greatly reduced training sets.

  12. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow.

    PubMed

    Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong

    2015-01-01

    Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted a lot of attention since its appearance. Nowadays, it has become a hot topic for researchers to improve the embedding capacity and eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme which has the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been well dealt with by different methods, the embedding capacities of these methods are reduced more or less. Motivated by these concerns, we propose a novel scheme, in which we use differential coding, Huffman coding, and data converting to compress the secret image before embedding it to further improve the embedding capacity, and the pixel mapping matrix embedding method with a newly designed matrix is used to embed secret image data into the cover image to avoid the underflow and overflow situations. Experimental results show that our scheme can improve the embedding capacity further and eliminate the underflow and overflow situations at the same time.

  13. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow

    PubMed Central

    Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong

    2015-01-01

    Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted a lot of attention since its appearance. Nowadays, it has become a hot topic for researchers to improve the embedding capacity and eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme which has the highest embedding capacity among the existing schemes suffers from the underflow and overflow problems. Although the underflow and overflow situations have been well dealt with by different methods, the embedding capacities of these methods are reduced more or less. Motivated by these concerns, we propose a novel scheme, in which we use differential coding, Huffman coding, and data converting to compress the secret image before embedding it to further improve the embedding capacity, and the pixel mapping matrix embedding method with a newly designed matrix is used to embed secret image data into the cover image to avoid the underflow and overflow situations. Experimental results show that our scheme can improve the embedding capacity further and eliminate the underflow and overflow situations at the same time. PMID:26351657

  14. A method for eliminating Faraday rotation in cryostat windows in longitudinal magneto-optical Kerr effect measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polewko-Klim, A., E-mail: anetapol@uwb.edu.pl; Uba, S.; Uba, L.

    2014-07-15

    A solution to the problem of disturbing effect of the background Faraday rotation in the cryostat windows on longitudinal magneto-optical Kerr effect (LMOKE) measured under vacuum conditions and/or at low temperatures is proposed. The method for eliminating the influence of Faraday rotation in cryostat windows is based on special arrangement of additional mirrors placed on sample holder. In this arrangement, the orientation of the cryostat window is perpendicular to the light beam direction and parallel to an external magnetic field generated by the H-frame electromagnet. The operation of the LMOKE magnetometer with the special sample holder based on polarization modulationmore » technique with a photo-elastic modulator is theoretically analyzed with the use of Jones matrices, and formulas for evaluating of the actual Kerr rotation and ellipticity of the sample are derived. The feasibility of the method and good performance of the magnetometer is experimentally demonstrated for the LMOKE effect measured in Fe/Au multilayer structures. The influence of imperfect alignment of the magnetometer setup on the Kerr angles, as derived theoretically through the analytic model and verified experimentally, is examined and discussed.« less

  15. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.

  16. FastICA peel-off for ECG interference removal from surface EMG.

    PubMed

    Chen, Maoqi; Zhang, Xu; Chen, Xiang; Zhu, Mingxing; Li, Guanglin; Zhou, Ping

    2016-06-13

    Multi-channel recording of surface electromyographic (EMG) signals is very likely to be contaminated by electrocardiographic (ECG) interference, specifically when the surface electrode is placed on muscles close to the heart. A novel fast independent component analysis (FastICA) based peel-off method is presented to remove ECG interference contaminating multi-channel surface EMG signals. Although demonstrating spatial variability in waveform shape, the ECG interference in different channels shares the same firing instants. Utilizing the firing information estimated from FastICA, ECG interference can be separated from surface EMG by a "peel off" processing. The performance of the method was quantified with synthetic signals by combining a series of experimentally recorded "clean" surface EMG and "pure" ECG interference. It was demonstrated that the new method can remove ECG interference efficiently with little distortion to surface EMG amplitude and frequency. The proposed method was also validated using experimental surface EMG signals contaminated by ECG interference. The proposed FastICA peel-off method can be used as a new and practical solution for eliminating ECG interference from multi-channel EMG recordings.
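
    The sketch below shows the generic ICA artifact-removal pattern that the record builds on: unmix the channels with FastICA, zero the component that looks like ECG, and remix. It is not the authors' peel-off procedure (which exploits the shared firing instants explicitly); the spikiness (kurtosis) heuristic, the synthetic signals and the parameters are illustrative assumptions, and scikit-learn/SciPy are assumed to be available.

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def remove_ecg_ica(emg, n_components=None):
        """Generic ICA artifact removal for multi-channel surface EMG.
        emg: array of shape (n_samples, n_channels)."""
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(emg)                  # (n_samples, n_sources)
        # Heuristic: the ECG component is spiky, i.e. has the largest kurtosis.
        ecg_idx = int(np.argmax(kurtosis(sources, axis=0)))
        sources[:, ecg_idx] = 0.0
        return sources @ ica.mixing_.T + ica.mean_        # back to channel space

    # Toy demonstration with synthetic "EMG" plus a shared spiky "ECG" artifact.
    rng = np.random.default_rng(4)
    n, ch = 5000, 4
    emg = rng.standard_normal((n, ch))
    ecg = np.zeros(n); ecg[::500] = 8.0                   # periodic spikes
    mixed = emg + np.outer(ecg, rng.uniform(0.5, 1.5, ch))
    cleaned = remove_ecg_ica(mixed)
    print(mixed.std(axis=0), cleaned.std(axis=0))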

  17. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  18. Ultrafast Fabrication of Flexible Dye-Sensitized Solar Cells by Ultrasonic Spray-Coating Technology

    PubMed Central

    Han, Hyun-Gyu; Weerasinghe, Hashitha C.; Min Kim, Kwang; Soo Kim, Jeong; Cheng, Yi-Bing; Jones, David J.; Holmes, Andrew B.; Kwon, Tae-Hyuk

    2015-01-01

    This study investigates novel deposition techniques for the preparation of TiO2 electrodes for use in flexible dye-sensitized solar cells. These proposed new methods, namely pre-dye-coating and codeposition ultrasonic spraying, eliminate the conventional need for time-consuming processes such as dye soaking and high-temperature sintering. Power conversion efficiencies of over 4.0% were achieved with electrodes prepared on flexible polymer substrates using this new deposition technology and N719 dye as a sensitizer. PMID:26420466

  19. Counting the stunted children in a population: a criticism of old and new approaches and a conciliatory proposal.

    PubMed

    Monteiro, C A

    1991-01-01

    Two methods for estimating the prevalence of growth retardation in a population are evaluated: the classical method, which is based on the proportion of children whose height is more than 2 standard deviations below the expected mean of a reference population; and a new method recently proposed by Mora, which is based on the whole height distribution of observed and reference populations. Application of the classical method to several simulated populations leads to the conclusion that in most situations in developing countries the prevalence of growth retardation is grossly underestimated, and reflects only the presence of severe growth deficits. A second constraint with this method is a marked reduction of the relative differentials between more and less exposed strata. Application of Mora's method to the same simulated populations reduced but did not eliminate these constraints. A novel method for estimating the prevalence of growth retardation, which is based also on the whole height distribution of observed and reference populations, is also described and evaluated. This method produces better estimates of the true prevalence of growth retardation with no reduction in relative differentials.
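
    The contrast between the two families of estimators can be reproduced in a few lines. In the simulation below, 30% of children carry a 2 SD growth deficit; the classical estimator counts the fraction below -2 z-scores, while a simple whole-distribution estimator (in the spirit of the methods discussed above, not Monteiro's exact formula) integrates the positive excess of the observed density over the reference normal density. In this toy run both underestimate the true 30%, with the whole-distribution estimate closer, mirroring the abstract's conclusions.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    n = 100_000

    # Simulated height-for-age z-scores: 30% of children carry a 2 SD growth deficit.
    true_prev = 0.30
    deficit = rng.random(n) < true_prev
    z = np.where(deficit, rng.normal(-2.0, 1.0, n), rng.normal(0.0, 1.0, n))

    # Classical estimator: proportion of children below -2 z-scores.
    classical = (z < -2.0).mean()

    # Whole-distribution estimator: positive excess of the observed density over the
    # reference N(0,1) density, integrated across the z-score axis.
    bins = np.linspace(-6, 4, 201)
    obs, _ = np.histogram(z, bins=bins, density=True)
    centres = 0.5 * (bins[:-1] + bins[1:])
    excess = np.clip(obs - norm.pdf(centres), 0.0, None).sum() * np.diff(bins)[0]

    print(f"true {true_prev:.2f}  classical {classical:.3f}  whole-distribution {excess:.3f}")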

  20. A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System

    PubMed Central

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-01-01

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
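
    As a small illustration of the WiFi fingerprinting step that feeds the fusion filter, the sketch below implements plain weighted K-nearest-neighbour positioning over a made-up radio map; the floor-map topology constraint, the P-O/PDR measurements, the adaptive EKF and the particle filter of the record are not reproduced.

    import numpy as np

    def knn_wifi_position(rssi, rp_rssi, rp_xy, k=3):
        """Weighted K-nearest-neighbour WiFi fingerprinting.
        rssi: (n_ap,) observed signal strengths; rp_rssi: (n_rp, n_ap) reference
        fingerprints; rp_xy: (n_rp, 2) reference-point coordinates."""
        d = np.linalg.norm(rp_rssi - rssi, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-6)
        return (w[:, None] * rp_xy[idx]).sum(0) / w.sum()

    # Tiny made-up radio map: 4 reference points, 3 access points (dBm values).
    rp_rssi = np.array([[-40., -70., -80.],
                        [-55., -55., -75.],
                        [-70., -45., -65.],
                        [-80., -60., -45.]])
    rp_xy = np.array([[0., 0.], [3., 0.], [6., 0.], [6., 4.]])
    print(knn_wifi_position(np.array([-52., -58., -74.]), rp_rssi, rp_xy, k=2))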

  1. Genotype-Based Association Mapping of Complex Diseases: Gene-Environment Interactions with Multiple Genetic Markers and Measurement Error in Environmental Exposures

    PubMed Central

    Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.

    2011-01-01

    With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of the association of calcium intake with the risk of colorectal adenoma development. PMID:21031455

  2. An Integrated Ransac and Graph Based Mismatch Elimination Approach for Wide-Baseline Image Matching

    NASA Astrophysics Data System (ADS)

    Hasheminasab, M.; Ebadi, H.; Sedaghat, A.

    2015-12-01

    In this paper we propose an integrated approach to increase the precision of feature point matching. Many algorithms have been developed to optimize short-baseline image matching, but wide-baseline image matching remains difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use a feature descriptor to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale, and it remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint estimated with RANSAC (random sample consensus) is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in the matching results selected based on epipolar geometry and RANSAC. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, which has difficulties when the mismatched points are surrounded by the same local neighbor structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, the RANSAC algorithm is applied to reduce the outliers. Finally, to remove the remaining mismatches, the GTM is implemented based on the adjacent K-NN graph. Four different close-range image datasets with changes in viewpoint are utilized to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
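
    A minimal sketch of the first two stages of such a pipeline (SIFT matching with Lowe's ratio test, then RANSAC-based epipolar outlier rejection) using OpenCV; the graph-based GTM refinement is not shown, and the ratio-test threshold is an assumed value.

    ```python
    import cv2
    import numpy as np

    def sift_ransac_matches(img1, img2):
        """SIFT correspondences filtered by Lowe's ratio test, then epipolar-geometry
        outlier rejection with RANSAC (the final GTM refinement from the paper is
        not shown). Inputs are grayscale images."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        raw = matcher.knnMatch(d1, d2, k=2)
        good = [m for m, n in raw if m.distance < 0.75 * n.distance]  # ratio test
        pts1 = np.float32([k1[m.queryIdx].pt for m in good])
        pts2 = np.float32([k2[m.trainIdx].pt for m in good])
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
        keep = mask.ravel().astype(bool)
        return pts1[keep], pts2[keep]
    ```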

  3. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    NASA Astrophysics Data System (ADS)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In traditional digital storage oscilloscopes (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point in one clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) modules are used to store sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.
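
    The following sketch illustrates only the general idea behind probability-graded waveform mapping: repeated acquisitions are accumulated into a (sample index, amplitude level) hit-count map whose counts drive display brightness. It is a software analogy under stated assumptions; the paper's hardware TWM, parallelism and pipelining are not represented.

    ```python
    import numpy as np

    def waveform_map(acquisitions, n_levels=256):
        """Accumulate repeated acquisitions into a (amplitude level, sample index)
        hit-count map; brighter pixels correspond to more frequent trajectories."""
        acquisitions = np.asarray(acquisitions)          # shape: (n_waveforms, n_samples)
        n_wfm, n_samples = acquisitions.shape
        lo, hi = acquisitions.min(), acquisitions.max()
        levels = ((acquisitions - lo) / (hi - lo + 1e-12) * (n_levels - 1)).astype(int)
        hist = np.zeros((n_levels, n_samples), dtype=np.int64)
        for col in range(n_samples):
            np.add.at(hist[:, col], levels[:, col], 1)   # count hits per amplitude bin
        return hist

    # Toy example: a noisy sine captured 500 times.
    t = np.linspace(0, 2 * np.pi, 1000)
    wfms = np.sin(t) + 0.05 * np.random.default_rng(2).standard_normal((500, 1000))
    print(waveform_map(wfms).max())
    ```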

  4. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping; School of Electrical and Electronic Engineering, Nanyang Technological University

    In traditional digital storage oscilloscopes (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO, which causes the loss of measured signal, needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness. Thus, a three-dimensional waveform is shown to the user. To further reduce processing time, a parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point in one clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) modules are used to store sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  5. Gauge invariance of phenomenological models of the interaction of quantum dissipative systems with electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Tokman, M. D.

    2009-05-01

    We discuss specific features of the electrodynamic characteristics of quantum systems within the framework of models that include a phenomenological description of the relaxation processes. As is shown by W. E. Lamb, Jr., R. R. Schlicher, and M. O. Scully [Phys. Rev. A 36, 2763 (1987)], the use of phenomenological relaxation operators, which adequately describe the attenuation of eigenvibrations of a quantum system, may lead to incorrect solutions in the presence of external electromagnetic fields determined by the vector potential for different resonance processes. This incorrectness can be eliminated by giving a gauge-invariant form to the relaxation operator. Lamb, Jr., proposed the corresponding gauge-invariant modification for the Weisskopf-Wigner relaxation operator, which is introduced directly into the Schrödinger equation within the framework of the two-level approximation. In the present paper, this problem is studied for the von Neumann equation supplemented by a relaxation operator. First, we show that the solution of the equation for the density matrix with the relaxation operator correctly obtained “from the first principles” has properties that ensure gauge invariance for the observables. Second, we propose a common recipe for transformation of the phenomenological relaxation operator into the correct (gauge-invariant) form in the density-matrix equations for a multilevel system. Also, we discuss the methods of elimination of other inaccuracies (not related to the gauge-invariance problem) which arise if the electrodynamic response of a dissipative quantum system is calculated within the framework of simplified relaxation models (first of all, the model corresponding to constant relaxation rates of coherences in quantum transitions). Examples illustrating the correctness of the results obtained within the framework of the proposed methods in contrast to inaccuracy of the results of the standard calculation techniques are given.

  6. Light-Assisted Advanced Oxidation Processes for the Elimination of Chemical and Microbiological Pollution of Wastewaters in Developed and Developing Countries.

    PubMed

    Giannakis, Stefanos; Rtimi, Sami; Pulgarin, Cesar

    2017-06-26

    In this work, the issue of hospital and urban wastewater treatment is studied in two different contexts, in Switzerland and in developing countries (Ivory Coast and Colombia). For this purpose, the treatment of municipal wastewater effluents is studied, simulating the developed countries' context, while cheap and sustainable solutions are proposed for the developing countries, to form a barrier between effluents and receiving water bodies. In order to propose proper methods for each case, the characteristics of the matrices and the targets are described here in detail. In both contexts, the use of Advanced Oxidation Processes (AOPs) is implemented, focusing on UV-based and solar-supported ones, in the respective target areas. A list of emerging contaminants and bacteria is first studied to provide operational and engineering details on their removal by AOPs. Fundamental mechanistic insights are also provided on the degradation of the effluent wastewater organic matter. The use of viruses and yeasts as potential model pathogens is also accounted for, treated by the photo-Fenton process. In addition, two pharmaceutically active compound (PhAC) models of hospital and/or industrial origin are studied in wastewater and urine, treated by all of the AOPs considered, as a proposed method to effectively control concentrated point-source pollution from hospital wastewaters. Their elimination was modeled and the degradation pathway was elucidated by the use of state-of-the-art analytical techniques. In conclusion, the use of light-supported AOPs was proven to be effective in degrading the respective targets, and further insights were provided by each application, which could facilitate their dissemination and potential application in the field.

  7. Spin-orbit coupling calculations with the two-component normalized elimination of the small component method

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Zou, Wenli; Cremer, Dieter

    2013-07-01

    A new algorithm for the two-component Normalized Elimination of the Small Component (2cNESC) method is presented and tested in the calculation of spin-orbit (SO) splittings for a series of heavy atoms and their molecules. The 2cNESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac SO splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000), 10.1103/PhysRevB.62.7809]. The use of the screened nucleus potential for the two-electron SO interaction leads to accurate spinor energy splittings, for which the deviations from the accurate Dirac Fock-Coulomb values are on the average far below the deviations observed for other effective one-electron SO operators. For hydrogen halides HX (X = F, Cl, Br, I, At, and Uus) and mercury dihalides HgX2 (X = F, Cl, Br, I) trends in spinor energies and SO splittings as obtained with the 2cNESC method are analyzed and discussed on the basis of coupling schemes and the electronegativity of X.

  8. 77 FR 31906 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-30

    ... Eliminate the Fees Under Rule 7003(b) and Adopt a New Equities Regulatory Fee May 23, 2012. Pursuant to... Rule Change The Exchange proposes to eliminate the fees under Rule 7003(b) and replace them with a new...

  9. 76 FR 22399 - Proposed Data Collections Submitted for Public Comment and Recommendations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-21

    ... Severe Adverse Events Associated with Treatment of Latent Tuberculosis Infection--(0920-0773 exp. 04/31/ 2011)--Reinstatement with change--Division of Tuberculosis Elimination (DTBE), National Center for HIV... Prevention (CDC). Background and Brief Description As part of the national tuberculosis (TB) elimination...

  10. Automatic motion correction for in vivo human skin optical coherence tomography angiography through combined rigid and nonrigid registration

    NASA Astrophysics Data System (ADS)

    Wei, David Wei; Deegan, Anthony J.; Wang, Ruikang K.

    2017-06-01

    When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.
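
    As an illustration of just the rigid subpixel registration step (not the authors' full affine + B-spline pipeline), the sketch below uses scikit-image's phase cross-correlation to estimate and undo a subpixel translation between two frames.

    ```python
    import numpy as np
    from skimage.registration import phase_cross_correlation
    from scipy.ndimage import shift as nd_shift

    def rigid_subpixel_align(reference, moving, upsample=20):
        """Estimate a subpixel translation between two en face frames and
        resample the moving frame onto the reference grid. Only the rigid
        subpixel step is covered; affine and B-spline stages are not shown."""
        offset, error, _ = phase_cross_correlation(reference, moving,
                                                   upsample_factor=upsample)
        return nd_shift(moving, offset), offset

    # Toy example: recover a known (3.4, -2.1) pixel shift of a random image.
    rng = np.random.default_rng(3)
    ref = rng.random((128, 128))
    mov = nd_shift(ref, (3.4, -2.1))
    aligned, est = rigid_subpixel_align(ref, mov)
    print(est)   # approximately (-3.4, 2.1)
    ```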

  11. Automatic motion correction for in vivo human skin optical coherence tomography angiography through combined rigid and nonrigid registration.

    PubMed

    Wei, David Wei; Deegan, Anthony J; Wang, Ruikang K

    2017-06-01

    When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.

  12. Hybrid ICA-Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals.

    PubMed

    Mannan, Malik M Naeem; Jeong, Myung Y; Kamran, Muhammad A

    2016-01-01

    Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record electrical activity of the brain. However, EEG signals are difficult to analyze because they are contaminated by ocular artifacts, which can lead to misleading conclusions. It is also well established that ocular artifact contamination reduces the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before the analysis of EEG signals for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression and high-order statistics has been proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removal of ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between reconstructed and original EEG.
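
    A bare-bones sketch of the ICA part of such a framework, assuming multichannel EEG in a (samples x channels) array: components with extreme kurtosis are treated as ocular and zeroed before reconstruction. The regression and higher-order-statistics stages of the proposed hybrid are not reproduced, and the kurtosis threshold is an assumed value.

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def remove_ocular_ica(eeg, kurt_thresh=5.0):
        """Decompose multichannel EEG with ICA, flag components with extreme
        kurtosis (typical of blinks/eye movements), zero them, and reconstruct.
        This keeps only the ICA + kurtosis part of the idea in the abstract."""
        ica = FastICA(n_components=eeg.shape[1], random_state=0, max_iter=1000)
        sources = ica.fit_transform(eeg)                 # (n_samples, n_components)
        bad = np.abs(kurtosis(sources, axis=0)) > kurt_thresh
        sources[:, bad] = 0.0                            # suppress artifactual components
        return ica.inverse_transform(sources)
    ```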

  13. Hybrid ICA—Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals

    PubMed Central

    Mannan, Malik M. Naeem; Jeong, Myung Y.; Kamran, Muhammad A.

    2016-01-01

    Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record electrical activity of the brain. However, EEG signals are difficult to analyze because they are contaminated by ocular artifacts, which can lead to misleading conclusions. It is also well established that ocular artifact contamination reduces the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before the analysis of EEG signals for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression and high-order statistics has been proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removal of ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between reconstructed and original EEG. PMID:27199714

  14. Two techniques for eliminating luminol interference material and flow system configurations for luminol and firefly luciferase systems

    NASA Technical Reports Server (NTRS)

    Thomas, R. R.

    1976-01-01

    Two methods for eliminating luminol interference materials are described. One method eliminates interference from organic material by pre-reacting a sample with dilute hydrogen peroxide. The reaction rate resolution method for eliminating inorganic forms of interference is also described. The combination of the two methods makes the luminol system more specific for bacteria. Flow system designs for both the firefly luciferase and luminol bacteria detection systems are described. The firefly luciferase flow system incorporating nitric acid extraction and optimal dilutions has a functional sensitivity of 3 × 10^5 E. coli/ml. The luminol flow system incorporates the hydrogen peroxide pretreatment and the reaction rate resolution techniques for eliminating interference. The functional sensitivity of the luminol flow system is 1 × 10^4 E. coli/ml.

  15. Current harmonics elimination control method for six-phase PM synchronous motor drives.

    PubMed

    Yuan, Lei; Chen, Ming-liang; Shen, Jian-qing; Xiao, Fei

    2015-11-01

    To reduce the undesired 5th and 7th stator harmonic currents in the six-phase permanent magnet synchronous motor (PMSM), an improved vector control algorithm is proposed based on the vector space decomposition (VSD) transformation method, which can control the fundamental and harmonic subspaces separately. To improve the traditional VSD technology, a novel synchronous rotating coordinate transformation matrix is presented in this paper, with which a traditional PI controller in the d-q subspace is sufficient to achieve regulation without steady-state error; the controller parameter design method is given by employing the internal model principle. Moreover, a current PI controller in parallel with a resonant controller is employed in the x-y subspace to realize compensation of the specific 5th and 7th harmonic components. In addition, a new six-phase SVPWM algorithm based on VSD transformation theory is also proposed. Simulation and experimental results verify the effectiveness of the current decoupling vector controller. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Radiation characteristics and polarisation of undulated microstrip line antennas

    NASA Astrophysics Data System (ADS)

    Shafai, L.; Sebak, A. A.

    1985-12-01

    A numerical method is used to investigate the radiation from undulated microstrip line antennas. The undulated line is assumed to be suspended over a ground plane and its current distribution is determined using a moment method type solution. This current distribution is then used to compute the co-polar and cross-polar radiation fields. It is found that the current distribution has an oscillating behavior along the line, with a frequency which is twice the number of undulations. The cross-polarization is found to be high and relatively independent of the undulating shape. Its relative level, however, is reduced for large arrays, due to the array factor affecting the co-polar field. A procedure for the reduction or elimination of the cross-polarization is then proposed, which is based on utilizing two undulated lines with mutually inverted undulations. A design method for achieving low sidelobe levels is also proposed and a design example with sidelobes around the -40 dB range is presented.

  17. Proposed Issuance of NPDES Permit for NTUA Kayenta WWTF

    EPA Pesticide Factsheets

    Public Notice of proposed Issuance of National Pollutant Discharge Elimination System Permit (NPDES No. NN0020281) for Navajo Tribal Utility Authority (“NTUA”) Kayenta Wastewater Treatment Facility.

  18. 76 FR 9054 - Notice of Availability of Final Environmental Impact Statement for the AREVA Enrichment Services...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-16

    ... as supplemental information on a proposed electrical transmission line required to power the proposed... proposed uranium enrichment facility. Specifically, AES proposes to use gas centrifuge technology to enrich...; and (3) alternative technologies for uranium enrichment. These alternatives were eliminated from...

  19. WhitebalPR: automatic white balance by polarized reflections

    NASA Astrophysics Data System (ADS)

    Fischer, Gregor; Kolbe, Karin; Sajjaa, Matthias

    2008-02-01

    This new color constancy method is based on the polarization degree of the light that is reflected at the surface of an object. The subtraction of at least two images taken under different polarization directions detects the polarization degree of the neutrally reflected portions and eliminates the remitted, non-polarized colored portions. Two experiments were designed to clarify the performance of the procedure, one applied to multicolored objects and another to objects with different surface characteristics. The results show that the mechanism of eliminating the remitted, non-polarized colored portions of light works very well. Independent of their color, different color pigments appear suitable for measuring the color of the illumination. The intensity and also the polarization degree of the reflected light depend significantly on the surface properties. The results exhibit a high accuracy of measuring the color of the illumination for glossy and matt surfaces. Only strongly scattering surfaces account for a weak signal level of the difference image and a reduced accuracy. An embodiment is proposed to integrate the new method into digital cameras.
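
    A toy sketch of the underlying idea, assuming two registered RGB exposures taken through different polarizer orientations: their difference suppresses the unpolarized colored reflection, and the mean color of the difference image serves as an illuminant estimate. This is an illustration of the principle only, not the paper's full procedure.

    ```python
    import numpy as np

    def illumination_from_polarization(img_pol_a, img_pol_b):
        """Subtracting two exposures taken through different polarizer orientations
        cancels the unpolarized (body-reflected, colored) light and leaves the
        polarized surface reflection, whose color approximates the illuminant.
        Returns per-channel gains that would neutralize the color cast."""
        diff = np.abs(img_pol_a.astype(float) - img_pol_b.astype(float))
        illum = diff.reshape(-1, 3).mean(axis=0)         # mean RGB of the difference image
        return illum.max() / (illum + 1e-9)              # white-balance gains
    ```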

  20. Control of birhythmicity: A self-feedback approach

    NASA Astrophysics Data System (ADS)

    Biswas, Debabrata; Banerjee, Tanmoy; Kurths, Jürgen

    2017-06-01

    Birhythmicity occurs in many natural and artificial systems. In this paper, we propose a self-feedback scheme to control birhythmicity. To establish the efficacy and generality of the proposed control scheme, we apply it on three birhythmic oscillators from diverse fields of natural science, namely, an energy harvesting system, the p53-Mdm2 network for protein genesis (the OAK model), and a glycolysis model (modified Decroly-Goldbeter model). Using the harmonic decomposition technique and energy balance method, we derive the analytical conditions for the control of birhythmicity. A detailed numerical bifurcation analysis in the parameter space establishes that the control scheme is capable of eliminating birhythmicity and it can also induce transitions between different forms of bistability. As the proposed control scheme is quite general, it can be applied for control of several real systems, particularly in biochemical and engineering systems.

  1. A joint precoding scheme for indoor downlink multi-user MIMO VLC systems

    NASA Astrophysics Data System (ADS)

    Zhao, Qiong; Fan, Yangyu; Kang, Bochao

    2017-11-01

    In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By incorporating the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Second, the power-method algorithm is used to search the precoding weight for each user based on the optimization criterion of signal to interference plus noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always has better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
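
    The power-method component mentioned above is classical power iteration; a generic sketch (not the paper's exact SINR-maximizing update) is given below, applied to a hypothetical channel Gram matrix after BD has removed inter-user interference.

    ```python
    import numpy as np

    def power_method(A, iters=200, tol=1e-10):
        """Power iteration: returns the dominant eigenvector of a (symmetric
        positive semidefinite) matrix A."""
        v = np.random.default_rng(4).standard_normal(A.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = A @ v
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:
                break
            v = w
        return v

    # Hypothetical per-user effective channel after BD; the dominant right
    # singular vector of H is obtained from the Gram matrix H^T H.
    H = np.random.default_rng(5).standard_normal((4, 6))
    w_user = power_method(H.T @ H)
    print(w_user)
    ```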

  2. Development of a methodology for the detection of hospital financial outliers using information systems.

    PubMed

    Okada, Sachiko; Nagase, Keisuke; Ito, Ayako; Ando, Fumihiko; Nakagawa, Yoshiaki; Okamoto, Kazuya; Kume, Naoto; Takemura, Tadamasa; Kuroda, Tomohiro; Yoshihara, Hiroyuki

    2014-01-01

    Comparison of financial indices helps to illustrate differences in operations and efficiency among similar hospitals. Outlier data tend to influence statistical indices, and so detection of outliers is desirable. Development of a methodology for financial outlier detection using information systems will help to reduce the time and effort required, eliminate the subjective elements in detection of outlier data, and improve the efficiency and quality of analysis. The purpose of this research was to develop such a methodology. Financial outliers were defined based on a case model. An outlier-detection method using the distances between cases in multi-dimensional space is proposed. Experiments using three diagnosis groups indicated successful detection of cases for which the profitability and income structure differed from other cases. Therefore, the method proposed here can be used to detect outliers. Copyright © 2013 John Wiley & Sons, Ltd.
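
    A generic sketch of distance-based outlier detection of the kind described, assuming each hospital case is a row of financial indices: cases whose mean distance to their k nearest neighbours exceeds a chosen quantile are flagged. The paper's case model and threshold choice are not reproduced.

    ```python
    import numpy as np

    def knn_distance_outliers(X, k=5, quantile=0.95):
        """Flag cases whose mean distance to their k nearest neighbours in the
        multi-dimensional space of financial indices is unusually large."""
        X = np.asarray(X, dtype=float)
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)                      # ignore self-distances
        knn_mean = np.sort(D, axis=1)[:, :k].mean(axis=1)
        return knn_mean > np.quantile(knn_mean, quantile)

    # Toy example with one injected outlier case.
    rng = np.random.default_rng(7)
    cases = rng.normal(size=(200, 6))
    cases[0] += 8.0
    print(np.flatnonzero(knn_distance_outliers(cases)))
    ```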

  3. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    PubMed

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper thin 3D edge measurement technique to measure the profile of 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrievals on edges. A new stereo matching method based on phase mapping and epipolar constraint is presented to solve correspondence searching on the edges and remove false matches resulting in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
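
    For reference, the standard four-step phase-shifting retrieval that such systems typically build on is shown below (assuming phase shifts of 0, π/2, π and 3π/2); the multi-frequency heterodyne unwrapping and phase-based stereo matching described in the abstract are separate steps not covered here.

    ```python
    import numpy as np

    def four_step_phase(I0, I1, I2, I3):
        """Standard four-step phase-shifting retrieval for fringe images
        I_k = A + B*cos(phi + k*pi/2): phi = atan2(I3 - I1, I0 - I2),
        giving the wrapped phase in (-pi, pi]."""
        return np.arctan2(I3 - I1, I0 - I2)

    # Toy check against a known phase map.
    phi = np.linspace(-3, 3, 100)
    frames = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
    print(np.allclose(four_step_phase(*frames), phi))
    ```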

  4. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
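
    For context, the sketch below implements the plain second-order staggered-grid (leapfrog) update for the 1D pressure-velocity acoustic system, i.e. the lowest-order member of the family whose temporal order the paper extends; the grid size, the CFL-satisfying time step and the source wavelet are assumed values.

    ```python
    import numpy as np

    def staggered_leapfrog_1d(nx=400, nt=1000, dx=5.0, dt=5e-4, c=2000.0, rho=1.0):
        """Second-order staggered-grid update for the 1D acoustic system
        dp/dt = -kappa dv/dx, dv/dt = -(1/rho) dp/dx, with pressure p and
        particle velocity v on interleaved grids (CFL = c*dt/dx = 0.2 here)."""
        p = np.zeros(nx)
        v = np.zeros(nx + 1)
        kappa = rho * c * c
        src = nx // 2
        for it in range(nt):
            v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])      # velocity update
            p -= dt * kappa / dx * (v[1:] - v[:-1])            # pressure update
            p[src] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)  # hypothetical source wavelet
        return p

    print(np.abs(staggered_leapfrog_1d()).max())
    ```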

  5. Decentralized control of large flexible structures by joint decoupling

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Juang, Jer-Nan

    1994-01-01

    This paper presents a novel method to design decentralized controllers for large complex flexible structures by using the idea of joint decoupling. Decoupling of joint degrees of freedom from the interior degrees of freedom is achieved by setting the joint actuator commands to cancel the internal forces exerting on the joint degrees of freedom. By doing so, the interactions between substructures are eliminated. The global structure control design problem is then decomposed into several substructure control design problems. Control commands for interior actuators are set to be localized state feedback using decentralized observers for state estimation. The proposed decentralized controllers can operate successfully at the individual substructure level as well as at the global structure level. Not only control design but also control implementation is decentralized. A two-component mass-spring-damper system is used as an example to demonstrate the proposed method.

  6. Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.

    PubMed

    Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J

    2017-09-01

    Torque ripple reduction of Brushless DC Motors (BLDCs) is an interesting subject in variable speed AC drives. In this paper at first, a mathematical expression for torque ripple harmonics is obtained. Then for a non-ideal BLDC motor with known harmonic contents of back-EMF, calculation of desired reference current amplitudes, which are required to eliminate some selected harmonics of torque ripple, are reviewed. In order to inject the reference harmonic currents to the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for three phases voltage source inverter in stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method and are compared with the vector control in Multi-Reference Frame (MRF) and Pseudo-Vector Control (P-VC) method results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. An ionospheric occultation inversion technique based on epoch difference

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Xiong, Jing; Zhu, Fuying; Yang, Jian; Qiao, Xuejun

    2013-09-01

    Of the ionospheric radio occultation (IRO) electron density profile (EDP) retrievals, the Abel-based calibrated TEC inversion (CTI) is the most widely used technique. In order to eliminate the contribution from the altitudes above the RO satellite, it is necessary to utilize the calibrated TEC to retrieve the EDP, which introduces an error due to the coplanar assumption. In this paper, a new technique based on the epoch difference inversion (EDI) is first proposed to eliminate this error. Comparisons between CTI and EDI have been made using simulated and real COSMIC data. The following conclusions can be drawn: the EDI technique can successfully retrieve EDPs without non-occultation side measurements and shows better performance than the CTI method, especially for lower-orbit missions; no matter which technique is used, the inversion results at higher altitudes are better than those at lower altitudes, which can be explained theoretically.

  8. Chemical elimination of the harmful properties of asbestos from military facilities.

    PubMed

    Pawełczyk, Adam; Božek, František; Grabas, Kazimierz; Chęcmanowski, Jacek

    2017-03-01

    This work presents research on the neutralization of asbestos banned from military use and its conversion to usable products. The studies showed that asbestos can be decomposed by the use of phosphoric acid. The process proved very effective when the phosphoric acid concentration was 30%, the temperature was 90°C and the reaction time was 60 min. Contrary to the common asbestos treatment method that consists of landfilling, the proposed process ensures elimination of the harmful properties of this waste material and its transformation into inert substances. The obtained products include calcium phosphate, magnesium phosphate and silica. Chemical, microscopic and X-ray analyses proved that the products are free of harmful fibers and can, in particular, be utilized for fertilizer production. The obtained results may contribute to the development of an asbestos utilization technique that fits well into the European waste policy, regulated by the EU waste management law. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Toward a Fourth Generation of Disparities Research to Achieve Health Equity

    PubMed Central

    Thomas, Stephen B.; Quinn, Sandra Crouse; Butler, James; Fryer, Craig S.; Garza, Mary A.

    2011-01-01

    Achieving health equity, driven by the elimination of health disparities, is a goal of Healthy People 2020. In recent decades, the improvement in health status has been remarkable for the U.S. population as a whole. However, racial and ethnic minority populations continue to lag behind whites with a quality of life diminished by illness from preventable chronic diseases and a life span cut short by premature death. We examine a conceptual framework of three generations of health disparities research to understand (a) data trends, (b) factors driving disparities, and (c) solutions for closing the gap. We propose a new, fourth generation of research grounded in public health critical race praxis, utilizing comprehensive interventions to address race, racism, and structural inequalities and advancing evaluation methods to foster our ability to eliminate disparities. This new generation demands that we address the researcher’s own biases as part of the research process. PMID:21219164

  10. Novel technologies in doubled haploid line development.

    PubMed

    Ren, Jiaojiao; Wu, Penghao; Trampe, Benjamin; Tian, Xiaolong; Lübberstedt, Thomas; Chen, Shaojiang

    2017-11-01

    Doubled haploid (DH) technology can not only shorten the breeding process but also increase genetic gain. Haploid induction and subsequent genome doubling are the two main steps required for DH technology. Haploids have been generated through the culture of immature male and female gametophytes, and through inter- and intraspecific hybridization via chromosome elimination. Here, we focus on haploidization via chromosome elimination, especially the recent advances in centromere-mediated haploidization. Once haploids have been induced, genome doubling is needed to produce DH lines. This study proposes a new strategy to improve haploid genome doubling by combining haploids and minichromosome technology. With the progress in haploid induction and genome doubling methods, DH technology can facilitate reverse breeding, cytoplasmic male sterile (CMS) line production, gene stacking and a variety of other genetic analyses. © 2017 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  11. 76 FR 65431 - National Pollutant Discharge Elimination System (NPDES) Concentrated Animal Feeding Operation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-21

    ...-AF22 National Pollutant Discharge Elimination System (NPDES) Concentrated Animal Feeding Operation... co-proposes two options for obtaining basic information from CAFOs to support EPA in meeting its water quality protection responsibilities under the Clean Water Act (CWA). The purpose of this co...

  12. 76 FR 24406 - Collection by Offset From Indebted Government Employees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ... proposed regulations to eliminate the 10-year statute of limitations on collection of debt by... eliminate the 10-year statute of limitations on collection of debt by administrative offset, which includes... offset to collect a debt without time limitations on debt outstanding after the Government's right to...

  13. Distortion correction in EPI at ultra-high-field MRI using PSF mapping with optimal combination of shift detection dimension.

    PubMed

    Oh, Se-Hong; Chung, Jun-Young; In, Myung-Ho; Zaitsev, Maxim; Kim, Young-Bo; Speck, Oliver; Cho, Zang-Hee

    2012-10-01

    Despite its wide use, echo-planar imaging (EPI) suffers from geometric distortions due to off-resonance effects, i.e., strong magnetic field inhomogeneity and susceptibility. This article reports a novel method for correcting the distortions observed in EPI acquired at ultra-high-field such as 7 T. Point spread function (PSF) mapping methods have been proposed for correcting the distortions in EPI. The PSF shift map can be derived either along the nondistorted or the distorted coordinates. Along the nondistorted coordinates more information about compressed areas is present but it is prone to PSF-ghosting artifacts induced by large k-space shift in PSF encoding direction. In contrast, shift maps along the distorted coordinates contain more information in stretched areas and are more robust against PSF-ghosting. In ultra-high-field MRI, an EPI contains both compressed and stretched regions depending on the B0 field inhomogeneity and local susceptibility. In this study, we present a new geometric distortion correction scheme, which selectively applies the shift map with more information content. We propose a PSF-ghost elimination method to generate an artifact-free pixel shift map along nondistorted coordinates. The proposed method can correct the effects of the local magnetic field inhomogeneity induced by the susceptibility effects along with the PSF-ghost artifact cancellation. We have experimentally demonstrated the advantages of the proposed method in EPI data acquisitions in phantom and human brain using 7-T MRI. Copyright © 2011 Wiley Periodicals, Inc.

  14. High-kVp Assisted Metal Artifact Reduction for X-ray Computed Tomography

    PubMed Central

    Xi, Yan; Jin, Yannan; De Man, Bruno; Wang, Ge

    2016-01-01

    In X-ray computed tomography (CT), the presence of metallic parts in patients causes serious artifacts and degrades image quality. Many algorithms were published for metal artifact reduction (MAR) over the past decades with various degrees of success but without a perfect solution. Some MAR algorithms are based on the assumption that metal artifacts are due only to strong beam hardening and may fail in the case of serious photon starvation. Iterative methods handle photon starvation by discarding or underweighting corrupted data, but the results are not always stable and they come with high computational cost. In this paper, we propose a high-kVp-assisted CT scan mode combining a standard CT scan with a few projection views at a high-kVp value to obtain critical projection information near the metal parts. This method only requires minor hardware modifications on a modern CT scanner. Two MAR algorithms are proposed: dual-energy normalized MAR (DNMAR) and high-energy embedded MAR (HEMAR), aiming at situations without and with photon starvation respectively. Simulation results obtained with the CT simulator CatSim demonstrate that the proposed DNMAR and HEMAR methods can eliminate metal artifacts effectively. PMID:27891293

  15. Metabolic network visualization eliminating node redundance and preserving metabolic pathways

    PubMed Central

    Bourqui, Romain; Cottret, Ludovic; Lacroix, Vincent; Auber, David; Mary, Patrick; Sagot, Marie-France; Jourdan, Fabien

    2007-01-01

    Background The tools that are available to draw and to manipulate representations of metabolism are usually restricted to metabolic pathways. This limitation becomes problematic when studying processes that span several pathways. The various attempts that have been made to draw genome-scale metabolic networks are confronted with two shortcomings: 1- they do not use contextual information, which leads to dense, hard-to-interpret drawings; 2- they force the drawing to fit very constrained standards, which implies, in particular, duplicating nodes, making topological analysis considerably more difficult. Results We propose a method, called MetaViz, which enables drawing a genome-scale metabolic network while also taking into account its structuration into pathways. This method consists of two steps: a clustering step, which addresses the pathway overlapping problem, and a drawing step, which consists in drawing the clustered graph and each cluster. Conclusion The method we propose is original and addresses new drawing issues arising from the no-duplication constraint. We do not propose a single drawing but rather several alternative ways of presenting metabolism depending on the pathway on which one wishes to focus. We believe that this provides a valuable tool to explore the pathway structure of metabolism. PMID:17608928

  16. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, whose behavior is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve good learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum and minimum absolute relative error and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
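
    A small sketch of a hybrid (RBF + polynomial) kernel and the standard LSSVM regression solve, with hyper-parameters simply fixed by hand rather than tuned by the chaotic ions motion algorithm; the data in the usage lines are invented.

    ```python
    import numpy as np

    def hybrid_kernel(X1, X2, sigma=1.0, degree=2, coef0=1.0, lam=0.7):
        """Convex mix of a local RBF kernel and a global polynomial kernel."""
        sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        rbf = np.exp(-sq / (2 * sigma ** 2))
        poly = (X1 @ X2.T + coef0) ** degree
        return lam * rbf + (1 - lam) * poly

    def lssvm_fit(X, y, gamma=10.0, **kw):
        """Standard LSSVM regression: dual variables from one linear solve.
        Hyper-parameters are fixed by hand in this sketch."""
        n = len(y)
        K = hybrid_kernel(X, X, **kw)
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K + np.eye(n) / gamma]])
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        b, alpha = sol[0], sol[1:]
        return lambda Xq: hybrid_kernel(Xq, X, **kw) @ alpha + b

    # Hypothetical compensation data: (raw sensor output, temperature) -> true pressure.
    rng = np.random.default_rng(6)
    X = rng.uniform([0.0, -20.0], [5.0, 60.0], size=(80, 2))
    y = X[:, 0] * (1 + 0.002 * X[:, 1]) + 0.01 * rng.standard_normal(80)
    predict = lssvm_fit(X, y)
    print(predict(X[:3]))
    ```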

  17. Interference from familiar natural distractors is not eliminated by high perceptual load.

    PubMed

    He, Chunhong; Chen, Antao

    2010-05-01

    A crucial prediction of perceptual load theory is that high perceptual load can eliminate interference from distractors. However, Lavie et al. (Psychol Sci 14:510-515, 2003) found that high perceptual load did not eliminate interference when the distractor was a face. The current experiments examined the interaction between familiarity and perceptual load in modulating interference in a name search task. The data reveal that high perceptual load eliminated the interference effect for unfamiliar distractors that were faces or objects, but did not eliminate the interference for familiar distractors that were faces or objects. Based on these results, we proposed that the processing of familiar and natural stimuli may be immune to the effect of perceptual load.

  18. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB leads to refraction, which can introduce calibration error. The theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction caused by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, the four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixels, respectively. The experimental results show that the proposed method is accurate and reliable.

  19. An improved recommendation algorithm via weakening indirect linkage effect

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Qiu, Tian; Shen, Xiao-Quan

    2015-07-01

    We propose an indirect-link-weakened mass diffusion method (IMD), by considering the indirect linkage and the source object heterogeneity effect in the mass diffusion (MD) recommendation method. Experimental results on the MovieLens, Netflix, and RYM datasets show that, the IMD method greatly improves both the recommendation accuracy and diversity, compared with a heterogeneity-weakened MD method (HMD), which only considers the source object heterogeneity. Moreover, the recommendation accuracy of the cold objects is also better elevated in the IMD than the HMD method. It suggests that eliminating the redundancy induced by the indirect linkages could have a prominent effect on the recommendation efficiency in the MD method. Project supported by the National Natural Science Foundation of China (Grant No. 11175079) and the Young Scientist Training Project of Jiangxi Province, China (Grant No. 20133BCB23017).

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Jialin, E-mail: 2004pjl@163.com; Zhang, Hongbo; Hu, Peijun

    Purpose: Efficient and accurate 3D liver segmentations from contrast-enhanced computed tomography (CT) images play an important role in therapeutic strategies for hepatic diseases. However, inhomogeneous appearances, ambiguous boundaries, and large variance in shape often make it a challenging task. The existence of liver abnormalities poses further difficulty. Despite the significant intensity difference, liver tumors should be segmented as part of the liver. This study aims to address these challenges, especially when the target livers contain subregions with distinct appearances. Methods: The authors propose a novel multiregion-appearance based approach with graph cuts to delineate the liver surface. For livers with multiple subregions, a geodesic distance based appearance selection scheme is introduced to utilize a proper appearance constraint for each subregion. A special case of the proposed method, which uses only one appearance constraint to segment the liver, is also presented. The segmentation process is modeled with energy functions incorporating both boundary and region information. Rather than a simple fixed combination, an adaptive balancing weight is introduced and learned from training sets. The proposed method requires only an initialization inside the liver surface. No additional constraints from user interaction are utilized. Results: The proposed method was validated on 50 3D CT images from three datasets, i.e., the Medical Image Computing and Computer Assisted Intervention (MICCAI) training and testing sets, and a local dataset. On the MICCAI testing set, the proposed method achieved a total score of 83.4 ± 3.1, outperforming nonexpert manual segmentation (average score of 75.0). When applying their method to the MICCAI training set and the local dataset, it yielded a mean Dice similarity coefficient (DSC) of 97.7% ± 0.5% and 97.5% ± 0.4%, respectively. These results demonstrated the accuracy of the method when applied to different computed tomography (CT) datasets. In addition, user operator variability experiments showed its good reproducibility. Conclusions: A multiregion-appearance based method is proposed and evaluated to segment the liver. This approach does not require prior model construction and so eliminates the burdens associated with model construction and matching. The proposed method provides comparable results with state-of-the-art methods. Validation results suggest that it may be suitable for clinical use.

  1. Spectrum interrogation of fiber acoustic sensor based on self-fitting and differential method.

    PubMed

    Fu, Xin; Lu, Ping; Ni, Wenjun; Liao, Hao; Wang, Shun; Liu, Deming; Zhang, Jiangshan

    2017-02-20

    In this article, we propose an interrogation method for a fiber acoustic sensor that recovers the time-domain signal from the sensor spectrum. The optical spectrum of the sensor shows a ripple waveform when responding to an acoustic signal, due to the scanning process over a certain wavelength range. The reason behind this phenomenon is the dynamic variation of the sensor spectrum: the intensity at different wavelengths is acquired at different times within a scanning period. The frequency components can be extracted from the ripple spectrum with the aid of the wavelength scanning speed. The signal can be recovered by taking the difference between the ripple spectrum and its self-fitted curve. The differential process can eliminate interference caused by environmental perturbations such as temperature or refractive index (RI), etc. The proposed method is appropriate for fiber acoustic sensors based on gratings or interferometers. A long period grating (LPG) is adopted as the acoustic sensor head to prove the feasibility of the interrogation method in experiment. The ability to compensate environmental fluctuations is also demonstrated.

  2. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.

  3. Reinforcement Learning Trees

    PubMed Central

    Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.

    2015-01-01

    In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings. PMID:26903687

  4. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    NASA Astrophysics Data System (ADS)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration, due to acoustic attenuation and the assumption of a constant speed of sound (SoS), can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
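
    The coherence factor mentioned above has a standard closed form; the sketch below computes it for one reconstructed pixel, assuming the per-channel signals have already been delayed to that pixel. The MV beamforming and HOG-based array-level weighting proposed in the paper are not reproduced.

    ```python
    import numpy as np

    def coherence_factor(delayed):
        """Coherence factor for one pixel: coherent-sum energy over incoherent-sum
        energy across the (already delayed) array channels; values near 1 indicate
        well-focused signal, values near 0 indicate defocused energy."""
        delayed = np.asarray(delayed, dtype=complex)
        num = np.abs(delayed.sum()) ** 2
        den = len(delayed) * (np.abs(delayed) ** 2).sum() + 1e-12
        return num / den

    def cf_weighted_das(delayed):
        """CF-weighted delay-and-sum value for one pixel."""
        return coherence_factor(delayed) * np.asarray(delayed).sum()
    ```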

  5. An improved wrapper-based feature selection method for machinery fault diagnosis

    PubMed Central

    2017-01-01

    A major issue in machinery fault diagnosis using vibration signals is that it is over-reliant on personnel knowledge and experience in interpreting the signal. Thus, machine learning has been adopted for machinery fault diagnosis. The quantity and quality of the input features, however, influence the fault classification performance. Feature selection plays a vital role in selecting the most representative feature subset for the machine learning algorithm. However, a trade-off between the ability to select the best feature subset and the computational effort is inevitable in the wrapper-based feature selection (WFS) method. This paper proposes an improved WFS technique that is then integrated with a support vector machine (SVM) classifier as a complete fault diagnosis system for a rolling element bearing case study. The bearing vibration dataset made available by the Case Western Reserve University Bearing Data Centre was processed using the proposed WFS, and its performance has been analysed and discussed. The results reveal that the proposed WFS secures the best feature subset with lower computational effort by eliminating redundant re-evaluation. The proposed WFS is therefore capable of carrying out feature selection tasks efficiently. PMID:29261689
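
    The sketch below illustrates a wrapper-style forward selection with an SVM scorer, where caching each evaluated subset stands in for the paper's elimination of redundant re-evaluation (the forward-selection strategy and scorer settings are assumptions, not the proposed WFS itself).

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def forward_select(X, y, max_features=10, cv=5):
            selected, remaining, cache = [], list(range(X.shape[1])), {}

            def score(subset):
                key = tuple(sorted(subset))
                if key not in cache:            # each subset is evaluated once
                    cache[key] = cross_val_score(SVC(kernel="rbf"),
                                                 X[:, list(key)], y, cv=cv).mean()
                return cache[key]

            best = -np.inf
            while remaining and len(selected) < max_features:
                candidate = max(remaining, key=lambda f: score(selected + [f]))
                new_score = score(selected + [candidate])
                if new_score <= best:           # stop when no feature helps
                    break
                best = new_score
                selected.append(candidate)
                remaining.remove(candidate)
            return selected, best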

  6. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    PubMed

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied integrative approaches to automate medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area that has received less attention from such approaches, although it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that uses an integrative approach to deal with this problem. It evolves directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM), and the controlling parameters of the level set evolution are also estimated from the GKFCM results. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is treated as a component of the image. These improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm offers valuable benefits, including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation was carried out on medical images from different modalities, and the results confirm its effectiveness for medical image segmentation.

  7. Last-position elimination-based learning automata.

    PubMed

    Zhang, Junqi; Wang, Cheng; Zhou, MengChu

    2014-12-01

    An update scheme for the state probability vector of actions is critical for learning automata (LA). The most popular is the pursuit scheme, which pursues the estimated optimal action and penalizes the others. This paper proposes a reverse philosophy that leads to last-position elimination-based learning automata (LELA). The action ranked last in terms of estimated performance is penalized by decreasing its state probability and is eliminated when its state probability becomes zero. All active actions, that is, actions with nonzero state probability, equally share the penalized state probability from the last-position action at each iteration. The proposed LELA is characterized by a relaxed convergence condition for the optimal action, an accelerated step size of the state probability update for the estimated optimal action, and enriched sampling for the estimated non-optimal actions. A proof of the ϵ-optimality of the proposed algorithm is presented. Last-position elimination is a widespread philosophy in the real world, and simulations of well-known benchmark environments show that it is also helpful for the update scheme of learning automata. In the simulations, two versions of LELA, using different strategies for selecting the last action, are compared with the classical pursuit algorithms Discretized Pursuit Reward-Inaction (DP(RI)) and Discretized Generalized Pursuit Algorithm (DGPA). The results show that the proposed schemes achieve significantly faster convergence and higher accuracy than the classical ones. In particular, the proposed schemes narrow the interval over which the best parameter for a specific environment must be searched in the classical pursuit algorithms, so their parameter tuning is easier to perform and they save considerably more time when applied to practical cases. Furthermore, the convergence curves and corresponding variance coefficient curves of the contenders are presented to characterize their essential differences and verify the analysis of the proposed algorithms.
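
    A minimal sketch of the update scheme described above follows (the step size, the reward estimator, and whether the penalized action shares in the redistribution are assumptions; they are not taken from the paper).

        import numpy as np

        def lela_update(p, estimates, step=0.01):
            """p: state probability vector; estimates: estimated reward per action."""
            p = p.copy()
            active = np.flatnonzero(p > 0)
            last = active[np.argmin(estimates[active])]     # last-ranked active action
            penalty = min(step, p[last])
            p[last] -= penalty
            if np.isclose(p[last], 0.0):
                p[last] = 0.0                               # action is eliminated
            receivers = np.flatnonzero(p > 0)
            receivers = receivers[receivers != last]        # others share the penalty
            p[receivers] += penalty / receivers.size
            return p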

  8. Duplicate Record Elimination in Large Data Files.

    DTIC Science & Technology

    1981-08-01

    Computer Sciences Department, University of Wisconsin ... we propose a combinatorial model for use in the analysis of algorithms for duplicate elimination. We contend that this model can serve as a ... duplicates in a multiset of records, knowing the size of the multiset and the number of distinct records in it. 3. Algorithms for Duplicate Elimination

  9. Eliminating Undesirable Variation in Neonatal Practice: Balancing Standardization and Customization.

    PubMed

    Balakrishnan, Maya; Raghavan, Aarti; Suresh, Gautham K

    2017-09-01

    Consistency of care and elimination of unnecessary and harmful variation are underemphasized aspects of health care quality. This article describes the prevalence and patterns of practice variation in health care and neonatology; discusses the potential role of standardization as a solution to eliminating wasteful and harmful practice variation, particularly when it is founded on principles of evidence-based medicine; and proposes ways to balance standardization and customization of practice to ultimately improve the quality of neonatal care. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Carbon elimination from silicon kerf: Thermogravimetric analysis and mechanistic considerations

    NASA Astrophysics Data System (ADS)

    Vazquez-Pufleau, Miguel; Chadha, Tandeep S.; Yablonsky, Gregory; Biswas, Pratim

    2017-01-01

    40% of ultrapure silicon is lost as kerf during slicing to produce wafers. Kerf is currently not recycled due to the engineering challenges and costs associated with removing its abundant impurities. Carbon left behind by the lubricant remains one of the most difficult contaminants to remove from kerf without significant silicon oxidation. The present work provides a better understanding of the mechanism of carbon elimination in kerf, which can aid the design of better processes for kerf recycling and low-cost photovoltaics. In this paper, we studied the kinetics of carbon elimination from silicon kerf in two atmospheres, air and N2, under a regime free of diffusion limitation. We report the apparent activation energy in both atmospheres using three methods: the Kissinger method and two isoconversional approaches. In both atmospheres, a bimodal apparent activation energy is observed, suggesting a two-stage process. A reaction mechanism is proposed in which (a) C-C and C-O bond cleavage reactions occur in parallel with polymer formation; and (b) at higher temperatures, this polymer fully degrades in air but leaves a tarry residue in N2 that accounts for about 12% of the initial total carbon.
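
    For reference, the Kissinger method in its standard form extracts the apparent activation energy from the slope of ln(β/Tp²) against 1/Tp over several heating rates; the sketch below uses hypothetical peak temperatures and does not reproduce the paper's fitting workflow.

        import numpy as np

        R = 8.314  # gas constant, J mol^-1 K^-1

        def kissinger_activation_energy(heating_rates, peak_temps_K):
            beta = np.asarray(heating_rates, dtype=float)
            Tp = np.asarray(peak_temps_K, dtype=float)
            slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
            return -slope * R          # apparent activation energy, J/mol

        # hypothetical peak temperatures at three heating rates (K/min)
        print(kissinger_activation_energy([5, 10, 20], [830, 845, 862]))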

  11. Carbon elimination from silicon kerf: Thermogravimetric analysis and mechanistic considerations

    PubMed Central

    Vazquez-Pufleau, Miguel; Chadha, Tandeep S.; Yablonsky, Gregory; Biswas, Pratim

    2017-01-01

    40% of ultrapure silicon is lost as kerf during slicing to produce wafers. Kerf is currently not recycled due to the engineering challenges and costs associated with removing its abundant impurities. Carbon left behind by the lubricant remains one of the most difficult contaminants to remove from kerf without significant silicon oxidation. The present work provides a better understanding of the mechanism of carbon elimination in kerf, which can aid the design of better processes for kerf recycling and low-cost photovoltaics. In this paper, we studied the kinetics of carbon elimination from silicon kerf in two atmospheres, air and N2, under a regime free of diffusion limitation. We report the apparent activation energy in both atmospheres using three methods: the Kissinger method and two isoconversional approaches. In both atmospheres, a bimodal apparent activation energy is observed, suggesting a two-stage process. A reaction mechanism is proposed in which (a) C-C and C-O bond cleavage reactions occur in parallel with polymer formation; and (b) at higher temperatures, this polymer fully degrades in air but leaves a tarry residue in N2 that accounts for about 12% of the initial total carbon. PMID:28098187

  12. Impact of substrate etching on plasmonic elements and metamaterials: preventing red shift and improving refractive index sensitivity.

    PubMed

    Moritake, Yuto; Tanaka, Takuo

    2018-02-05

    We propose and demonstrate the elimination of substrate influence on plasmon resonance by using selective and isotropic etching of substrates. Preventing the red shift of the resonance due to substrates and improving refractive index sensitivity were experimentally demonstrated using plasmonic nanostructures fabricated on silicon substrates. Applying substrate etching decreases the effective refractive index around the metal nanostructures, resulting in elimination of the red shift. Improvement of sensitivity to the refractive index environment was demonstrated using plasmonic metamaterials with Fano resonance based on far-field interference. Changes in the quality factors (Q-factors) of the Fano resonance caused by substrate etching were also investigated in detail. The presence of a closely positioned substrate distorts the electric field distribution and degrades the Q-factors. Substrate etching dramatically increased the refractive index sensitivity, reaching 1532 nm/RIU, since the electric fields under the nanostructures became accessible through substrate etching. The figure of merit (FOM) was also improved compared with the case without substrate etching. The method presented in this paper is applicable to a variety of plasmonic structures to eliminate the influence of substrates and realize high-performance plasmonic devices.

  13. Carbon elimination from silicon kerf: Thermogravimetric analysis and mechanistic considerations

    DOE PAGES

    Vazquez-Pufleau, Miguel; Chadha, Tandeep S.; Yablonsky, Gregory; ...

    2017-01-18

    40% of ultrapure silicon is lost as kerf during slicing to produce wafers. Currently, kerf is not recycled due to the engineering challenges and costs associated with removing its abundant impurities. Carbon left behind by the lubricant remains one of the most difficult contaminants to remove from kerf without significant silicon oxidation. The present work provides a better understanding of the mechanism of carbon elimination in kerf, which can aid the design of better processes for kerf recycling and low-cost photovoltaics. In this paper, we studied the kinetics of carbon elimination from silicon kerf in two atmospheres, air and N2, under a regime free of diffusion limitation. Here, we report the apparent activation energy in both atmospheres using three methods: the Kissinger method and two isoconversional approaches. In both atmospheres, a bimodal apparent activation energy is observed, suggesting a two-stage process. Furthermore, a reaction mechanism is proposed in which (a) C-C and C-O bond cleavage reactions occur in parallel with polymer formation; and (b) at higher temperatures, this polymer fully degrades in air but leaves a tarry residue in N2 that accounts for about 12% of the initial total carbon.

  14. Multi-stage approach for structural damage detection problem using basis pursuit and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Gerist, Saleheh; Maheri, Mahmoud R.

    2016-12-01

    To solve the structural damage detection problem, a multi-stage method using particle swarm optimization (PSO) is presented. First, a sparse recovery method, Basis Pursuit (BP), is used to preliminarily identify structural damage locations. The BP method solves a system of equations that relates the damage parameters to the structural modal responses through the sensitivity matrix. The results of this stage are then refined to the exact damage locations and extents using the PSO search engine. Finally, the search space is reduced by eliminating low-damage variables using a micro search (MS) operator embedded in the PSO algorithm. To cope with the noise present in structural responses, Basis Pursuit De-Noising (BPDN) is also used. The efficiency of the proposed method is investigated with three numerical examples: a cantilever beam, a plane truss, and a portal plane frame. The frequency response is used to detect damage in the examples. The simulation results demonstrate the accuracy and efficiency of the proposed method in detecting multiple damage cases and exhibit its robustness to noise and its advantages over other reported solution algorithms.
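
    The relation between basis pursuit de-noising and an l1-regularised least-squares problem can be sketched as below, with scikit-learn's Lasso used as a stand-in solver (the solver choice, regularisation weight, and threshold are assumptions; the paper's BP/BPDN formulation and PSO refinement are not reproduced).

        import numpy as np
        from sklearn.linear_model import Lasso

        def bpdn_damage_candidates(S, r, alpha=1e-3, threshold=0.01):
            """S: sensitivity matrix; r: measured change in modal response."""
            model = Lasso(alpha=alpha, fit_intercept=False, max_iter=50000)
            model.fit(S, r)
            d = model.coef_                      # sparse damage-parameter estimate
            return np.flatnonzero(np.abs(d) > threshold), d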

  15. Slope angle estimation method based on sparse subspace clustering for probe safe landing

    NASA Astrophysics Data System (ADS)

    Li, Haibo; Cao, Yunfeng; Ding, Meng; Zhuang, Likui

    2018-06-01

    To avoid planetary probes landing on steep slopes where they may slip or tip over, a new slope angle estimation method based on sparse subspace clustering is proposed to improve accuracy. First, a coordinate system is defined and established to describe the measured light detection and ranging (LIDAR) data. Second, these data are processed and expressed with a sparse representation. Third, on this basis, the data are clustered to determine which subspace each point belongs to. Fourth, after eliminating outliers in each subspace, the remaining data points are used to fit planes. Finally, the vectors normal to the planes are obtained from the plane model, and the angle between the normal vectors is calculated; by the geometric relationship, this angle equals the slope angle. The proposed method was tested in a series of experiments. The results show that the method can effectively estimate the slope angle, overcome the influence of noise, and obtain an accurate slope angle. Compared with other methods, it minimizes measurement errors and further improves the estimation accuracy of the slope angle.
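
    The final geometric step, fitting a plane to the retained LIDAR points and taking the angle between its normal and a reference normal, can be sketched as follows (the sparse subspace clustering and outlier elimination stages are omitted; the reference normal is an assumption).

        import numpy as np

        def plane_normal(points):
            """Least-squares plane normal of an (N, 3) array of points."""
            centered = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return vt[-1]                 # direction of smallest variance

        def slope_angle_deg(ground_points, reference_normal=(0.0, 0.0, 1.0)):
            n = plane_normal(np.asarray(ground_points, dtype=float))
            ref = np.asarray(reference_normal, dtype=float)
            cosang = abs(n @ ref) / (np.linalg.norm(n) * np.linalg.norm(ref))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))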

  16. An approach for the regularization of a power flow solution around the maximum loading point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kataoka, Y.

    1992-08-01

    In the conventional power flow solution, the boundary conditions are directly specified by the active and reactive power at each node, so the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for obtaining wide-range regularity by modifying the conventional power flow solution, thereby eliminating the singular point or shifting it to the region where the voltage is lower than at the maximum loading point. Continuous tracing of V-P curves, including the maximum loading point, is thus realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.

  17. Unconventional minimal subtraction and Bogoliubov-Parasyuk-Hepp-Zimmermann method: Massive scalar theory and critical exponents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carvalho, Paulo R. S.; Leite, Marcelo M.

    2013-09-15

    We introduce a simpler although unconventional minimal subtraction renormalization procedure for a massive scalar λφ⁴ theory in Euclidean space using dimensional regularization. We show that this method is very similar to its counterpart in massless field theory. In particular, the choice of using the bare mass at higher perturbative order instead of employing its tree-level counterpart eliminates all tadpole insertions at that order. As an application, we compute diagrammatically the critical exponents η and ν at least up to two loops. We perform an explicit comparison with the Bogoliubov-Parasyuk-Hepp-Zimmermann (BPHZ) method at the same loop order, show that the proposed method requires fewer diagrams, and establish a connection between the two approaches.

  18. Optimization of an underwater in-situ LaBr3:Ce spectrometer with energy self-calibration and efficiency calibration.

    PubMed

    Zeng, Zhi; Pan, Xingyu; Ma, Hao; He, Jianhua; Cang, Jirong; Zeng, Ming; Mi, Yuhao; Cheng, Jianping

    2017-03-01

    An underwater in-situ gamma-ray spectrometer based on LaBr3:Ce was developed and optimized to monitor marine radioactivity. The intrinsic background of LaBr3:Ce, mainly from 138La and 227Ac, was well characterized by low-background measurement and a pulse shape discrimination method. A self-calibration method using three internal contaminant peaks was proposed to eliminate peak shift during long-term monitoring. Experiments at different temperatures showed that the method helps maintain long-term stability. To monitor marine radioactivity, the spectrometer's efficiency was calculated via a water tank experiment as well as Monte Carlo simulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Single-anchor support and supercritical CO2 drying enable high-precision microfabrication of three-dimensional structures.

    PubMed

    Maruo, Shoji; Hasegawa, Takuya; Yoshimura, Naoki

    2009-11-09

    In high-precision two-photon microfabrication of three-dimensional (3-D) polymeric microstructures, supercritical CO2 drying was employed to reduce surface tension, which tends to cause the collapse of micro/nano structures. Use of supercritical drying allowed high-aspect-ratio microstructures, such as micropillars and cantilevers, to be fabricated. We also propose a single-anchor supporting method to eliminate non-uniform shrinkage of polymeric structures otherwise caused by attachment to the substrate. Use of this method permitted frame models such as lattices to be produced without harmful distortion. The combination of supercritical CO2 drying and the single-anchor supporting method offers reliable high-precision microfabrication of sophisticated, fragile 3-D micro/nano structures.

  20. Implementation of a novel efficient low cost method in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Asadi, S.; Sepehry, N.; Shamshirsaz, M.; Vaghasloo, Y. A.

    2017-05-01

    In active structural health monitoring (SHM) methods, it is necessary to excite the structure with a preselected signal. More studies in the field of active SHM focus on higher frequency ranges, since it is possible to detect smaller damage using a higher excitation frequency. Also, to increase the spatial domain of measurement and enhance the signal-to-noise ratio (SNR), the amplitude of the excitation signal is usually amplified. These issues become substantial when piezoelectric transducers with relatively high capacitance are used and, consequently, the need for high-power amplifiers becomes predominant. In this paper, a novel method named the Step Excitation Method (SEM) is proposed and implemented for Lamb-wave and transfer-impedance-based SHM for damage detection in structures. Three different types of structure are studied: a beam, a plate, and a pipe. The related hardware was designed and fabricated; it eliminates high-power analog amplifiers and significantly reduces driver complexity. The spectral finite element method (SFEM) is applied to examine the performance of the proposed SEM. In the proposed method, once the impulse response of the system is determined, any input can be applied to the system in both finite element simulations and experiments without the need for multiple measurements. The experimental results using SEM are compared with those obtained by the conventional direct excitation method for healthy and damaged structures. The results show an improvement in amplitude resolution for damage detection compared with the conventional method, owing to an SNR improvement of up to 50%.
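
    The underlying linear-time-invariant idea, that a measured impulse response determines the response to any excitation by convolution, can be sketched as below (the step-excitation hardware and the SFEM modelling of the paper are not reproduced; the scaling by the sampling interval is an assumption).

        import numpy as np

        def response_to_any_input(impulse_response, excitation, dt):
            """Discrete convolution y = h * x for a sampled LTI system."""
            y = np.convolve(impulse_response, excitation) * dt
            return y[:len(excitation)]     # keep the portion aligned with the input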

  1. Non-contact data access with direction identification for industrial differential serial bus

    NASA Astrophysics Data System (ADS)

    Xie, Kai; Li, Xiaoping; Zhang, Hanlu; Yang, Ming; Ye, Yinghao

    2013-06-01

    We propose a non-contact method for accessing data in industrial differential serial bus applications, which could serve as an effective and safe online testing and diagnosing tool. The data stream and the transmission direction are reconstructed simultaneously from the near-field emanations of a twisted pair, eliminating direct contact with the actual conductors, and avoiding damage to the insulation (only the outer sheathing is removed). A non-contact probe with the ability to sense electric and magnetic fields is presented, as are theories for data reconstruction, direction identification, and a circuit implementation. The prototype was built using inexpensive components and then tested on a standard RS-485 industrial serial bus. Experimental results verified the validity of the proposed scheme.

  2. Pure-type superconducting permanent-magnet undulator.

    PubMed

    Tanaka, Takashi; Tsuru, Rieko; Kitamura, Hideo

    2005-07-01

    A novel synchrotron radiation source is proposed that utilizes bulk-type high-temperature superconductors (HTSCs) as permanent magnets (PMs) by in situ magnetization. Arrays of HTSC blocks magnetized by external magnetic fields are placed below and above the electron path instead of conventional PMs, generating a periodic magnetic field with an offset. Two methods are presented to magnetize the HTSCs and eliminate the field offset, enabling the HTSC arrays to work as a synchrotron radiation source. An analytical formula to calculate the peak field achieved in a device based on this scheme is derived in a two-dimensional form for comparison with synchrotron radiation sources using conventional PMs. Experiments were performed to demonstrate the principle of the proposed scheme and the results have been found to be very promising.

  3. Optical vector network analyzer based on double-sideband modulation.

    PubMed

    Jun, Wen; Wang, Ling; Yang, Chengwu; Li, Ming; Zhu, Ning Hua; Guo, Jinjin; Xiong, Liangming; Li, Wei

    2017-11-01

    We report an optical vector network analyzer (OVNA) based on double-sideband (DSB) modulation using a dual-parallel Mach-Zehnder modulator. The device under test (DUT) is measured twice with different modulation schemes. By post-processing the measurement results, the response of the DUT can be obtained accurately. Since DSB modulation is used in our approach, the measurement range is doubled compared with conventional single-sideband (SSB) modulation-based OVNA. Moreover, the measurement accuracy is improved by eliminating the even-order sidebands. The key advantage of the proposed scheme is that the measurement of a DUT with bandpass response can also be simply realized, which is a big challenge for the SSB-based OVNA. The proposed method is theoretically and experimentally demonstrated.

  4. Intelligent Process Abnormal Patterns Recognition and Diagnosis Based on Fuzzy Logic.

    PubMed

    Hou, Shi-Wang; Feng, Shunxiao; Wang, Hui

    2016-01-01

    Locating assignable causes from the abnormal patterns of a control chart is a widely used technique for manufacturing quality control. If there is uncertainty about the degree to which abnormal patterns occur, the diagnosis process cannot be carried out. Considering four common abnormal control chart patterns, this paper proposes a point-by-point recognition method based on characteristic numbers to quantify the occurrence degree of abnormal patterns under uncertain conditions, and a fuzzy inference system based on fuzzy logic to calculate the contribution degree of assignable causes under fuzzy abnormal patterns. Application case results show that the proposed approach can provide a ranked list of causes under fuzzy control chart abnormal patterns and support the elimination of the abnormality.

  5. Birefringence dispersion compensation demodulation algorithm for polarized low-coherence interferometry.

    PubMed

    Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Wu, Fan

    2013-08-15

    A demodulation algorithm based on the birefringence dispersion characteristics of a polarized low-coherence interferometer is proposed. With the birefringence dispersion parameter taken into account, a mathematical model of the polarized low-coherence interference fringes is established and used to extract the phase shift between the measured coherence envelope center and the zero-order fringe, which eliminates the interferometric 2π ambiguity in locating the zero-order fringe. A pressure measurement experiment using an optical fiber Fabry-Perot pressure sensor was carried out to verify the effectiveness of the proposed algorithm. The experimental results showed that the demodulation precision was 0.077 kPa over a range of 210 kPa, an improvement of 23 times over the traditional envelope detection method.

  6. Design of "Eye Closure" system for the stealth of photo-electric equipments

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Hua, W. S.; Li, G.

    2012-10-01

    Based on the optical activity of liquid crystal, a new approach for the stealth of "cat's eye" targets is proposed in this paper. It imitates the physiological closing reaction of human eyes under strong light. With this approach, the "cat's eye" effect vanishes, which prevents photo-electric equipment from being detected and located by active laser detection systems. The structure and working principle of the design are presented, and a drive circuit is given to control the optical switch automatically. The feasibility of the design is demonstrated experimentally. The measured data illustrate that the proposed approach effectively eliminates the "cat's eye" effect, thereby enhancing the survivability of photo-electric equipment on the battlefield.

  7. The limitation of the proposed collection efficiency for fiber probes on the visible and near-infrared diffuse spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Ding, Hongyan; Lin, Ling; Wang, Yimin; Guo, Xin

    2017-12-01

    A fiber is usually used as a probe in visible and near-infrared diffuse spectral measurements. However, the use of different fiber probes in the same measurement may cause data mismatch problems. Our group has studied the influence of fiber probe parameters, including the aperture angle, on the diffuse spectrum using a modified Monte Carlo model. To eliminate the influence of the aperture angle, we previously proposed a fitted correction-coefficient equation to correct for its variation over the practical range. However, we did not discuss the limitations of this method. In this work, we explored the collection efficiency in different optical environments using Monte Carlo simulation and identified the conditions, namely weakly absorbing and strongly scattering media, under which the proposed collection efficiency is suitable. Furthermore, we attempted to explain the stability of the collection efficiency under these conditions. The use of the collection efficiency can help reduce the influence of different measurement systems and is also helpful for model translation.

  8. Handling Neighbor Discovery and Rendezvous Consistency with Weighted Quorum-Based Approach

    PubMed Central

    Own, Chung-Ming; Meng, Zhaopeng; Liu, Kehan

    2015-01-01

    Neighbor discovery and sensor power play an important role in the formation of Wireless Sensor Networks (WSNs) and mobile networks. Many asynchronous protocols based on wake-up time scheduling have been proposed to enable neighbor discovery among neighboring nodes while saving energy, especially given the difficulty of clock synchronization. Existing neighbor-discovery methods fall into two groups: quorum-based protocols and co-primality-based protocols. They differ in how time slots are arranged: the former uses quorums in a matrix, whereas the latter relies on numerical analysis. In our study, we propose the weighted heuristic quorum system (WQS), which builds on the quorum algorithm to eliminate redundant paths of active slots. We demonstrate the properties of our system: fewer active slots are required, the reference rate is balanced, and remaining power is taken into account, particularly when a device maintains rendezvous with discovered neighbors. The evaluation results show that our proposed method can effectively reschedule the active slots and save computing time in the network system. PMID:26404297

  9. 76 FR 40969 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of a Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-12

    ... risk management strategies and decisions. Furthermore, the Exchange has had to eliminate option classes... additional short term option classes for investment, trading, and risk management purposes. Finally, the... that various strategies that the investor put into play were disrupted and eliminated when the class...

  10. [Application of thermosetting plastics to eliminate undercuts].

    PubMed

    Bielawski, T

    1989-01-01

    The author proposes to exploit the properties of thermosetting plastics used in other fields and to apply them in prosthetics in order to eliminate undercuts. Adding extra equipment to the claspograph, in the form of three-dimensional studs, makes it easier to block out undercuts and at the same time improves the quality of the work.

  11. 75 FR 41790 - Address Management Services-Elimination of the Manual Card Option for Address Sequencing Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-19

    ... Electronic Address Sequencing (EAS) service processes a customer's addresses file for walk sequence and/or... POSTAL SERVICE 39 CFR Part 111 Address Management Services--Elimination of the Manual Card Option for Address Sequencing Services AGENCY: Postal Service TM . ACTION: Proposed rule. SUMMARY: The Postal...

  12. 75 FR 1670 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-12

    ... NYSE Amex Equities Rule 18 (``Compensation in Relation to Exchange System Failure'') to eliminate the..., the Commission's Public Reference Room, the Commission's Web site at http://www.sec.gov , and the... (``Compensation in Relation to Exchange System Failure'') to eliminate the $500 minimum net loss requirement for a...

  13. 76 FR 27118 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-10

    ... elimination of this fee. Consequently, the Exchange proposes to eliminate from its NYSE Amex Equities Price... surveillance for position limit violations, manipulation, front-running, contrary exercise advice violations... doing so shares information and coordinates with other exchanges designed to detect the unlawful use of...

  14. Collision-induced dissociation of diazirine-labeled peptide ions. Evidence for Brønsted-acid assisted elimination of nitrogen.

    PubMed

    Marek, Aleš; Tureček, František

    2014-05-01

    Gas-phase dissociations were investigated for several peptide ions containing the Gly-Leu* N-terminal motif, where Leu* was a modified norleucine residue containing the photolabile diazirine ring. Collisional activation of gas-phase peptide cations resulted in facile N₂ elimination that competed with backbone dissociations. A free lysine ammonium group can act as a Brønsted acid to facilitate N₂ elimination. This dissociation was accompanied by insertion of a lysine proton into the side chain of the photoleucine residue, as established by deuterium labeling and gas-phase sequencing of the products. Electronic structure calculations were used to provide structures and energies of reactants, intermediates, and transition states for Gly-Leu*-Gly-Gly-Lys amide ions; these were combined with RRKM calculations of unimolecular rate constants. The calculations indicated that Brønsted acid-catalyzed eliminations were kinetically preferred over direct loss of N₂ from the diazirine ring. Mechanisms are proposed to explain the proton-initiated reactions and to account for the reaction products. The non-catalyzed diazirine ring cleavage and N₂ loss is proposed as a thermometer dissociation for peptide ion dissociations.

  15. Sensored Field Oriented Control of a Robust Induction Motor Drive Using a Novel Boundary Layer Fuzzy Controller

    PubMed Central

    Saghafinia, Ali; Ping, Hew Wooi; Uddin, Mohammad Nasir

    2013-01-01

    Physical sensors have a key role in implementation of real-time vector control for an induction motor (IM) drive. This paper presents a novel boundary layer fuzzy controller (NBLFC) based on the boundary layer approach for speed control of an indirect field-oriented control (IFOC) of an induction motor (IM) drive using physical sensors. The boundary layer approach leads to a trade-off between control performances and chattering elimination. For the NBLFC, a fuzzy system is used to adjust the boundary layer thickness to improve the tracking performance and eliminate the chattering problem under small uncertainties. Also, to eliminate the chattering under the possibility of large uncertainties, the integral filter is proposed inside the variable boundary layer. In addition, the stability of the system is analyzed through the Lyapunov stability theorem. The proposed NBLFC based IM drive is implemented in real-time using digital signal processor (DSP) board TI TMS320F28335. The experimental and simulation results show the effectiveness of the proposed NBLFC based IM drive at different operating conditions.

  16. A novel concentration and viability detection method for Brettanomyces using the Cellometer image cytometry.

    PubMed

    Martyniak, Brian; Bolton, Jason; Kuksin, Dmitry; Shahin, Suzanne M; Chan, Leo Li-Ying

    2017-01-01

    Brettanomyces spp. can present unique cell morphologies comprised of excessive pseudohyphae and budding, leading to difficulties in enumerating cells. The current cell counting methods include manual counting of methylene blue-stained yeasts or measuring optical densities using a spectrophotometer. However, manual counting can be time-consuming and has high operator-dependent variations due to subjectivity. Optical density measurement can also introduce uncertainties where instead of individual cells counted, an average of a cell population is measured. In contrast, by utilizing the fluorescence capability of an image cytometer to detect acridine orange and propidium iodide viability dyes, individual cell nuclei can be counted directly in the pseudohyphae chains, which can improve the accuracy and efficiency of cell counting, as well as eliminating the subjectivity from manual counting. In this work, two experiments were performed to demonstrate the capability of Cellometer image cytometer to monitor Brettanomyces concentrations, viabilities, and budding/pseudohyphae percentages. First, a yeast propagation experiment was conducted to optimize software counting parameters for monitoring the growth of Brettanomyces clausenii, Brettanomyces bruxellensis, and Brettanomyces lambicus, which showed increasing cell concentrations, and varying pseudohyphae percentages. The pseudohyphae formed during propagation were counted either as multiple nuclei or a single multi-nuclei organism, where the results of counting the yeast as a single multi-nuclei organism were directly compared to manual counting. Second, a yeast fermentation experiment was conducted to demonstrate that the proposed image cytometric analysis method can monitor the growth pattern of B. lambicus and B. clausenii during beer fermentation. The results from both experiments displayed different growth patterns, viability, and budding/pseudohyphae percentages for each Brettanomyces species. The proposed Cellometer image cytometry method can improve efficiency and eliminate operator-dependent variations of cell counting compared with the traditional methods, which can potentially improve the quality of beverage products employing Brettanomyces yeasts.

  17. Finite and spectral cell method for wave propagation in heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Duczek, Sascha; Gabbert, Ulrich; Düster, Alexander

    2014-09-01

    In the current paper we present a fast, reliable technique for simulating wave propagation in complex structures made of heterogeneous materials. The proposed approach, the spectral cell method, is a combination of the finite cell method and the spectral element method that significantly lowers preprocessing and computational expenditure. The spectral cell method takes advantage of explicit time-integration schemes coupled with a diagonal mass matrix to reduce the time spent solving the equation system. By employing a fictitious domain approach, this method also helps to eliminate some of the difficulties associated with mesh generation. Besides introducing a proper, specific mass lumping technique, we also study the performance of the low-order and high-order versions of this approach on several numerical examples. Our results show that, when combined with explicit time-integration algorithms, the high-order version of the spectral cell method requires less memory and less CPU time than other possible versions. Moreover, as the implementation of the proposed method in available finite element programs is straightforward, these properties turn the method into a viable tool for practical applications such as structural health monitoring [1-3], quantitative ultrasound applications [4], or the active control of vibrations and noise [5, 6].
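
    The computational advantage of a diagonal (lumped) mass matrix under explicit time integration can be sketched with a central-difference scheme, where each step needs only element-wise division instead of a linear solve (assembly of the stiffness matrix and the fictitious-domain treatment are omitted; this is an illustration, not the paper's implementation).

        import numpy as np

        def central_difference(M_diag, K, load, u0, v0, dt, n_steps):
            """M_diag: lumped mass (1-D); K: stiffness matrix; load(t): force vector."""
            u_prev = u0 - dt * v0              # start-up step
            u = u0.copy()
            history = [u0.copy()]
            for k in range(n_steps):
                accel = (load(k * dt) - K @ u) / M_diag   # no linear solve needed
                u_next = 2.0 * u - u_prev + dt**2 * accel
                u_prev, u = u, u_next
                history.append(u.copy())
            return np.array(history)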

  18. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization is proposed. First, single-threshold selection based on Arimoto entropy is extended to dual-threshold selection in order to separate the target from the background more accurately. Then, the intermediate variables in the Arimoto entropy dual-threshold formulae are calculated recursively to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm is improved with a chaotic sequence based on the tent map. The fast search for two optimal thresholds is achieved using the improved bee colony optimization algorithm, which noticeably accelerates the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation, and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments the target more quickly and accurately, with superior segmentation results. It proves to be a fast and effective method for image segmentation.
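
    A short sketch of the tent-map sequence used to drive the chaotic local search is given below (the slope is set slightly below 2 as a numerical safeguard against floating-point collapse, which is an implementation assumption; the Arimoto-entropy objective and the bee colony loop are omitted).

        import numpy as np

        def tent_map_sequence(x0=0.37, length=200, mu=1.9999):
            """Chaotic sequence in (0, 1); mu just below 2 avoids float collapse."""
            seq = np.empty(length)
            x = x0
            for i in range(length):
                x = mu * min(x, 1.0 - x)
                seq[i] = x
            return seq

        def chaotic_neighbour(threshold, lo, hi, chaos_value, radius=5):
            """Perturb a candidate threshold by a chaos-driven step (assumption)."""
            step = int(round((2.0 * chaos_value - 1.0) * radius))
            return int(np.clip(threshold + step, lo, hi))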

  19. Determination of metals in coal fly ashes using ultrasound-assisted digestion followed by inductively coupled plasma optical emission spectrometry.

    PubMed

    Pontes, Fernanda V M; Mendes, Bruna A de O; de Souza, Evelyn M F; Ferreira, Fernanda N; da Silva, Lílian I D; Carneiro, Manuel C; Monteiro, Maria I C; de Almeida, Marcelo D; Neto, Arnaldo A; Vaitsman, Delmo S

    2010-02-05

    A method for the determination of Co, Cr, Cu, Fe, Mn, Ni, Ti, V and Zn in coal fly ash samples using ultrasound-assisted digestion followed by inductively coupled plasma optical emission spectrometry (ICP-OES) is proposed. The digestion procedure consisted of sonication of the previously dried sample with hydrofluoric acid and aqua regia at 80 °C for 30 min, elimination of fluorides by heating to dryness for about 1 h, and dissolution of the residue with nitric acid solution. A classical digestion method, used as a comparative method, consisted of adding HCl, HNO3 and HF to 1 g of sample and heating on a hot plate to dryness for about 6 h. The proposed method presents several advantages: it requires smaller amounts of sample and reagents, and it is faster. It is also advantageous compared with published methods that use an ultrasound-assisted digestion procedure: it gives lower detection limits for Co, Cu, Ni, V and Zn, and it does not require shaking during digestion. The detection limits (µg g⁻¹) for Co, Cr, Cu, Fe, Mn, Ni, Ti, V and Zn were 0.06, 0.37, 1.0, 25, 0.93, 0.45, 4.0, 1.7 and 4.3, respectively. The results were in good agreement with those obtained by the classical method and with reference values. The exception was Cr, which presented low recoveries with both the classical and the proposed methods (83 and 87%, respectively). Also, the concentration of Cu obtained by the proposed method was significantly different from the reference value, in spite of the good recovery (91+/-1%). Copyright 2009 Elsevier B.V. All rights reserved.

  20. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed as polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acid content and the sum of the peak areas at 1745, 1715, and 1600 cm⁻¹ was established, with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interference from other cell wall components. The method was compared with an established spectrophotometric method and found equivalent in accuracy and repeatability (t-test, F-test). It is applicable to the analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive to the samples.
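
    The curve-fitting deconvolution step can be sketched as a three-band Gaussian fit around 1745, 1715, and 1600 cm⁻¹ whose areas are summed before the linear calibration against uronic acid content (the band shapes, initial guesses, and the use of scipy are assumptions; they are not taken from the paper).

        import numpy as np
        from scipy.optimize import curve_fit

        def three_gaussians(x, a1, c1, w1, a2, c2, w2, a3, c3, w3):
            g = lambda a, c, w: a * np.exp(-((x - c) ** 2) / (2.0 * w ** 2))
            return g(a1, c1, w1) + g(a2, c2, w2) + g(a3, c3, w3)

        def summed_band_area(wavenumbers, absorbance):
            p0 = [0.1, 1745, 10, 0.1, 1715, 10, 0.1, 1600, 15]   # initial guesses
            popt, _ = curve_fit(three_gaussians, wavenumbers, absorbance, p0=p0)
            # area of a Gaussian band: amplitude * width * sqrt(2*pi)
            return sum(popt[i] * popt[i + 2] * np.sqrt(2.0 * np.pi) for i in (0, 3, 6))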
