Science.gov

Sample records for algorithm significantly improves

  1. ARPANET Routing Algorithm Improvements

    DTIC Science & Technology

    1978-10-01

    J. M. McQuillan and E. C. Rosen (Report No. 3940). ...this problem may persist for a very long time, causing extremely bad performance throughout the whole network (for instance, if w' reports that one of... ...the algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion. These examples show...

  2. MO-FG-204-03: Using Edge-Preserving Algorithm for Significantly Improved Image-Domain Material Decomposition in Dual Energy CT

    SciTech Connect

    Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L

    2015-06-15

    Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images; for DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may increase the window size of the HYPR-LR filter, but this leads to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the boosted material decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show an edge effect, while no significant edge effect is seen for HYPR-NLM, suggesting spatial resolution is well preserved by HYPR-NLM.

  3. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-11-01

    To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonableness and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret extraction algorithm and circuit are illustrated using controlled-swap gates. The two merits of our algorithm are: (1) it is absolutely blind, and (2) when extracting secret binary qubits, it needs neither quantum measurement operations nor any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.

  4. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection Reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and the spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentration numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentration. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, to a level comparable to or even lower than that of Iter-DECT. For the HYPR-NLM method, there are only marginal edge effects in the difference image, suggesting that high-frequency details are well preserved. In addition, when the search window size increases from 11 x 11 to 19 x 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
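
    For orientation, the two building blocks named above can be sketched in a few lines: per-pixel direct matrix inversion for the two-material decomposition, and an off-the-shelf non-local means filter applied to the decomposed images. This is a hedged illustration of the ingredients, not the authors' HYPR-NLM implementation; the attenuation matrix A and the parameter values are assumptions.

        # Sketch: image-domain two-material decomposition for DECT plus
        # non-local-mean denoising (illustrative building blocks only).
        import numpy as np
        from skimage.restoration import denoise_nl_means

        def decompose(high_kev, low_kev, A):
            """Direct matrix inversion: solve A @ [f1, f2] = [low, high] per pixel.
            A holds the material attenuation coefficients at the two energies."""
            Ainv = np.linalg.inv(A)
            stack = np.stack([low_kev.ravel(), high_kev.ravel()])   # 2 x Npix
            f = Ainv @ stack
            return f[0].reshape(low_kev.shape), f[1].reshape(low_kev.shape)

        def nlm_smooth(img, patch=5, search=11, h=0.1):
            # Edge-preserving non-local mean filter; the search window size is
            # the parameter the abstract varies (11x11 vs 19x19).
            return denoise_nl_means(img, patch_size=patch,
                                    patch_distance=search // 2, h=h)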

  5. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in a digital image using the LSB algorithm is easily detected, with high accuracy, by chi-square and RS steganalysis. Starting from the selection of the embedding locations and a modification of the embedding method, and combining a sub-affine transformation with matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist chi-square and RS steganalysis effectively.
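
    As context for what is being improved, a minimal sketch of plain LSB embedding and extraction on a grayscale image follows; the sub-affine permutation and matrix-coding steps of the improved algorithm are not reproduced here.

        # Plain LSB steganography: the baseline the paper hardens against
        # chi-square/RS steganalysis.
        import numpy as np

        def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
            flat = cover.ravel().copy()
            assert bits.size <= flat.size, "message too long for cover"
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
            return flat.reshape(cover.shape)

        def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
            return stego.ravel()[:n_bits] & 1

        cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        msg = np.random.randint(0, 2, 100, dtype=np.uint8)
        stego = embed_lsb(cover, msg)
        assert np.array_equal(extract_lsb(stego, msg.size), msg)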

  6. Improved Chaff Solution Algorithm

    DTIC Science & Technology

    2009-03-01

    As part of the Technology Demonstration Program (TDP) on the integration of shipboard sensors and weapon systems (SISWS), an algorithm was developed to automatically determine...

  7. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighboring stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi'an Science and Technology Plan, China (Grant No. CXY1350(4)).

  8. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures require high bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where the requirements of public-key cryptography are prohibitive and it cannot be used. The use of elliptic curves in public-key computations has provided a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied for. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  9. The GALAD scoring algorithm based on AFP, AFP-L3, and DCP significantly improves detection of BCLC early stage hepatocellular carcinoma.

    PubMed

    Best, J; Bilgi, H; Heider, D; Schotten, C; Manka, P; Bedreli, S; Gorray, M; Ertle, J; van Grunsven, L A; Dechêne, A

    2016-12-01

    Background: Hepatocellular carcinoma (HCC) is one of the leading causes of death in cirrhotic patients worldwide. The detection rate for early stage HCC remains low despite screening programs. Thus, the majority of HCC cases are detected at advanced tumor stages with limited treatment options. To facilitate earlier diagnosis, this study aims to validate the added benefit of the combination of AFP, the novel biomarkers AFP-L3 and DCP, and an associated novel diagnostic algorithm called GALAD. Material and methods: Between 2007 and 2008 and from 2010 to 2012, 285 patients newly diagnosed with HCC and 402 control patients suffering from chronic liver disease were enrolled. AFP, AFP-L3, and DCP were measured using the µTASWako i30 automated immunoanalyzer. The diagnostic performance of the biomarkers was measured as single parameters and in a logistic regression model. Furthermore, a diagnostic algorithm (GALAD) based on gender, age, and the biomarkers mentioned above was validated. Results: AFP, AFP-L3, and DCP showed comparable sensitivities and specificities for HCC detection. The combination of all biomarkers had the highest sensitivity with decreased specificity. In contrast, utilization of the biomarker-based GALAD score resulted in a superior specificity of 93.3 % and sensitivity of 85.6 %. In the scenario of BCLC 0/A stage HCC, the GALAD algorithm provided the highest overall AUROC of 0.9242, which was superior to any other marker combination. Conclusions: We could demonstrate in our cohort the superior detection of early stage HCC with the combined use of the respective biomarkers, and in particular GALAD, even in AFP-negative tumors.
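
    In form, the GALAD score is a logistic model over gender, age, and the three biomarkers. The sketch below shows that form only; the coefficients are illustrative placeholders, not the validated parameters from the original GALAD publication.

        # GALAD-style score: logistic model over gender, age, AFP, AFP-L3%, DCP.
        # Coefficients are placeholders for illustration, not the published ones.
        import math

        def galad_style_score(age, male, afp, afp_l3_pct, dcp,
                              b0=-10.0, b_age=0.09, b_sex=1.6,
                              b_afp=2.3, b_l3=0.04, b_dcp=1.3):
            z = (b0 + b_age * age + b_sex * (1 if male else 0)
                 + b_afp * math.log10(afp) + b_l3 * afp_l3_pct
                 + b_dcp * math.log10(dcp))
            return 1.0 / (1.0 + math.exp(-z))   # probability-like HCC risk score

        print(round(galad_style_score(62, True, afp=8.0,
                                      afp_l3_pct=12.0, dcp=40.0), 3))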

  10. Improved Heat-Stress Algorithm

    NASA Technical Reports Server (NTRS)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

    NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, the net heat load to which a worker may be exposed, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure the wet-bulb, dry-bulb, and black-globe temperatures. With these improvements, a more realistic WBGT estimate can now be produced. This is extremely useful for researchers and other employees working on outdoor projects distant from the areas that the existing system monitors. Most importantly, the improved WBGT estimates will make outdoor work sites safer by reducing the likelihood of heat stress.
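
    The quantity being approximated is the standard outdoor WBGT, a fixed weighting of the three thermometer readings:

        # Outdoor wet-bulb globe temperature from the three thermometer
        # readings (standard 0.7/0.2/0.1 weights).
        def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
            return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

        print(wbgt_outdoor(25.0, 45.0, 35.0))   # 30.0 (deg C)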

  11. Improved Global Ocean Color Using Polymer Algorithm

    NASA Astrophysics Data System (ADS)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data, and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications such as fisheries and the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  12. Improved LMS algorithm for adaptive beamforming

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    Two adaptive algorithms which make use of all the available samples to estimate the required gradient are proposed and studied. The first algorithm is referred to as the recursive LMS (least mean squares) and is applicable to a general array. The second algorithm is referred to as the improved LMS algorithm and exploits the Toeplitz structure of the ACM (array correlation matrix); it can be used only for an equispaced linear array.
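
    For reference, the baseline both proposed algorithms refine is the standard LMS weight update, which estimates the gradient from a single snapshot; a minimal complex-valued sketch (the toy reference signal is our assumption):

        # Standard complex LMS weight update for an adaptive array.
        import numpy as np

        def lms_step(w, x, d, mu=0.01):
            """One LMS iteration: x = array snapshot, d = reference signal."""
            y = np.vdot(w, x)                 # array output  w^H x
            e = d - y                         # estimation error
            return w + mu * np.conj(e) * x    # stochastic-gradient update

        rng = np.random.default_rng(0)
        w = np.zeros(4, dtype=complex)
        for _ in range(1000):
            x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
            d = x[0]                          # toy reference: element 0's signal
            w = lms_step(w, x, d)
        print(np.round(w, 2))                 # converges toward [1, 0, 0, 0]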

  13. Improved algorithm for hyperspectral data dimension determination

    NASA Astrophysics Data System (ADS)

    CHEN, Jie; DU, Lei; LI, Jing; HAN, Yachao; GAO, Zihong

    2017-02-01

    The correlation between adjacent bands of hyperspectral image data is relatively strong, but signal coexists with noise. The HySime (hyperspectral signal identification by minimum error) algorithm, which is based on the principle of least squares, estimates the noise and the signal correlation matrix. The algorithm is effective when the noise estimate is accurate, but ineffective when the noise estimate is obtained from the spectral dimension-reduction and de-correlation process. This paper proposes an improved HySime algorithm based on a noise-whitening process: instead of removing noise pixel by pixel, it first whitens the noise in the original data, obtains an accurate estimate of the noise covariance matrix, and then uses the HySime algorithm to calculate the signal correlation matrix, improving the precision of the results. Experiments with both simulated and real data show that: firstly, the improved HySime algorithm is more accurate and stable than the original HySime algorithm; secondly, its results are more consistent under different conditions than those of the classic noise subspace projection (NSP) algorithm; finally, the noise-whitening process improves the adaptability to non-white image noise.

  14. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because it is based on the gradient descent algorithm. Previous research demonstrated that in the feed-forward pass, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function; the gain values change adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multilayer feed-forward neural networks are assessed. A physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster: an improvement ratio of nearly 2.8 on the Wisconsin breast cancer data, 1.76 on the diabetes problem, 65% better on the thyroid data sets, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
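
    The mechanism is easy to see in code: scaling a sigmoid by a per-node gain c scales its slope, and hence the backpropagated delta, acting like a per-node learning rate. A minimal sketch (the paper's adaptive update rule itself is not reproduced):

        # Gain-scaled sigmoid and its derivative: the slope, and therefore the
        # backprop delta, grows linearly with the gain.
        import numpy as np

        def sigmoid(x, gain=1.0):
            return 1.0 / (1.0 + np.exp(-gain * x))

        def sigmoid_grad(x, gain=1.0):
            s = sigmoid(x, gain)
            return gain * s * (1.0 - s)

        # Doubling the gain doubles the local slope at the origin:
        print(sigmoid_grad(0.0, gain=1.0), sigmoid_grad(0.0, gain=2.0))  # 0.25 0.5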

  15. Improved wavefront reconstruction algorithm from slope measurements

    NASA Astrophysics Data System (ADS)

    Phuc, Phan Huy; Manh, Nguyen The; Rhee, Hyug-Gyo; Ghim, Young-Sik; Yang, Ho-Soon; Lee, Yun-Woo

    2017-03-01

    In this paper, we propose a wavefront reconstruction algorithm from slope measurements based on a zonal method. The slope measurement sampling geometry used is the Southwell geometry, in which the phase values and the slope data are measured at the same nodes. The proposed algorithm estimates the phase value at a node using the slope measurements of the eight points around the node, which is expected to give better wavefront accuracy. To optimize the processing time, a successive over-relaxation method is applied to the iteration loops, and a trial-and-error method is used to determine the best relaxation factor for each type of wavefront. For a circularly symmetric wavefront in particular, the convergence rate of the algorithm can be improved by using the result of a Fourier transform as the initial value for the iteration. Various simulations are presented to demonstrate the improvements realized with the proposed algorithm, and several experimental deflectometry measurements are also processed with it.
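
    A minimal sketch of zonal reconstruction by successive over-relaxation in the Southwell geometry follows, using a four-neighbour stencil for brevity; the paper's algorithm uses the eight surrounding points and, for circularly symmetric wavefronts, a Fourier-transform result as the initial guess.

        # Southwell-geometry zonal wavefront reconstruction by SOR.
        # sx, sy: measured x/y slopes on the node grid; h: node spacing.
        import numpy as np

        def sor_reconstruct(sx, sy, h=1.0, omega=1.8, iters=500):
            n, m = sx.shape
            phi = np.zeros((n, m))
            for _ in range(iters):
                for i in range(n):
                    for j in range(m):
                        est, cnt = 0.0, 0
                        if i > 0:
                            est += phi[i-1, j] + h * (sy[i-1, j] + sy[i, j]) / 2; cnt += 1
                        if i < n - 1:
                            est += phi[i+1, j] - h * (sy[i+1, j] + sy[i, j]) / 2; cnt += 1
                        if j > 0:
                            est += phi[i, j-1] + h * (sx[i, j-1] + sx[i, j]) / 2; cnt += 1
                        if j < m - 1:
                            est += phi[i, j+1] - h * (sx[i, j+1] + sx[i, j]) / 2; cnt += 1
                        phi[i, j] += omega * (est / cnt - phi[i, j])   # SOR update
                phi -= phi.mean()      # piston is unobservable; remove it
            return phi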

  16. Discovering sequence similarity by the algorithmic significance method

    SciTech Connect

    Milosavljevic, A.

    1993-02-01

    The minimal-length encoding approach is applied to define a concept of sequence similarity. A sequence is defined to be similar to another sequence or to a set of keywords if it can be encoded in a small number of bits by taking advantage of common subwords. The minimal-length encoding of a sequence is computed in linear time, using a data compression algorithm based on a dynamic programming strategy and the directed acyclic word graph data structure. No assumptions about common word ("k-tuple") length are made in advance, and common words of any length are considered. The newly proposed algorithmic significance method provides an exact upper bound on the probability that sequence similarity has occurred by chance, thus eliminating the need for any arbitrary choice of similarity thresholds. Preliminary experiments indicate that a small number of keywords can positively identify a DNA sequence, which is extremely relevant in the context of partial sequencing by hybridization.
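
    The core of the algorithmic significance test is a counting argument: an encoding that saves d bits relative to the null model can arise by chance with probability at most 2^-d. A minimal sketch of that bound, with zlib as a stand-in for the paper's dynamic-programming/DAWG encoder (the helper names are ours):

        # Algorithmic significance: p <= 2**-(bits saved), zlib as a proxy encoder.
        import zlib

        def bits(data: bytes) -> int:
            return 8 * len(zlib.compress(data, 9))

        def significance(seq: bytes, reference: bytes) -> float:
            plain = bits(seq)
            helped = bits(reference + seq) - bits(reference)  # conditional-length proxy
            d = plain - helped                                # bits saved
            return 2.0 ** (-d) if d > 0 else 1.0              # p-value upper bound

        seq = b"ACGTACGTACGTGGCA" * 4
        print(significance(seq, reference=b"ACGTACGT" * 8))   # small p => similar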

  17. Improving sparse representation algorithms for maritime video processing

    NASA Astrophysics Data System (ADS)

    Smith, L. N.; Nichols, J. M.; Waterman, J. R.; Olson, C. C.; Judd, K. P.

    2012-06-01

    We present several improvements to published algorithms for sparse image modeling, with the goal of improving the processing of imagery of small watercraft in littoral environments. The first improvement is to the K-SVD algorithm for training the over-complete dictionaries used in sparse representations. It is shown that training converges significantly faster when multiple dictionary (i.e., codebook) update stages are incorporated in each training iteration. The paper also provides several useful and practical lessons learned from our experience with sparse representations. Results are presented for three applications of sparse representation and compared to state-of-the-art methods: image compression, image denoising, and super-resolution.

  18. Improving Search Algorithms by Using Intelligent Coordinates

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent eta is self-interested; it sets its variable to maximize its own function g_eta. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  19. Unsteady transonic algorithm improvements for realistic aircraft applications

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1987-01-01

    Improvements to a time-accurate approximate factorization (AF) algorithm were implemented for steady and unsteady transonic analysis of realistic aircraft configurations. These algorithm improvements were made to the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code developed at the Langley Research Center. The code permits the aeroelastic analysis of complete aircraft in the flutter critical transonic speed range. The AF algorithm of the CAP-TSD code solves the unsteady transonic small-disturbance equation. The algorithm improvements include: an Engquist-Osher (E-O) type-dependent switch to more accurately and efficiently treat regions of supersonic flow; extension of the E-O switch for second-order spatial accuracy in these regions; nonreflecting far field boundary conditions for more accurate unsteady applications; and several modifications which accelerate convergence to steady-state. Calculations are presented for several configurations including the General Dynamics one-ninth scale F-16C aircraft model to evaluate the algorithm modifications. The modifications have significantly improved the stability of the AF algorithm and hence the reliability of the CAP-TSD code in general.

  1. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.
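
    The diffusion step can be illustrated with a random-walk-with-restart kernel on the interaction graph: heat placed on a mutated gene spreads to its neighbourhood, and the resulting influence matrix is thresholded to extract candidate subnetworks. A sketch under those assumptions (not the paper's exact kernel or FDR machinery):

        # Influence of each gene on its network neighbourhood via a
        # random-walk-with-restart diffusion kernel.
        import numpy as np

        def influence_matrix(adj: np.ndarray, beta: float = 0.3) -> np.ndarray:
            """adj: adjacency matrix; beta: restart probability of the walk."""
            deg = adj.sum(axis=0)
            W = adj / np.where(deg > 0, deg, 1)        # column-normalised walk matrix
            n = adj.shape[0]
            return beta * np.linalg.inv(np.eye(n) - (1 - beta) * W)

        # influence_matrix(A)[i, j] measures how much heat placed on gene j
        # reaches gene i; thresholding it yields candidate subnetworks.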

  2. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the crack geometry characteristics, we use a skeleton-extraction method for single-crack length. To calculate the crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experiments show that the errors between the crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
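
    For reference, the eight-direction Sobel operator applies eight rotated 3x3 kernels and keeps the maximum response per pixel; a minimal sketch (the paper's optimization of the operator is not reproduced):

        # Eight-direction Sobel: rotate the 0-degree and 45-degree kernels
        # through 90-degree steps and keep the maximum response per pixel.
        import numpy as np
        from scipy.ndimage import convolve

        K0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)    # 0 deg
        K45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float)   # 45 deg

        def eight_direction_sobel(img):
            kernels, k0, k45 = [], K0, K45
            for _ in range(4):                 # 0/45, 90/135, 180/225, 270/315
                kernels += [k0, k45]
                k0, k45 = np.rot90(k0), np.rot90(k45)
            responses = [np.abs(convolve(img.astype(float), k)) for k in kernels]
            return np.max(responses, axis=0)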

  3. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow-pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow-pulse laser ranging algorithm based on high-speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, including the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is developed that combines the matched filter algorithm with the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm; it includes a laser diode, a laser detector and a high-sample-rate data logging circuit. Subsequently, the improved algorithm, a fusion of the matched filter and CFD algorithms, is implemented in an FPGA chip using the Verilog HDL language. Finally, a laser ranging experiment is carried out on the hardware system to test the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm. The test results demonstrate that the hardware system realizes high-speed processing and high-speed sampling data transmission, and that the improved algorithm achieves 0.3 m ranging precision, meeting the expected effect and consistent with the theoretical simulation.
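
    The fusion of the two estimators can be sketched briefly: matched-filter the sampled return against the pulse template, then time the arrival with a constant-fraction threshold on the filtered waveform (a simplified stand-in for the delay-and-subtract CFD; the sampling rate and pulse shape below are assumptions).

        # Matched filter + constant-fraction timing on a sampled laser return.
        import numpy as np

        def arrival_index(samples, template, fraction=0.5):
            mf = np.correlate(samples, template, mode="same")   # matched filter
            peak = mf.max()
            # simplified CFD: first crossing of fraction*peak on the leading edge
            return np.nonzero(mf >= fraction * peak)[0][0]

        fs = 1e9                                   # 1 GS/s sampling (illustrative)
        t = np.arange(40)
        template = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)   # Gaussian-ish pulse
        echo = np.zeros(200)
        echo[100:140] += template
        echo += 0.05 * np.random.default_rng(1).standard_normal(200)
        idx = arrival_index(echo, template)
        print("range =", idx / fs * 3e8 / 2, "m")  # time-of-flight to distance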

  4. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert E.

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Sounding." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  5. HALOE Algorithm Improvements for Upper Tropospheric Soundings

    NASA Technical Reports Server (NTRS)

    Thompson, Robert E.; Douglass, Anne (Technical Monitor)

    2000-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Sounding." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  6. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert Earl; McHugh, Martin J.; Gordley, Larry L.; Hervig, Mark E.; Russell, James M., III; Douglass, Anne (Technical Monitor)

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth Upper Atmospheric Research Satellite (UARS) Science Investigator Program entitled 'HALOE Algorithm Improvements for Upper Tropospheric Sounding.' The goal of this effort is to develop and implement major inversion and processing improvements that will extend Halogen Occultation Experiment (HALOE) measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  7. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    McHugh, Martin J.; Gordley, Larry L.; Russell, James M., III; Hervig, Mark E.

    1999-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Soundings." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first-year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multi-channel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  8. Polynomial Local Improvement Algorithms in Combinatorial Optimization.

    DTIC Science & Technology

    1981-11-01

    Report SOL 81-21, Polynomial Local Improvement Algorithms in Combinatorial Optimization (Technical Report), Stanford, CA 94305; controlling office: Office of Naval Research, Dept. of the Navy, 800 N. Quincy Street; November 1981. ...corresponds to a node of the tree. (ii) The father of a vertex is its optimal adjacent vertex; if a vertex is a local optimum, it has no father. The tree is...

  9. An improved genetic algorithm with dynamic topology

    NASA Astrophysics Data System (ADS)

    Cai, Kai-Quan; Tang, Yan-Wu; Zhang, Xue-Jun; Guan, Xiang-Min

    2016-12-01

    The genetic algorithm (GA) is a nature-inspired evolutionary algorithm that finds optima in a search space via the interaction of individuals. Recently, researchers demonstrated that the interaction topology plays an important role in information exchange among individuals of an evolutionary algorithm. In this paper, we investigate the effect of different network topologies adopted to represent the interaction structures. It is found that a GA with a high-density topology is more likely to end up with an unsatisfactory solution, while a low-density topology can impede convergence. Consequently, we propose an improved GA with dynamic topology, named DT-GA, in which the topology structure varies dynamically along with the fitness evolution. Several experiments executed with 15 well-known test functions have illustrated that DT-GA outperforms the other GAs tested by balancing convergence speed and optimum quality. Our work may have implications for the combination of complex networks and computational intelligence. Project supported by the National Natural Science Foundation for Young Scientists of China (Grant No. 61401011), the National Key Technologies R & D Program of China (Grant No. 2015BAG15B01), and the National Natural Science Foundation of China (Grant No. U1533119).

  10. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    The new version improves on the previous algorithms by selecting the ranges of the argument omega in which each performs fastest. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible, since frequent calls are made to the subroutine providing it (e.g., in the numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied by the previously published algorithm [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time: by suitable selection of the number of abscissas in the Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20 without affecting the accuracy of the results. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function H(x, omega) were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature. It has been found, however, that the use of Gauss-Legendre quadrature considerably improves the performance of both algorithms, provided two conditions are satisfied: (i) the number of abscissas, N, is rather large, and (ii) the abscissas and corresponding weights are determined with as high an accuracy as possible. The abscissas and weights are available for N = 16, 20, 24, 32, 40, 48, 64, 80, and 96 with an accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing the procedure ROMBERG. ...
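
    A minimal sketch of the quadrature idea: solve Chandrasekhar's exact nonlinear relation 1/H(mu) = sqrt(1 - omega) + (omega/2) * Integral_0^1 mu' H(mu') / (mu + mu') dmu' by fixed-point iteration on Gauss-Legendre nodes. This illustrates the Gauss-Legendre substitution discussed above, not the published code:

        # Chandrasekhar H function by fixed-point iteration on
        # Gauss-Legendre nodes mapped to [0, 1].
        import numpy as np

        def chandrasekhar_H(x, omega, n=64, iters=200):
            nodes, weights = np.polynomial.legendre.leggauss(n)
            mu = 0.5 * (nodes + 1.0)          # map [-1, 1] -> [0, 1]
            w = 0.5 * weights
            H = np.ones(n)
            for _ in range(iters):            # iterate H on the quadrature nodes
                integral = ((w * mu * H)[None, :]
                            / (mu[:, None] + mu[None, :])).sum(axis=1)
                H = 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integral)
            integral_x = np.sum(w * mu * H / (x + mu))
            return 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integral_x)

        print(chandrasekhar_H(0.5, 0.9))      # H at x = 0.5, albedo omega = 0.9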

  11. Improving Algorithm for Automatic Spectra Processing

    NASA Astrophysics Data System (ADS)

    Rackovic, K.; Nikolic, S.; Kotrc, P.

    2009-09-01

    The computer program for automatic processing (flat-fielding) of a great number of solar spectra obtained with the horizontal heliospectrograph HSFA2 has been tested and improved. This program was developed at the Astronomical Institute of the Academy of Sciences of the Czech Republic in Ondřejov. An irregularity in its operation was discovered: the program did not work for some of the spectra. To find the cause of this error, an algorithm was developed and a program written to examine the parallelism of the reference hairs crossing the spectral slit on records of solar spectra. Standard methods of data processing were applied: calculating and analyzing higher-order moments of the distribution of radiation intensity. Spectra with disturbed parallelism of the reference hairs were eliminated from further processing. To improve this algorithm for smoothing spectra, isolation and removal of the harmonic produced by a sunspot, using multiple elementary transformations of ordinates (Labrouste's transformations), are planned. This project was accomplished at the first summer astronomy practice of students of the Faculty of Mathematics, University of Belgrade, Serbia, in 2007 in Ondřejov.

  12. Improved Algorithms Speed It Up for Codes

    SciTech Connect

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  13. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.

  14. High-speed scanning: an improved algorithm

    NASA Astrophysics Data System (ADS)

    Nachimuthu, A.; Hoang, Khoi

    1995-10-01

    In using machine vision to assess an object's surface quality, many images must be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process, in the inspection of garments/canvas on the production line, and in the nesting of irregular shapes into a given surface. The most common method, subtracting the sum of defective areas from the total area, does not give an acceptable indication of how much of the 'good' area can be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of useable areas within an inspected surface in terms of the user's definition, not the supplier's claims; that is, how much area the user can actually use, not the total good area as the supplier estimates it. An important application of the developed technique is in the leather industry, where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument over disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading and re-sorting. The basic algorithm developed for area scanning of a digital image is presented, and the implementation of an improved scanning algorithm is discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and accurate estimation of useable areas.

  15. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction approaches, and to design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
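
    For reference, the Voigt profile is the convolution of a Gaussian core (sigma) with Lorentzian wings (gamma) and is conveniently evaluated via the Faddeeva function; a minimal sketch with scalar sigma and gamma (the actual extraction uses per-detector masks of these parameters):

        # Voigt profile via the Faddeeva function wofz.
        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

        x = np.linspace(-5, 5, 11)
        print(np.round(voigt(x, sigma=1.0, gamma=0.5), 4))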

  16. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period from September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5, which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper describes these advances in the AIRS Version-6 retrieval algorithm and demonstrates the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  17. RESEARCH NOTE An improved leap-frog rotational algorithm

    NASA Astrophysics Data System (ADS)

    Svanberg, Marcus

    A new implicit leap-frog algorithm for the integration of rigid body rotational motion is presented. Orientations are represented by quaternions and the algorithm is compared with three existing leap-frog integrators by solving the classical equations of motion for a (H2O)20 cluster. We find that the present scheme exhibits superior energy conservation properties, especially for integration times of about 10 ps or longer. Contrary to previous algorithms, the present one behaves as a true Verlet integrator, where the degree of energy conservation is independent of the length of the trajectory. The method is similar to the implicit scheme proposed by D. Fincham (1992, Molec. Simulation, 8, 165), with the difference that self-consistent quaternions, as well as their time derivatives, are obtained by iteration at the mid-timestep instead of after the complete timestep. A slight modification of either the explicit or the implicit leap-frog rotational algorithm in existing molecular dynamics programs may thus lead to significant improvements in energy conservation, as long as this property is not dominated by other error sources such as potential truncation. It is demonstrated that the present algorithm can be used with timesteps as large as 4 fs in water simulations and still produce stable trajectories of 10 ns duration.
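
    The mid-timestep self-consistency can be sketched directly: iterate the mid-step quaternion until it satisfies q(t+h/2) = q(t) + (h/2) qdot(q(t+h/2)), with qdot = (1/2) q (0, omega). Torques and the angular-velocity update are omitted, and omega is held fixed over the step, so this illustrates the iteration scheme only:

        # Implicit leap-frog quaternion step with self-consistent mid-step iteration.
        import numpy as np

        def qmul(a, b):                        # Hamilton product, scalar-first
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def leapfrog_quat_step(q, omega_body, h, tol=1e-12):
            wq = np.array([0.0, *omega_body])  # angular velocity as pure quaternion
            q_mid = q.copy()
            for _ in range(50):                # self-consistent mid-step iteration
                q_new = q + 0.5 * h * 0.5 * qmul(q_mid, wq)
                if np.max(np.abs(q_new - q_mid)) < tol:
                    break
                q_mid = q_new
            q_next = q + h * 0.5 * qmul(q_mid, wq)    # full step with mid-step slope
            return q_next / np.linalg.norm(q_next)    # renormalise

        q = np.array([1.0, 0.0, 0.0, 0.0])
        print(leapfrog_quat_step(q, omega_body=[0.0, 0.0, 1.0], h=0.001))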

  18. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), and adopts several additional improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm and gives full play to the advantages of each. The method is verified on the standard Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the calculated protein sequence energy value, proving it an effective way to predict the structure of proteins.

  19. Improved CHAID algorithm for document structure modelling

    NASA Astrophysics Data System (ADS)

    Belaïd, A.; Moinel, T.; Rangoni, Y.

    2010-01-01

    This paper proposes a technique for the logical labelling of document images. It makes use of a decision-tree based approach to learn and then recognise the logical elements of a page. A state-of-the-art OCR gives the physical features needed by the system. Each block of text is extracted during the layout analysis and raw physical features are collected and stored in the ALTO format. The data-mining method employed here is the "Improved CHi-squared Automatic Interaction Detection" (I-CHAID). The contribution of this work is the insertion of logical rules extracted from the logical layout knowledge to support the decision tree. Two setups have been tested; the first uses one tree per logical element, the second one uses a single tree for all the logical elements we want to recognise. The main system, implemented in Java, coordinates the third-party tools (Omnipage for the OCR part, and SIPINA for the I-CHAID algorithm) using XML and XSL transforms. It was tested on around 1000 documents belonging to the ICPR'04 and ICPR'08 conference proceedings, representing about 16,000 blocks. The final error rate for determining the logical labels (among 9 different ones) is less than 6%.

  20. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    DTIC Science & Technology

    2015-09-30

    Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS. ...a wide range of scales through use of accurate numerical methods and high-performance computational algorithms. The tool will be applied to study ...dissipation. OBJECTIVES: The primary objective is to enhance the capabilities of the SUNTANS model through development of algorithms to study...

  1. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbor (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
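
    The two stages can be sketched on the CPU: bucket the data points into an even grid, search outward ring by ring for the k nearest neighbours, then interpolate with a distance-adaptive power. The adaptive rule below is an illustrative placeholder, not the paper's exact formula, and the ring search is simplified (an exact kNN would examine one extra ring):

        # Grid-bucketed kNN search followed by adaptive IDW interpolation.
        import numpy as np

        def grid_knn(points, query, k, cell=1.0):
            keys = np.floor(points / cell).astype(int)
            buckets = {}
            for idx, key in enumerate(map(tuple, keys)):
                buckets.setdefault(key, []).append(idx)
            qk = tuple(np.floor(query / cell).astype(int))
            cand, r = [], 0
            while len(cand) < k:                       # expand ring by ring
                for dx in range(-r, r + 1):
                    for dy in range(-r, r + 1):
                        if max(abs(dx), abs(dy)) == r: # new ring cells only
                            cand += buckets.get((qk[0] + dx, qk[1] + dy), [])
                r += 1
            d = np.linalg.norm(points[cand] - query, axis=1)
            order = np.argsort(d)[:k]
            return np.array(cand)[order], d[order]

        def aidw(points, values, query, k=8):
            idx, d = grid_knn(points, query, k)
            power = 2.0 + np.mean(d)          # illustrative adaptive power rule
            w = 1.0 / np.maximum(d, 1e-12) ** power
            return np.sum(w * values[idx]) / np.sum(w)

        pts = np.random.default_rng(2).uniform(0, 10, (200, 2))
        vals = pts[:, 0] + pts[:, 1]          # toy field z = x + y
        print(aidw(pts, vals, query=np.array([5.0, 5.0])))   # ~10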

  2. Polarization image fusion algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Zhang, Siyuan; Yuan, Yan; Su, Lijuan; Hu, Liang; Liu, Hui

    2013-12-01

    The polarization detection technique provides polarization information about objects that conventional detection techniques are unable to obtain. In order to fully utilize the obtained polarization information, various polarization image fusion algorithms have been developed. In this research, we propose a polarization image fusion algorithm based on an improved pulse coupled neural network (PCNN). The improved PCNN algorithm uses polarization parameter images to generate a fused polarization image with object details for polarization information analysis, and uses the matching degree M as the fusion rule. The improved PCNN fused image is compared with fused images based on the Laplacian pyramid (LP) algorithm, the wavelet algorithm and the original PCNN algorithm. Several performance indicators are introduced to evaluate the fused images. The comparison shows that the presented algorithm yields images of much higher quality and preserves more detail information about the objects.

  3. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    SciTech Connect

    Enghauser, Michael

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  4. Improved ant colony algorithm and its simulation study

    NASA Astrophysics Data System (ADS)

    Wang, Zongjiang

    2013-03-01

    The ant colony algorithm is a new heuristic algorithm developed by simulating ant foraging. To address its slow convergence rate and its tendency to fall into local optimal solutions, this paper proposes adjustments to the key parameters and an improved pheromone update scheme. Experiments on the TSP show that the improved algorithm has better overall search capability, demonstrating the feasibility and effectiveness of this method.
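
    For readers unfamiliar with the baseline being improved, the sketch below shows a generic ant colony TSP loop: probabilistic tour construction followed by pheromone evaporation and deposit. The parameter values and the textbook update rule are illustrative; the paper's specific parameter adjustments and modified update are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20                                   # cities
    xy = rng.random((n, 2))
    dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=2) + np.eye(n)
    tau = np.ones((n, n))                    # pheromone matrix
    alpha, beta, rho, n_ants = 1.0, 3.0, 0.5, 20

    best_len, best_tour = np.inf, None
    for _ in range(100):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                p = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
                tour.append(rng.choice(cand, p=p / p.sum()))
                unvisited.remove(tour[-1])
            tours.append(tour)
        tau *= (1.0 - rho)                   # evaporation
        for tour in tours:
            L = sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))
            if L < best_len:
                best_len, best_tour = L, tour
            for i in range(n):               # deposit along the tour
                a, b = tour[i], tour[(i + 1) % n]
                tau[a, b] += 1.0 / L
                tau[b, a] += 1.0 / L
    print(round(best_len, 3))
    ```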

  5. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations, for example computational fluid dynamics runs, often consume the most CPU time. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.

  6. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, optical system errors and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. Firstly, the image noise is weakened by a Contourlet denoising algorithm. Secondly, a reliable object support region estimation is used to accelerate convergence; we introduce an optimal threshold segmentation technique to improve the object support region. Finally, an object construction limit and a logarithm function are added to enhance algorithm stability. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.

  7. An improved Apriori algorithm for mining association rules

    NASA Astrophysics Data System (ADS)

    Yuan, Xiuli

    2017-03-01

    Among mining algorithms based on association rules, the Apriori technique, which mines frequent itemsets and interesting associations in a transaction database, is not only the first association rule mining technique used but also the most popular one. Study shows that traditional Apriori algorithms have two major bottlenecks: scanning the database frequently, and generating a large number of candidate sets. Based on these inherent defects of the Apriori algorithm, several improvements are made: 1) a new database mapping is used to avoid scanning the database repeatedly; 2) frequent itemsets and candidate itemsets are further pruned to improve joining efficiency; 3) an overlap strategy is used to count support efficiently. Under the same conditions, the results illustrate that the proposed improved Apriori algorithm improves operating efficiency compared with other improved algorithms.
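
    As context for the listed improvements, here is a compact sketch of the classic level-wise Apriori loop, whose repeated support-counting scans over the transaction list are exactly the bottleneck the paper targets. The toy dataset is illustrative.

    ```python
    from itertools import combinations

    def apriori(transactions, min_support):
        """Classic Apriori: level-wise candidate generation with pruning."""
        transactions = [frozenset(t) for t in transactions]
        n = len(transactions)
        support = lambda s: sum(s <= t for t in transactions) / n  # full scan
        items = {i for t in transactions for i in t}
        level = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
        frequent, k = list(level), 2
        while level:
            # Join step: union pairs of (k-1)-itemsets into k-candidates.
            cands = {a | b for a, b in combinations(level, 2) if len(a | b) == k}
            # Prune step: every (k-1)-subset of a candidate must be frequent.
            cands = {c for c in cands
                     if all(frozenset(s) in set(level) for s in combinations(c, k - 1))}
            level = [c for c in cands if support(c) >= min_support]
            frequent += level
            k += 1
        return frequent

    tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    print(sorted(map(sorted, apriori(tx, 0.6))))
    ```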

  8. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels or with low SNR, the traditional restoration algorithm lacks validity and induces noise amplification, ringing artifacts and poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome the shortcomings of the traditional algorithm. The improved algorithm introduces a new cost function which adds a space-adaptive regularization term and a non-unity gain for the adaptive filter. In determining the support region, a pre-segmentation is used to form it closely around the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm achieves better convergence and noise resistance and provides a better estimate of the original image.

  9. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS.

    PubMed

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-10-05

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and the gravity component in the horizontal plane is zero, we exploit these properties to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality-reduction Gauss-Hermite filter are employed to establish a fine horizontal reference frame. Based on this, the projection of gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment is accomplished through an inertial frame alignment algorithm. Simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm and meets the accuracy requirements of a medium-accuracy marine SINS.

  10. Reconstruction algorithm for improved ultrasound image quality.

    PubMed

    Madore, Bruno; Meral, F Can

    2012-02-01

    A new algorithm is proposed for reconstructing raw RF data into ultrasound images. Previous delay-and-sum beamforming reconstruction algorithms are essentially one-dimensional, because a sum is performed across all receiving elements. In contrast, the present approach is two-dimensional, potentially allowing any time point from any receiving element to contribute to any pixel location. Computer-intensive matrix inversions are performed once, in advance, to create a reconstruction matrix that can be reused indefinitely for a given probe and imaging geometry. Individual images are generated through a single matrix multiplication with the raw RF data, without any need for separate envelope detection or gridding steps. Raw RF data sets were acquired using a commercially available digital ultrasound engine for three imaging geometries: a 64-element array with a rectangular field-of-view (FOV), the same probe with a sector-shaped FOV, and a 128-element array with a rectangular FOV. The acquired data were reconstructed using our proposed method and a delay-and-sum beamforming algorithm for comparison purposes. Point spread function (PSF) measurements from metal wires in a water bath showed that the proposed method was able to reduce the size of the PSF and its spatial integral by about 20 to 38%. Images from a commercially available quality-assurance phantom had greater spatial resolution and contrast when reconstructed with the proposed approach.
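
    The key design choice is to pay the matrix-inversion cost once and amortize it over every subsequent frame. A minimal sketch of that pattern follows, with a random matrix standing in for the real probe-and-geometry acoustic model and toy dimensions throughout.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nx = 32                                    # toy image is nx * nx pixels

    # One-time, compute-intensive step: build the reconstruction matrix.
    # A random matrix stands in for the real probe-geometry acoustic model.
    A = rng.standard_normal((512, nx * nx))    # forward model: pixels -> RF samples
    R = np.linalg.pinv(A)                      # reconstruction matrix, reused forever

    # Per-frame step: a single matrix multiplication per acquired RF frame.
    rf_frame = rng.standard_normal(512)        # one frame of raw RF data (toy)
    image = (R @ rf_frame).reshape(nx, nx)
    print(image.shape)
    ```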

  11. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    Gravity matching navigation is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented which is based on the improved ABC algorithm and uses external speed information. At last, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position.

  12. An Improved Ant Algorithm for Grid Task Scheduling Strategy

    NASA Astrophysics Data System (ADS)

    Wei, Laizhi; Zhang, Xiaobin; Li, Yun; Li, Yujie

    Task scheduling is an important factor that directly influences the performance and efficiency of the system. Grid resources are usually distributed in different geographic locations, belong to different organizations, and have vastly different properties; in order to complete task scheduling efficiently and intelligently, the choice of scheduling strategy is essential. This paper proposes an improved ant algorithm for grid task scheduling by introducing a new type of pheromone and a new node redistribution selection rule. On the one hand, the algorithm can track the performance of resources and tag it. On the other hand, the algorithm handles unsuccessful task scheduling situations, which improves its robustness and the probability of successful task allocation, reduces unnecessary system overhead, and shortens the total time to complete tasks. Data obtained from simulation experiments show that this algorithm resolves the scheduling problem better than the traditional ant algorithm.

  13. Optimization and Improvement of FOA Corner Cube Algorithm

    SciTech Connect

    McClay, W A; Awwal, A S; Burkhart, S C; Candy, J V

    2004-10-01

    Alignment of laser beams based on video images is a crucial task necessary to automate operation of the 192 beams at the National Ignition Facility (NIF). The final optics assembly (FOA) is the optical element that aligns the beam into the target chamber. This work presents an algorithm for determining the position of a corner cube alignment image in the final optics assembly. The improved algorithm was compared to the existing FOA algorithm on 900 noise-simulated images. While the existing FOA algorithm based on correlation with a synthetic template has a radial standard deviation of 1 pixel, the new algorithm based on classical matched filtering (CMF) and polynomial fit to the correlation peak improves the radial standard deviation performance to less than 0.3 pixels. In the new algorithm the templates are designed from real data stored during a year of actual operation.
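
    The sub-pixel step, fitting a polynomial through the correlation peak, can be illustrated in one dimension. In the sketch below, a synthetic Gaussian stands in for the corner cube image and a three-point parabola fit recovers the fractional peak position; the actual FOA templates and 2-D fit are not reproduced.

    ```python
    import numpy as np

    # Synthetic 1-D stand-in: a Gaussian "beam image" and a matched template.
    x = np.arange(200)
    signal = np.exp(-0.5 * ((x - 123.4) / 5.0) ** 2)       # true center 123.4
    template = np.exp(-0.5 * ((np.arange(31) - 15) / 5.0) ** 2)

    corr = np.correlate(signal, template, mode="same")     # matched filtering
    k = int(np.argmax(corr))                               # integer peak

    # Quadratic fit through the three samples around the peak gives the
    # sub-pixel vertex: offset = (c[-1]-c[+1]) / (2*(c[-1]-2*c[0]+c[+1])).
    c_m, c_0, c_p = corr[k - 1], corr[k], corr[k + 1]
    offset = 0.5 * (c_m - c_p) / (c_m - 2 * c_0 + c_p)
    print(round(k + offset, 2))                            # ~123.4
    ```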

  14. Inhaler Reminders Significantly Improve Asthma Patients' Use of Controller Medications

    MedlinePlus

    Published online: July ... Controller medications are effective in reducing the burden and risk of asthma, but many patients do not use them regularly. ...

  15. An Improved DINEOF Algorithm for Filling Missing Values in Spatio-Temporal Sea Surface Temperature Data

    PubMed Central

    Ping, Bo; Su, Fenzhen; Meng, Yunshan

    2016-01-01

    In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determining missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction to convergence for every fixed EOF when determining the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on that optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time. PMID:27195692
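
    The core DINEOF idea, iterating a truncated-SVD reconstruction and overwriting only the missing entries, can be sketched in a few lines. The fixed rank below is a simplification; the VE-DINEOF point is precisely that the best EOF count can change across intermediate reconstructions.

    ```python
    import numpy as np

    def eof_fill(X, rank=3, n_iter=50):
        """DINEOF-style gap filling: iterate truncated-SVD reconstruction.

        Fixed rank for brevity; VE-DINEOF would let the optimal EOF count
        vary across intermediate reconstructions.
        """
        mask = np.isnan(X)
        Y = np.where(mask, np.nanmean(X), X)      # initialize gaps with the mean
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            recon = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            Y[mask] = recon[mask]                 # update only the missing entries
        return Y

    rng = np.random.default_rng(0)
    truth = np.outer(np.sin(np.linspace(0, 3, 40)), np.cos(np.linspace(0, 5, 30)))
    obs = truth + 0.01 * rng.standard_normal(truth.shape)
    obs[rng.random(obs.shape) < 0.2] = np.nan     # knock out 20% of the field
    print(float(np.abs(eof_fill(obs) - truth).mean()))
    ```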

  16. An improved simulated annealing algorithm for standard cell placement

    NASA Technical Reports Server (NTRS)

    Jones, Mark; Banerjee, Prithviraj

    1988-01-01

    Simulated annealing is a general purpose Monte Carlo optimization technique that was applied to the problem of placing standard logic cells in a VLSI chip so that the total interconnection wire length is minimized. An improved standard cell placement algorithm that takes advantage of the performance enhancements that appear to come from parallelizing the uniprocessor simulated annealing algorithm is presented. An outline of this algorithm is given.
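
    A toy serial version of the annealing loop for placement-style wirelength minimization is sketched below: random cell swaps, Metropolis acceptance, geometric cooling. The grid size, net list and cooling schedule are illustrative, and the paper's parallelization is not shown.

    ```python
    import math
    import random

    random.seed(0)
    n = 36                                     # cells on a 6x6 grid
    nets = [(random.randrange(n), random.randrange(n)) for _ in range(80)]
    place = list(range(n))                     # place[cell] = slot index

    def wirelength(p):
        # Manhattan wirelength of two-pin nets on the grid.
        return sum(abs(p[a] % 6 - p[b] % 6) + abs(p[a] // 6 - p[b] // 6)
                   for a, b in nets)

    T, cost = 10.0, wirelength(place)
    for step in range(20000):
        a, b = random.randrange(n), random.randrange(n)
        place[a], place[b] = place[b], place[a]          # propose a cell swap
        new = wirelength(place)
        if new < cost or random.random() < math.exp((cost - new) / T):
            cost = new                                   # accept
        else:
            place[a], place[b] = place[b], place[a]      # reject: swap back
        T *= 0.9997                                      # geometric cooling
    print(cost)
    ```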

  17. An improved sink particle algorithm for SPH simulations

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Walch, S.; Whitworth, A. P.

    2013-04-01

    Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.

  18. Improving CMD Areal Density Analysis: Algorithms and Strategies

    NASA Astrophysics Data System (ADS)

    Wilson, R. E.

    2014-06-01

    Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant savings of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A is reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.

  19. An Improved Neutron Transport Algorithm for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.

    2010-01-01

    Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.

  20. Multi-expert tracking algorithm based on improved compressive tracker

    NASA Astrophysics Data System (ADS)

    Feng, Yachun; Zhang, Hong; Yuan, Ding

    2015-12-01

    Object tracking is a challenging task in computer vision. Most state-of-the-art methods maintain an object model and update it with new examples obtained from incoming frames in order to deal with appearance variation. Updating the object model frame-by-frame without any censorship mechanism inevitably introduces the model drift problem. In this paper, we adopt a multi-expert tracking framework, which is able to correct the effect of bad updates after they happen, such as those caused by severe occlusion. Hence, the proposed framework has exactly the ability that a robust tracking method should possess. The expert ensemble is constructed of a base tracker and its former snapshots. The tracking result is produced by the current tracker, which is selected by means of a simple loss function. We adopt an improved compressive tracker as the base tracker in our work and modify it to fit the multi-expert framework. The proposed multi-expert tracking algorithm significantly improves the robustness of the base tracker, especially in scenes with frequent occlusions and illumination variations. Experiments on challenging video sequences with comparisons to several state-of-the-art trackers demonstrate the effectiveness of our method, and our tracking algorithm runs in real time.

  1. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features while removing noise effectively. In order to reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in the Visual Studio 2010 environment, and experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
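
    A minimal OpenCV rendition of two of the listed ideas, bilateral smoothing in place of the Gaussian and Otsu-derived dual thresholds, is given below. The compensation function and the four-direction gradient templates are the paper's own additions and are not reproduced; the file name is hypothetical.

    ```python
    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical test image

    # Bilateral filter instead of the Gaussian: smooths noise, keeps edges.
    smooth = cv2.bilateralFilter(img, d=9, sigmaColor=50, sigmaSpace=50)

    # Derive Canny's dual thresholds adaptively from Otsu's threshold.
    otsu, _ = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(smooth, 0.5 * otsu, otsu)

    cv2.imwrite("edges.png", edges)
    ```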

  2. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot capture high quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold to divide the crude estimated transmission map into different areas and applies a different guided filter to each area to compute the optimized transmission map. The experimental results demonstrate that the dehazing performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filtering, while its average computation time is around 40% of the original; the detection ability of UAV images in fog and haze weather is improved effectively.
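
    As context, the dark-channel-prior baseline the paper refines can be sketched as follows: compute the dark channel, estimate atmospheric light from its brightest pixels, and derive a crude transmission map. The paper's edge-guided region splitting with per-area guided filtering would then refine the map this sketch returns; the patch size and omega are the usual illustrative choices.

    ```python
    import numpy as np
    from scipy.ndimage import minimum_filter

    def crude_transmission(img, patch=15, omega=0.95):
        """Dark-channel-prior baseline: crude transmission map for dehazing.

        img: H x W x 3 float array in [0, 1]. Region-wise guided filtering
        (the paper's refinement) would be applied to the map returned here.
        """
        dark = minimum_filter(img.min(axis=2), size=patch)     # dark channel
        # Atmospheric light: mean color over the brightest 0.1% dark pixels.
        idx = np.argsort(dark, axis=None)[-max(1, dark.size // 1000):]
        A = img.reshape(-1, 3)[idx].mean(axis=0)
        dark_norm = minimum_filter((img / A).min(axis=2), size=patch)
        return 1.0 - omega * dark_norm

    rng = np.random.default_rng(0)
    hazy = np.clip(rng.random((120, 160, 3)) * 0.5 + 0.4, 0, 1)
    t = crude_transmission(hazy)
    print(t.shape, float(t.min()), float(t.max()))
    ```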

  3. An improved corner detection algorithm for image sequence

    NASA Astrophysics Data System (ADS)

    Yan, Minqi; Zhang, Bianlian; Guo, Min; Tian, Guangyuan; Liu, Feng; Huo, Zeng

    2014-11-01

    A SUSAN corner detection algorithm for a sequence of images is proposed in this paper. A correlation matching algorithm is used for coarse positioning of the detection area; after that, SUSAN corner detection is used to obtain interesting points of the target. The SUSAN corner detection itself has been improved: for situations where points in a small area are often incorrectly detected as corner points, a neighbor direction filter is applied to reduce the rate of mistakes. Experimental results show that the algorithm enhances the anti-noise performance and improves the accuracy of detection.

  4. An improved HMM/SVM dynamic hand gesture recognition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Yao, Yuanyuan; Luo, Yuan

    2015-10-01

    In order to improve the recognition rate and stability of dynamic hand gesture recognition, and to address the low accuracy of the classical HMM algorithm in training the B parameter, this paper proposes an improved HMM/SVM dynamic gesture recognition algorithm. In the calculation of the B parameter of the HMM model, the SVM algorithm, which has strong classification ability, is introduced. A sigmoid function converts the state output of the SVM into a probability, and this probability is treated as the observation probability of the HMM model. This optimizes the B parameter of the HMM model and improves the recognition rate of the system, while also enhancing the accuracy and real-time performance of the human-computer interaction. Experiments show that this algorithm is robust under complex backgrounds and varying illumination. The average recognition rate increased from 86.4% to 97.55%.
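
    The coupling step itself, squashing SVM decision scores through a sigmoid and normalizing them into per-frame observation probabilities for the HMM's B matrix, is shown below with scikit-learn on toy data. The unit sigmoid slope is a placeholder; a fitted mapping such as Platt scaling would be the more principled choice.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 8))                 # toy gesture features
    y = rng.integers(0, 3, 300)                       # 3 hidden states

    svm = LinearSVC(max_iter=5000).fit(X, y)
    scores = svm.decision_function(X[:5])             # raw per-state SVM scores

    # Sigmoid converts scores to (0, 1); row-normalizing yields per-frame
    # observation probabilities usable as the HMM's B matrix entries.
    # The unit slope is a placeholder; Platt scaling would fit it from data.
    p = 1.0 / (1.0 + np.exp(-scores))
    B_rows = p / p.sum(axis=1, keepdims=True)
    print(B_rows.round(3))
    ```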

  5. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GA) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves in simulation of Darwin's 'survival of the fittest'. GA's have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves the representation of the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.

  6. Improved ant colony algorithm for global path planning

    NASA Astrophysics Data System (ADS)

    Li, Pengfei; Wang, Hongbo; Li, Xiaogang

    2017-03-01

    The ant colony algorithm has many advantages over other path planning algorithms, but its shortcomings cannot be ignored: the convergence speed is very low in the initial stage, it easily falls into local optimal solutions, and the solution speed is slow. In order to solve these problems and reduce the search time, this paper first assigns the main parameters α, β, M and ρ of the ant colony algorithm through analysis of a large amount of experimental data. Then an improved ant colony algorithm based on dynamic parameters and a new pheromone updating mechanism is proposed. Simulation results show that the improved ant colony algorithm not only greatly shortens the running time, but also has a greater probability of finding the global optimal solution, and its convergence rate is better than that of the traditional ant colony algorithm. It is very advantageous for solving large-scale optimization problems.

  7. An improved harmony search algorithm with dynamically varying bandwidth

    NASA Astrophysics Data System (ADS)

    Kalivarapu, J.; Jain, S.; Bag, S.

    2016-07-01

    The present work demonstrates a new variant of the harmony search (HS) algorithm in which the bandwidth (BW) is one of the deciding factors for the time complexity and the performance of the algorithm. The BW needs to have both explorative and exploitative characteristics. The idea is to use a large BW to search the full domain and to adjust the BW dynamically closer to the optimal solution. After trying a series of approaches, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.

  8. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, Delong's test) with invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.

  9. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software testing case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: early search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO variants for software testing case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations. PMID:24883391

  10. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software testing case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: early search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO variants for software testing case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations.

  11. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of blurred images, addressing insufficient spatial resolution, excessive noise, and low image quality. Firstly, we introduce the image degradation model to reveal that the essence of the super-resolution reconstruction process is a mathematically ill-posed inverse problem. Secondly, we analyze the causes of blurring in the optical imaging process: light diffraction and small angle scattering are the main causes. We propose an image point spread function estimation method and an improved projection onto convex sets (POCS) algorithm, whose effectiveness is indicated by analyzing the time-domain and frequency-domain changes during the reconstruction process; it is pointed out that the improved POCS algorithm based on prior knowledge can restore and approach the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computer tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method and the super-resolution algorithm, it is obvious that the improved POCS algorithm can restrain noise and enhance image resolution, so the algorithm is shown to be effective. This study and exploration of super-resolution image reconstruction by the improved POCS algorithm proves to be an effective method with important significance and broad application prospects, for example in CT medical image processing and in SRCT analysis of the microstructure evolution mechanism of ceramic sintering.

  12. Improving night sky star image processing algorithm for star sensors.

    PubMed

    Arbabmir, Mohammad Vali; Mohammadi, Seyyed Mohammad; Salahshour, Sadegh; Somayehee, Farshad

    2014-04-01

    In this paper, the night sky star image processing algorithm, consisting of image preprocessing, star pattern recognition, and centroiding steps, is improved. It is shown that the proposed noise reduction approach can preserve more necessary information than other frequently used approaches. It is also shown that the proposed thresholding method unlike commonly used techniques can properly perform image binarization, especially in images with uneven illumination. Moreover, the higher performance rate and lower average centroiding estimation error of near 0.045 for 400 simulated images compared to other algorithms show the high capability of the proposed night sky star image processing algorithm.

  13. Economic load dispatch using improved gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Wang, Jia-rong; Guo, Feng

    2016-03-01

    This paper presents an improved gravitational search algorithm (IGSA) to solve the economic load dispatch (ELD) problem. In order to avoid the local optimum phenomenon, mutation processing is applied to the GSA. The IGSA is applied to solve an ELD problem with valve point effects, involving 13 generators and a load demand of 2520 MW. Calculation results show that the algorithm can deal with ELD problems with high stability.

  14. Visualizing and improving the robustness of phase retrieval algorithms

    DOE PAGES

    Tripathi, Ashish; Leyffer, Sven; Munson, Todd; ...

    2015-06-01

    Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
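
    Fienup's HIO is compact enough to state in full. The 1-D sketch below alternates between imposing the measured Fourier magnitudes and applying the support/nonnegativity feedback rule; the problem size, support and beta are illustrative, and this is the baseline whose convergence basins the paper visualizes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta = 128, 0.9
    support = np.zeros(n, bool)
    support[:40] = True
    truth = np.where(support, rng.random(n), 0.0)      # nonnegative, supported
    mag = np.abs(np.fft.fft(truth))                    # measured magnitudes

    g = rng.random(n)                                  # random start
    for _ in range(2000):
        G = np.fft.fft(g)
        Gp = mag * np.exp(1j * np.angle(G))            # impose Fourier magnitudes
        gp = np.fft.ifft(Gp).real
        ok = support & (gp >= 0)                       # object-domain constraints
        g = np.where(ok, gp, g - beta * gp)            # Fienup's HIO feedback
    print(float(np.abs(g - truth).mean()))             # error (HIO can stagnate)
    ```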

  15. An improved direction finding algorithm based on Toeplitz approximation.

    PubMed

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-07

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants (FOC) matrix, which is equal to the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix and thus the DOA estimation capability degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, making it resemble the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially-white and spatially-colored noise environments.
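
    The Toeplitz approximation step itself reduces to replacing each diagonal of the estimated matrix with its average, as in the small helper below (for a complex Hermitian cumulants matrix one would additionally enforce conjugate symmetry).

    ```python
    import numpy as np

    def toeplitzify(M):
        """Toeplitz approximation: replace each diagonal of M by its mean."""
        n = M.shape[0]
        T = np.empty_like(M)
        for k in range(-n + 1, n):
            d = np.diagonal(M, offset=k).mean()
            idx = np.arange(max(0, -k), min(n, n - k))
            T[idx, idx + k] = d
        return T

    A = np.array([[4.0, 1.2, 0.1],
                  [0.8, 4.1, 1.0],
                  [0.2, 1.1, 3.9]])
    print(toeplitzify(A))
    ```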

  16. Improved Ant Colony Clustering Algorithm and Its Performance Study

    PubMed Central

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  17. An Improved Physarum polycephalum Algorithm for the Shortest Path Problem

    PubMed Central

    Wang, Qing; Adamatzky, Andrew; Chan, Felix T. S.; Mahadevan, Sankaran

    2014-01-01

    Shortest path is among the classical problems of computer science, solved by hundreds of algorithms, silicon computing architectures and novel unconventional computing devices. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients, and several algorithms have been designed based on properties of the slime mould. Many Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches existing Physarum-inspired approaches well, yet outperforms them in the number of iterations executed and total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and the Dijkstra algorithm. PMID:24982960

  18. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering.

  19. An Improved Passive Phase Conjugation Array Communication Algorithm

    NASA Astrophysics Data System (ADS)

    Jia, Ning; Guo, Zhongyuan; Huang, Jianchun; Chen, Geng

    2010-09-01

    The time-varying, dispersive, multipath underwater acoustic channel is a challenging environment for reliable coherent communications. A method proposed recently to cope with intersymbol interference (ISI) is Passive Phase Conjugation (PPC) cascaded with Decision Feedback Equalization (DFE). Based on the theory of signal propagation in a waveguide, PPC can mitigate channel fading and improve the signal-to-noise ratio (SNR) by using a receiver array, while the residual ISI is removed by the DFE. This method can diverge explosively when the channel changes by a large amount, because PPC then estimates the channel inaccurately. An improved algorithm is introduced in this paper that estimates the channel throughout the communication process; as a result, channel changes can be detected in time and the PPC can use more accurate channel estimates. Using simulated and at-sea data, we demonstrate that this algorithm improves the stability of the original algorithm in changing channels.

  20. The Evaluation of an improved AMSU rain rate algorithm

    NASA Astrophysics Data System (ADS)

    Qiu, S.; Pellegrino, P.; Ferraro, R.; Zhao, L.

    2003-12-01

    Improvements have been made to the rain rate retrieval from the Advanced Microwave Sounding Units (AMSU). The new features of the improved rain rate algorithm include two-stream corrections to the satellite brightness temperatures, cloud and rain type classification, and removal of the two ad hoc thresholds in the ice water path (IWP) and effective diameter (De) retrieval where the scattering signals are very small. A monthly mean comparison has been made between the improved algorithm and the current NOAA operational algorithm. In addition, comparisons with monthly mean rainfall derived from SSM/I, TRMM, and GPCP are conducted in the evaluation. These comparisons indicate that the new algorithm greatly reduces the previous positive bias over ocean, while increasing rainfall intensity and picking up more light rain over land. Pacific atoll rain gauges are also used to demonstrate the greatly improved rain rate retrieval over the tropical Pacific Ocean. Results of a wintertime case study over California from February 2003 further confirm the enhanced ability of the new algorithm to identify both light and heavy rain over land.

  1. An improved label propagation algorithm using average node energy in complex networks

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Zhao, Dandan; Li, Lin; Lu, Jianfeng; Han, Jianmin; Wu, Songyang

    2016-10-01

    Detecting overlapping community structure can give significant insight into the structural and functional properties of complex networks. In this Letter, we propose an improved label propagation algorithm (LPA) to uncover overlapping community structure. After mapping nodes to random variables, the algorithm calculates the variance of each node and the proposed average node energy. Nodes whose variances are less than a tunable threshold are regarded as bridge nodes, and changing the given threshold can uncover latent bridge nodes. Simulation results on real-world and artificial networks show that the improved algorithm is efficient in revealing overlapping community structures.
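
    The bare asynchronous LPA core that such methods build on is a few lines: every node repeatedly adopts the majority label among its neighbors until labels stabilize. The sketch below shows that core on a toy two-clique graph; the paper's variance and average-node-energy machinery for detecting bridge nodes sits on top of this loop and is not reproduced.

    ```python
    import random
    from collections import Counter

    def label_propagation(adj, seed=0, max_sweeps=100):
        """Bare LPA: nodes repeatedly adopt the majority label of neighbors."""
        rng = random.Random(seed)
        labels = {v: v for v in adj}                  # unique initial labels
        nodes = list(adj)
        for _ in range(max_sweeps):
            rng.shuffle(nodes)
            changed = False
            for v in nodes:
                counts = Counter(labels[u] for u in adj[v])
                best = max(counts.values())
                choice = rng.choice([l for l, c in counts.items() if c == best])
                if choice != labels[v]:
                    labels[v] = choice
                    changed = True
            if not changed:                           # labels are stable
                break
        return labels

    # Two 4-cliques joined by one bridge edge.
    adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
           4: [5, 6, 7, 3], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
    print(label_propagation(adj))
    ```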

  2. An Improved Force Feedback Control Algorithm for Active Tendons

    PubMed Central

    Guo, Tieneng; Liu, Zhifeng; Cai, Ligang

    2012-01-01

    An active tendon, consisting of a displacement actuator and a co-located force sensor, has been adopted by many studies to suppress the vibration of large space flexible structures. The damping, provided by the force feedback control algorithm in these studies, is small and can increase, especially for tendons with low axial stiffness. This study introduces an improved force feedback algorithm, which is based on the idea of velocity feedback. The algorithm provides a large damping ratio for space flexible structures and does not require a structure model. The effectiveness of the algorithm is demonstrated on a structure similar to JPL-MPI. The results show that large damping can be achieved for the vibration control of large space structures. PMID:23112660

  3. An improved back projection algorithm of ultrasound tomography

    SciTech Connect

    Xiaozhen, Chen; Mingxu, Su; Xiaoshu, Cai

    2014-04-11

    A binary logic back projection algorithm is improved in this work for the development of a fast ultrasound tomography system with a better image reconstruction effect. The new algorithm is characterized by an extra logical value '2' and dual-threshold processing of the collected raw data. To compare with the original algorithm, a numerical simulation was first conducted and verified against COMSOL simulations, and then an ultrasonic tomography system was established to perform experiments with one, two and three cylindrical objects. The object images are reconstructed through inversion of the signal matrix acquired by the transducer array after preconditioning, and the corresponding spatial imaging errors clearly indicate that the improved back projection method achieves a better inversion effect.

  4. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2007-01-01

    Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and water vapor amount are nearly linear in their relationship. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths available from the Cloud and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.

  5. An Improved Wind Speed Retrieval Algorithm For The CYGNSS Mission

    NASA Astrophysics Data System (ADS)

    Ruf, C. S.; Clarizia, M. P.

    2015-12-01

    The NASA spaceborne Cyclone Global Navigation Satellite System (CYGNSS) mission is a constellation of 8 microsatellites focused on tropical cyclone (TC) inner core process studies. CYGNSS will be launched in October 2016, and will use GPS-Reflectometry (GPS-R) to measure ocean surface wind speed in all precipitating conditions, and with sufficient frequency to resolve genesis and rapid intensification. Here we present a modified and improved version of the current baseline Level 2 (L2) wind speed retrieval algorithm designed for CYGNSS. An overview of the current approach is first presented, which makes use of two different observables computed from 1-second Level 1b (L1b) delay-Doppler Maps (DDMs) of radar cross section. The first observable, the Delay-Doppler Map Average (DDMA), is the averaged radar cross section over a delay-Doppler window around the DDM peak (i.e. the specular reflection point coordinate in delay and Doppler). The second, the Leading Edge Slope (LES), is the leading edge of the Integrated Delay Waveform (IDW), obtained by integrating the DDM along the Doppler dimension. The observables are calculated over a limited range of time delays and Doppler frequencies to comply with the baseline spatial resolution requirement for the retrieved winds, which in the case of CYGNSS is 25 km. In the current approach, the relationship between the observable value and the surface wind is described by an empirical Geophysical Model Function (GMF) characterized by a very steep slope in the high wind regime for both DDMA and LES observables, causing large retrieval errors at high winds. A simple mathematical modification of these observables is proposed, which linearizes the relationship between ocean surface roughness and the observables. This significantly reduces the non-linearity in the GMF relating the observables to the wind speed, and reduces the root-mean-square error between true and retrieved winds, particularly in the high wind regime.

  6. Implementation of an institution-wide acute stroke algorithm: Improving stroke quality metrics

    PubMed Central

    Zuckerman, Scott L.; Magarik, Jordan A.; Espaillat, Kiersten B.; Kumar, Nishant Ganesh; Bhatia, Ritwik; Dewan, Michael C.; Morone, Peter J.; Hermann, Lisa D.; O’Duffy, Anne E.; Riebau, Derek A.; Kirshner, Howard S.; Mocco, J.

    2016-01-01

    Background: In May 2012, an updated stroke algorithm was implemented at Vanderbilt University Medical Center. The current study objectives were to: (1) describe the process of implementing the new stroke algorithm and (2) compare pre- and post-algorithm quality improvement (QI) metrics, specifically door to computed tomography time (DTCT), door to neurology time (DTN), and door to tPA administration time (DTT). Methods: Our institutional stroke algorithm underwent extensive revision, with a focus on removing variability, streamlining care, and reducing time delays. The updated stroke algorithm was implemented in May 2012. Three primary stroke QI metrics were evaluated over four separate 3-month periods, one pre- and three post-algorithm. Results: The following data points improved after algorithm implementation: average DTCT decreased from 39.9 to 12.8 min (P < 0.001); average DTN decreased from 34.1 to 8.2 min (P ≤ 0.001); and average DTT decreased from 62.5 to 43.5 min (P = 0.17). Conclusion: A new stroke protocol that prioritized neurointervention at our institution resulted in significant reductions in DTCT and DTN, with a nonsignificant improvement in DTT. PMID:28144480

  7. Web multimedia information retrieval using improved Bayesian algorithm.

    PubMed

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

    The main thrust of this paper is the application of a novel data mining approach to logs of user feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and help eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for data mining. Experiments proved that the proposed algorithm is efficient.

  8. Warfarin improves neuropathy in monoclonal gammopathy of undetermined significance.

    PubMed

    Henry Gomez, Teny; Holkova, Beata; Noreika, Danielle; Del Fabbro, Egidio

    2016-06-17

    We report a case of a 60-year-old man who was referred to a palliative care clinic with monoclonal gammopathy of undetermined significance (MGUS)-associated neuropathy, responding to a therapeutic trial of warfarin. Electromyography showed distal symmetric sensory axonal neuropathy. The patient reported having had improvement of his neuropathic symptoms while taking warfarin postoperatively for thromboprophylaxis 1 year prior, and recurrence of his symptoms after the warfarin was discontinued. The patient was rechallenged with a trial of warfarin, targeting an international normalised ratio of 1.5-2.0. His pain scores decreased from 5/10 to 3/10 at 1 month and symptom improvement was maintained through 24 months of follow-up. Warfarin had a remarkable impact on our patient's symptoms and quality of life. The mechanisms mediating the symptomatic benefit with warfarin are unclear; however, a placebo effect is unlikely. Further studies may help guide the use of warfarin for MGUS-associated neuropathy.

  9. Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm

    NASA Technical Reports Server (NTRS)

    Riggs, George; Hall, Dorothy K.

    2012-01-01

    The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).

  10. An improved particle swarm optimization algorithm for reliability problems.

    PubMed

    Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven

    2011-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed to solve reliability problems in this paper. The IPSO designs two position updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which not only prevents the IPSO from being trapped in a local optimum, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than four other particle swarm optimization algorithms on reliability problems, and that the solutions obtained by the IPSO are better than the best-known solutions previously reported in the recent literature.
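
    A minimal sketch may make the two-phase update concrete. It follows the description above (own-best guidance early, global-best guidance late, mutation after the move); the 80% guidance probability, 5% mutation rate, and halfway phase switch are illustrative assumptions, not the paper's settings.

        import random

        def ipso_minimize(f, dim, bounds, n_particles=30, iters=200, switch=0.5):
            """Sketch of the two-phase IPSO position update described above."""
            lo, hi = bounds
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            pbest_val = [f(p) for p in pos]
            g = min(range(n_particles), key=lambda i: pbest_val[i])
            gbest, gbest_val = pbest[g][:], pbest_val[g]
            for t in range(iters):
                early = t < switch * iters          # assumed phase boundary
                for i in range(n_particles):
                    if early:   # mostly follow the particle's own best experience
                        guide = pbest[i] if random.random() < 0.8 else gbest
                    else:       # mostly follow the most successful particle
                        guide = gbest if random.random() < 0.8 else pbest[i]
                    for d in range(dim):
                        pos[i][d] += random.random() * (guide[d] - pos[i][d])
                        pos[i][d] = min(hi, max(lo, pos[i][d]))
                    if random.random() < 0.05:      # mutation after position updating
                        pos[i][random.randrange(dim)] = random.uniform(lo, hi)
                    val = f(pos[i])
                    if val < pbest_val[i]:
                        pbest[i], pbest_val[i] = pos[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = pos[i][:], val
            return gbest, gbest_val

        # e.g. ipso_minimize(lambda x: sum(v * v for v in x), dim=5, bounds=(-10.0, 10.0))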

  11. Improving the MODIS Global Snow-Mapping Algorithm

    NASA Technical Reports Server (NTRS)

    Klein, Andrew G.; Hall, Dorothy K.; Riggs, George A.

    1997-01-01

    An algorithm (Snowmap) is under development to produce global snow maps at 500 meter resolution on a daily basis using data from the NASA MODIS instrument. MODIS, the Moderate Resolution Imaging Spectroradiometer, will be launched as part of the first Earth Observing System (EOS) platform in 1998. Snowmap is a fully automated, computationally frugal algorithm that will be ready to implement at launch. Forests represent a major limitation to the global mapping of snow cover, as a forest canopy both obscures and shadows the snow underneath. Landsat Thematic Mapper (TM) and MODIS Airborne Simulator (MAS) data are used to investigate the changes in reflectance that occur as a forest stand becomes snow covered and to propose changes to the Snowmap algorithm that will improve snow classification accuracy in forested areas.

  12. An improved FCM medical image segmentation algorithm based on MMTD.

    PubMed

    Zhou, Ningning; Yang, Tingting; Zhang, Shaobai

    2014-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the most popular clustering algorithms for medical image segmentation, but FCM is highly vulnerable to noise because it does not consider spatial information during segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between each pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than standard FCM, with more certainty and less fuzziness, making it practicable and effective for medical image segmentation.
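
    For context, the standard FCM iteration that the MMTD variant modifies alternates membership and center updates; the proposed method replaces the membership definition with a medium similarity measure over each pixel's neighborhood, which is not reproduced here. A standard baseline sketch:

        import numpy as np

        def fcm(X, k, m=2.0, iters=100, tol=1e-5, seed=None):
            """Standard fuzzy c-means on an (n_samples, n_features) array X.
            Returns cluster centers and the (k, n_samples) membership matrix."""
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            u = rng.random((k, n))
            u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
            for _ in range(iters):
                um = u ** m
                centers = (um @ X) / um.sum(axis=1, keepdims=True)
                d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
                new_u = 1.0 / (d ** (2.0 / (m - 1.0)))  # inverse-distance memberships
                new_u /= new_u.sum(axis=0)
                if np.max(np.abs(new_u - u)) < tol:
                    u = new_u
                    break
                u = new_u
            return centers, u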

  13. Improved Gravitation Field Algorithm and Its Application in Hierarchical Clustering

    PubMed Central

    Zheng, Ming; Sun, Ying; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Background: Gravitation field algorithm (GFA) is a new optimization algorithm based on an imitation of natural phenomena. GFA performs well both in searching for a global minimum and in locating multiple minima in computational biology, but it needs to be improved for greater efficiency and modified to apply to some discrete data problems in systems biology. Method: An improved GFA, called IGFA, is proposed in this paper. Two parts of GFA were improved: the rule of random division, a more reasonable strategy that shortens running time, and the rotation factor, which improves accuracy. To apply IGFA to hierarchical clustering, the initialization step and the movement operator were also modified. Results: Two kinds of experiments were used to test IGFA, and IGFA was then applied to hierarchical clustering. The global-minimum experiment compared IGFA, GFA, GA (genetic algorithm), and SA (simulated annealing); the multi-minima experiment compared IGFA and GFA. The results demonstrate the efficiency of IGFA, which outperforms GFA in both accuracy and running time. For hierarchical clustering, IGFA was used to optimize the smallest distance of gene pairs, and the results were compared with GA, SA, single-linkage clustering, and UPGMA, again demonstrating the efficiency of IGFA. PMID:23173043

  14. Ceramic Composite Intermediate Temperature Stress-Rupture Properties Improved Significantly

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.; Hurst, Janet B.

    2002-01-01

    Silicon carbide (SiC) composites are considered to be potential materials for future aircraft engine parts such as combustor liners. It is envisioned that on the hot side (inner surface) of the combustor liner, composites will have to withstand temperatures in excess of 1200 C for thousands of hours in oxidizing environments. This is a severe condition; however, an equally severe, if not more detrimental, condition exists on the cold side (outer surface) of the combustor liner. Here, the temperatures are expected to be on the order of 800 to 1000 C under high tensile stress because of thermal gradients and attachment of the combustor liner to the engine frame (the hot side will be under compressive stress, a less severe stress-state for ceramics). Since these composites are not oxides, they oxidize. The worst form of oxidation for strength reduction occurs at these intermediate temperatures, where the boron nitride (BN) interphase oxidizes first, which causes the formation of a glass layer that strongly bonds the fibers to the matrix. When the fibers strongly bond to the matrix or to one another, the composite loses toughness and strength and becomes brittle. To increase the intermediate temperature stress-rupture properties, researchers must modify the BN interphase. With the support of the Ultra-Efficient Engine Technology (UEET) Program, significant improvements were made as state-of-the-art SiC/SiC composites were developed during the Enabling Propulsion Materials (EPM) program. Three approaches were found to improve the intermediate-temperature stress-rupture properties: fiber-spreading, high-temperature silicon- (Si) doped boron nitride (BN), and outside-debonding BN.

  15. An Improved Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.

    2000-01-01

    A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used to determine the mean values associated with the rescattering terms of a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target, where diffusion from the front surface is important.
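
    For orientation, a schematic form of the equation being discretized may help. In the straight-ahead approximation, neutrons propagate along a single spatial coordinate, and the transport equation balances streaming and attenuation against down-scattering and the evaporation source (notation assumed here for illustration, not taken from the paper):

        \left[ \frac{\partial}{\partial x} + \sigma(E) \right] \phi(x, E)
            = \int_{E}^{\infty} \sigma(E' \to E)\, \phi(x, E')\, dE' + S(x, E)

    The multigroup analysis replaces the energy integral by a finite sum over energy groups, with the mean value theorem supplying representative values for the rescattering terms within each group.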

  16. Improved Contact Algorithms for Implicit FE Simulation of Sheet Forming

    NASA Astrophysics Data System (ADS)

    Zhuang, S.; Lee, M. G.; Keum, Y. T.; Wagoner, R. H.

    2007-05-01

    Implicit finite element simulations of sheet forming processes do not always converge, particularly for complex tool geometries and rapidly changing contact. The SHEET-3 program exhibits remarkable stability and strong convergence by use of its special N-CFS algorithm and a sheet normal defined by the mesh, but these features alone do not always guarantee convergence and accuracy. An improved contact capability within the N-CFS algorithm is formulated taking into account sheet thickness within the framework of shell elements. Two imaginary surfaces offset from the mid-plane of shell elements are implemented along the mesh normal direction. An efficient contact searching algorithm based on the mesh-patch tool description is formulated along the mesh normal direction. The contact search includes a general global searching procedure and a new local searching procedure enforcing the contact condition along the mesh normal direction. The processes of unconstrained cylindrical bending and drawing through a drawbead are simulated to verify the accuracy and convergence of the improved contact algorithm.

  17. An Adaptive Hybrid Genetic Algorithm for Improved Groundwater Remediation Design

    NASA Astrophysics Data System (ADS)

    Espinoza, F. P.; Minsker, B. S.; Goldberg, D. E.

    2001-12-01

    Identifying optimal designs for a groundwater remediation system is computationally intensive, especially for complex, nonlinear problems such as enhanced in situ bioremediation technology. To improve performance, we apply a hybrid genetic algorithm (HGA), which is a two-step solution method: a genetic algorithm (GA) for global search using the entire population and then a local search (LS) to improve search speed for only a few individuals in the population. We implement two types of HGAs: a non-adaptive HGA (NAHGA), whose operations are invariant throughout the run, and a self-adaptive HGA (SAHGA), whose operations adapt to the performance of the algorithm. The best settings of the two HGAs for optimal performance are then investigated for a groundwater remediation problem. The settings include the frequency of LS with respect to the normal GA evaluation, probability of individual selection for LS, evolution criterion for LS (Lamarckian or Baldwinian), and number of local search iterations. A comparison of the algorithms' performance under different settings will be presented.
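
    The GA-then-LS structure, including the Lamarckian/Baldwinian distinction, is easy to sketch. Below, after a normal GA generation, a few randomly chosen individuals undergo local search; under Lamarckian evolution the improved design is written back into the population, under Baldwinian evolution only its fitness is credited. The function names and the 5% selection probability are illustrative assumptions, not the paper's settings.

        import random

        def apply_local_search(population, fitness, local_search, p_ls=0.05,
                               lamarckian=True):
            """Hybrid step: run local search on a random subset after a GA
            generation. `local_search` maps a design to (better_design,
            better_fitness); `fitness` is a list parallel to `population`."""
            for i in range(len(population)):
                if random.random() < p_ls:
                    improved, improved_fit = local_search(population[i])
                    if improved_fit > fitness[i]:
                        fitness[i] = improved_fit       # Baldwinian: credit fitness only
                        if lamarckian:
                            population[i] = improved    # Lamarckian: write genes back
            return population, fitness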

  18. Improved K-means clustering algorithm for exploring local protein sequence motifs representing common structural property.

    PubMed

    Zhong, Wei; Altun, Gulsah; Harrison, Robert; Tai, Phang C; Pan, Yi

    2005-09-01

    Information about local protein sequence motifs is very important to the analysis of biologically significant conserved regions of protein sequences. These conserved regions can potentially determine the diverse conformation and activities of proteins. In this work, recurring sequence motifs of proteins are explored with an improved K-means clustering algorithm on a new dataset. The structural similarity of these recurring sequence clusters to produce sequence motifs is studied in order to evaluate the relationship between sequence motifs and their structures. To the best of our knowledge, the dataset used by our research is the most updated dataset among similar studies for sequence motifs. A new greedy initialization method for the K-means algorithm is proposed to improve traditional K-means clustering techniques. The new initialization method tries to choose suitable initial points, which are well separated and have the potential to form high-quality clusters. Our experiments indicate that the improved K-means algorithm satisfactorily increases the percentage of sequence segments belonging to clusters with high structural similarity. Careful comparison of sequence motifs obtained by the improved and traditional algorithms also suggests that the improved K-means clustering algorithm may discover some relatively weak and subtle sequence motifs, which are undetectable by the traditional K-means algorithms. Many biochemical tests reported in the literature show that these sequence motifs are biologically meaningful. Experimental results also indicate that the improved K-means algorithm generates more detailed sequence motifs representing common structures than previous research. Furthermore, these motifs are universally conserved sequence patterns across protein families, overcoming some weak points of other popular sequence motifs. The satisfactory result of the experiment suggests that this new K-means algorithm may be applied to other areas of bioinformatics
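
    The greedy initialization idea lends itself to a short sketch. The version below starts from the point nearest the data mean and then repeatedly adds the point farthest from all chosen centers, one plausible reading of "well separated" initial points; it is not the paper's exact procedure.

        import numpy as np

        def greedy_init(X, k):
            """Pick k well-separated initial centers from (n, d) array X:
            begin with the point closest to the data mean, then repeatedly
            add the point whose minimum squared distance to the chosen
            centers is largest."""
            centers = [X[np.argmin(((X - X.mean(axis=0)) ** 2).sum(axis=1))]]
            for _ in range(k - 1):
                dist = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
                centers.append(X[np.argmax(dist)])
            return np.array(centers)

    The resulting array can be handed directly to scikit-learn, e.g. KMeans(n_clusters=k, init=greedy_init(X, k), n_init=1).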

  19. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony database and the probabilities of their candidate values so that the iterations converge to an optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was also superior to that of the original fuzzy clustering method. PMID:27403428

  20. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony database and the probabilities of their candidate values so that the iterations converge to an optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was also superior to that of the original fuzzy clustering method.
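
    A minimal harmony search loop makes the memory-revision process described above concrete. hmcr (memory-considering rate), par (pitch-adjusting rate), and bw (bandwidth) carry conventional default values, and the rough-set seeding step of the paper is not reproduced; everything here is an illustrative sketch.

        import random

        def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                           iters=5000):
            """Basic HS: compose each new harmony from stored values (prob.
            hmcr), optionally pitch-adjusted (prob. par), or random draws;
            replace the worst stored harmony whenever the new one is better."""
            lo, hi = bounds
            hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
            vals = [f(h) for h in hm]
            for _ in range(iters):
                new = []
                for d in range(dim):
                    if random.random() < hmcr:          # take a value from memory
                        x = random.choice(hm)[d]
                        if random.random() < par:       # small pitch adjustment
                            x = min(hi, max(lo, x + random.uniform(-1, 1) * bw * (hi - lo)))
                    else:                               # fresh random value
                        x = random.uniform(lo, hi)
                    new.append(x)
                v = f(new)
                worst = max(range(hms), key=lambda i: vals[i])
                if v < vals[worst]:
                    hm[worst], vals[worst] = new, v
            best = min(range(hms), key=lambda i: vals[i])
            return hm[best], vals[best]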

  1. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a central part of network security due to the huge number of attacks affecting computers, a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning Repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different. PMID:26221625

  2. Improved algorithm of ray tracing in ICF cryogenic targets

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yang, Yongying; Ling, Tong; Jiang, Jiabin

    2016-10-01

    High-precision ray tracing inside inertial confinement fusion (ICF) cryogenic targets plays an important role in the reconstruction of the three-dimensional density distribution by the algebraic reconstruction technique (ART). The traditional Runge-Kutta method, restricted by the precision of the grid division and the ray-tracing step size, cannot calculate accurately where the refractive index jumps. In this paper, we propose an improved ray-tracing algorithm based on the Runge-Kutta method and Snell's law of refraction to achieve high tracing precision. On refractive-index boundaries, we apply Snell's law of refraction and a contact-point search algorithm to ensure the accuracy of the simulation; inside the cryogenic target, the Runge-Kutta method is combined with a self-adaptive step algorithm for the computation. The original refractive index data used to mesh the target can be obtained by experimental measurement or from an a priori refractive index distribution function. A finite difference method is performed to calculate the refractive index gradient at the mesh nodes, and distance-weighted average interpolation is utilized to obtain the refractive index and its gradient at each point in space. In the simulation, we take an ideal ICF target, a Luneburg lens, and a graded-index rod as models to calculate the spot diagram and wavefront map. Comparison of the simulation results with Zemax shows that the improved ray-tracing algorithm based on the fourth-order Runge-Kutta method and Snell's law of refraction exhibits high accuracy: the relative error of the spot diagram is 0.2%, and the peak-to-valley (PV) and root-mean-square (RMS) errors of the wavefront map are less than λ/35 and λ/100, respectively.
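
    The boundary step the improved tracer adds is just the vector form of Snell's law. The sketch below handles one refraction event at an index discontinuity; the in-medium Runge-Kutta stepping and the contact-point search are omitted, and the function name is illustrative.

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract a unit ray direction d at a surface with unit normal n
            (pointing toward the incident medium), going from index n1 to n2.
            Returns the refracted unit direction, or None for total internal
            reflection."""
            r = n1 / n2
            cos_i = -np.dot(n, d)                   # cosine of incidence angle
            sin2_t = r * r * (1.0 - cos_i * cos_i)  # squared sine of refraction angle
            if sin2_t > 1.0:
                return None                         # total internal reflection
            cos_t = np.sqrt(1.0 - sin2_t)
            return r * d + (r * cos_i - cos_t) * n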

  3. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), to be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results, using qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  4. Efficient Improvement of Silage Additives by Using Genetic Algorithms

    PubMed Central

    Davies, Zoe S.; Gilbert, Richard J.; Merry, Roger J.; Kell, Douglas B.; Theodorou, Michael K.; Griffith, Gareth W.

    2000-01-01

    The enormous variety of substances which may be added to forage in order to manipulate and improve the ensilage process presents an empirical, combinatorial optimization problem of great complexity. To investigate the utility of genetic algorithms for designing effective silage additive combinations, a series of small-scale proof of principle silage experiments were performed with fresh ryegrass. Having established that significant biochemical changes occur over an ensilage period as short as 2 days, we performed a series of experiments in which we used 50 silage additive combinations (prepared by using eight bacterial and other additives, each of which was added at six different levels, including zero [i.e., no additive]). The decrease in pH, the increase in lactate concentration, and the free amino acid concentration were measured after 2 days and used to calculate a “fitness” value that indicated the quality of the silage (compared to a control silage made without additives). This analysis also included a “cost” element to account for different total additive levels. In the initial experiment additive levels were selected randomly, but subsequently a genetic algorithm program was used to suggest new additive combinations based on the fitness values determined in the preceding experiments. The result was very efficient selection for silages in which large decreases in pH and high levels of lactate occurred along with low levels of free amino acids. During the series of five experiments, each of which comprised 50 treatments, there was a steady increase in the amount of lactate that accumulated; the best treatment combination was that used in the last experiment, which produced 4.6 times more lactate than the untreated silage. The additive combinations that were found to yield the highest fitness values in the final (fifth) experiment were assessed to determine a range of biochemical and microbiological quality parameters during full-term silage

  5. Efficient improvement of silage additives by using genetic algorithms.

    PubMed

    Davies, Z S; Gilbert, R J; Merry, R J; Kell, D B; Theodorou, M K; Griffith, G W

    2000-04-01

    The enormous variety of substances which may be added to forage in order to manipulate and improve the ensilage process presents an empirical, combinatorial optimization problem of great complexity. To investigate the utility of genetic algorithms for designing effective silage additive combinations, a series of small-scale proof of principle silage experiments were performed with fresh ryegrass. Having established that significant biochemical changes occur over an ensilage period as short as 2 days, we performed a series of experiments in which we used 50 silage additive combinations (prepared by using eight bacterial and other additives, each of which was added at six different levels, including zero [i.e., no additive]). The decrease in pH, the increase in lactate concentration, and the free amino acid concentration were measured after 2 days and used to calculate a "fitness" value that indicated the quality of the silage (compared to a control silage made without additives). This analysis also included a "cost" element to account for different total additive levels. In the initial experiment additive levels were selected randomly, but subsequently a genetic algorithm program was used to suggest new additive combinations based on the fitness values determined in the preceding experiments. The result was very efficient selection for silages in which large decreases in pH and high levels of lactate occurred along with low levels of free amino acids. During the series of five experiments, each of which comprised 50 treatments, there was a steady increase in the amount of lactate that accumulated; the best treatment combination was that used in the last experiment, which produced 4.6 times more lactate than the untreated silage. The additive combinations that were found to yield the highest fitness values in the final (fifth) experiment were assessed to determine a range of biochemical and microbiological quality parameters during full-term silage fermentation. We
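
    The experiment-in-the-loop setup above maps naturally onto a one-generation GA step. In the sketch below, a recipe is a tuple of eight level indices (0 = no additive .. 5 = highest level) and fitness values come from the wet-lab measurements, here simply a dict; tournament selection, one-point crossover, and the mutation rate are conventional choices, not necessarily those of the original program.

        import random

        N_ADDITIVES, N_LEVELS = 8, 6    # as in the experiments above

        def next_generation(population, fitness, elite=2, p_mut=0.1):
            """Propose the next batch of additive recipes from measured
            fitness values (dict: recipe tuple -> fitness)."""
            ranked = sorted(population, key=lambda r: fitness[r], reverse=True)
            new_pop = ranked[:elite]                    # keep the best recipes
            def pick():                                 # binary tournament
                a, b = random.sample(population, 2)
                return a if fitness[a] > fitness[b] else b
            while len(new_pop) < len(population):
                mum, dad = pick(), pick()
                cut = random.randrange(1, N_ADDITIVES)  # one-point crossover
                child = list(mum[:cut] + dad[cut:])
                for i in range(N_ADDITIVES):            # per-gene mutation
                    if random.random() < p_mut:
                        child[i] = random.randrange(N_LEVELS)
                new_pop.append(tuple(child))
            return new_pop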

  6. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to a large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research presents a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
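
    The IG filter criterion is easy to state in code: the gain of a discretized predictor is the entropy of the permafrost labels minus the weighted entropy within each feature bin. A textbook sketch, with the discretization assumed done beforehand:

        import math
        from collections import Counter

        def entropy(labels):
            """Shannon entropy (bits) of a label sequence."""
            n = len(labels)
            return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

        def information_gain(feature, target):
            """IG of a discretized predictor w.r.t. presence/absence labels:
            H(target) minus the weighted entropy of target within each bin."""
            gain, n = entropy(target), len(target)
            for v in set(feature):
                sub = [t for f, t in zip(feature, target) if f == v]
                gain -= len(sub) / n * entropy(sub)
            return gain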

  7. Improved delay-leaping simulation algorithm for biochemical reaction systems with delays

    NASA Astrophysics Data System (ADS)

    Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei

    2012-04-01

    In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue, so it is important to control the queue size to improve simulation efficiency. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue as well as shortening storage and access time, thereby accelerating the simulation. Numerical simulation of two examples indicates that this method not only achieves significantly higher efficiency than existing methods, but also can be widely applied to biochemical reaction systems with delays.
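
    For context, a minimal delayed stochastic simulation with a wait queue, the data structure whose update cost the improved algorithm targets, looks as follows. The reaction-tuple API and the queue policy are illustrative assumptions, and the leaping acceleration itself is not reproduced.

        import heapq, math, random

        def delay_ssa(x, reactions, t_end):
            """x: list of species counts; reactions: list of
            (propensity_fn, state_change, delay) tuples (assumed API).
            delay == 0 fires immediately; otherwise the state change
            enters the wait queue and is applied `delay` time units later."""
            t, queue = 0.0, []                      # queue: (finish_time, change)
            while t < t_end:
                props = [r[0](x) for r in reactions]
                a0 = sum(props)
                tau = math.inf if a0 == 0 else -math.log(random.random()) / a0
                if queue and queue[0][0] <= t + tau:
                    t, change = heapq.heappop(queue)    # delayed completion fires first
                    x = [xi + ci for xi, ci in zip(x, change)]
                    continue
                if a0 == 0:
                    break                           # nothing pending, nothing to fire
                t += tau
                k = random.choices(range(len(reactions)), weights=props)[0]
                _, change, delay = reactions[k]
                if delay > 0:
                    heapq.heappush(queue, (t + delay, change))
                else:
                    x = [xi + ci for xi, ci in zip(x, change)]
            return x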

  8. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment through the use of video images to analyze missile placement is a new study area. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video images. Its restrictions are mainly reflected in two aspects: the first is to restrict the extraction area of feature points; the second is to restrict the number of feature points. The process of missile placement analysis based on video images was designed, and a video splicing process and random sample consensus purification were implemented. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.
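
    The two restrictions translate directly into standard feature-matching code: a mask confines keypoint extraction to a region of interest, and a cap limits the number of keypoints. ORB stands in for SURF below (SURF ships only in the non-free opencv-contrib build); the ROI format and feature budget are illustrative assumptions.

        import cv2
        import numpy as np

        def match_restricted(img1, img2, roi, n_features=500):
            """Match features between two frames, extracting keypoints in
            img1 only inside roi = (x, y, w, h) and capping their number."""
            x, y, w, h = roi
            mask = np.zeros(img1.shape[:2], dtype=np.uint8)
            mask[y:y + h, x:x + w] = 255                # restriction 1: ROI mask
            orb = cv2.ORB_create(nfeatures=n_features)  # restriction 2: keypoint cap
            kp1, des1 = orb.detectAndCompute(img1, mask)
            kp2, des2 = orb.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
            return kp1, kp2, matches

    The matched pairs would then feed a RANSAC purification step, e.g. cv2.findHomography(..., cv2.RANSAC), as in the splicing pipeline described above.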

  9. Improvement of Passive Microwave Rainfall Retrieval Algorithm over Mountainous Terrain

    NASA Astrophysics Data System (ADS)

    Shige, S.; Yamamoto, M.

    2015-12-01

    Microwave radiometer (MWR) algorithms underestimate heavy rainfall associated with shallow orographic rainfall systems owing to weak ice-scattering signatures. Underestimation by the Global Satellite Mapping of Precipitation (GSMaP) MWR algorithm has been mitigated by an orographic/nonorographic rainfall classification scheme (Shige et al. 2013, 2015; Taniguchi et al. 2013; Yamamoto and Shige 2015). The classification scheme is based on orographically forced upward vertical motion and the convergence of surface moisture flux estimated from ancillary data. Lookup tables derived from orographic precipitation profiles are used to estimate rainfall for orographic rainfall pixels, whereas those derived from the original precipitation profiles are used for nonorographic rainfall pixels. The orographic/nonorographic rainfall classification scheme is used in the current version of GSMaP products, which are available in near real time (about 4 h after observation) via the Internet (http://sharaku.eorc.jaxa.jp/GSMaP/index.htm). The current version of the GSMaP MWR algorithm with the classification scheme improves rainfall estimation over the entire tropical region, but there is still room for improvement. In this talk, further improvement of orographic rainfall retrievals will be shown.

  10. Scheduling Algorithm for Improving Lift (SAIL): Phase 1, documentation

    SciTech Connect

    Hawthorne, J.E.; McLaren, R.A.

    1988-07-01

    The Military Sealift Command, a component of the United States Transportation Command, is responsible for the sealift of military personnel and material during a crisis. Conceptual plans for these complex moves, called ''deliberate plans,'' are continually being prepared. A computer-based scheduling system, the Sealift Strategic Analysis Subsystem (SEASTRAT), is under development to assist in the production of these plans. The ship scheduling portion of this system, the Scheduling Algorithm for Improving Lift (SAIL), combines linear optimization and heuristic methods to determine ship routes and cargo loadings that honor a variety of complex operational constraints. 13 refs., 2 figs., 2 tabs.

  11. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

    A listless minimum-zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree is also proposed, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and that the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  12. Improved OSIRIS NO2 retrieval algorithm: description and validation

    NASA Astrophysics Data System (ADS)

    Sioris, Christopher E.; Rieger, Landon A.; Lloyd, Nicholas D.; Bourassa, Adam E.; Roth, Chris Z.; Degenstein, Douglas A.; Camy-Peyret, Claude; Pfeilsticker, Klaus; Berthet, Gwenaël; Catoire, Valéry; Goutail, Florence; Pommereau, Jean-Pierre; McLinden, Chris A.

    2017-03-01

    A new retrieval algorithm for OSIRIS (Optical Spectrograph and Infrared Imager System) nitrogen dioxide (NO2) profiles is described and validated. The algorithm relies on spectral fitting to obtain slant column densities of NO2, followed by inversion using an algebraic reconstruction technique and the SaskTran spherical radiative transfer model (RTM) to obtain vertical profiles of local number density. The validation covers different latitudes (tropical to polar), years (2002-2012), all seasons (winter, spring, summer, and autumn), different concentrations of nitrogen dioxide (from denoxified polar vortex to polar summer), a range of solar zenith angles (68.6-90.5°), and altitudes between 10.5 and 39 km, thereby covering the full retrieval range of a typical OSIRIS NO2 profile. The use of a larger spectral fitting window than used in previous retrievals reduces retrieval uncertainties and the scatter in the retrieved profiles due to noisy radiances. Improvements are also demonstrated through the validation in terms of bias reduction at 15-17 km relative to the OSIRIS operational v3.0 algorithm. The diurnal variation of NO2 along the line of sight is included in a fully spherical multiple scattering RTM for the first time. Using this forward model with built-in photochemistry, the scatter of the differences relative to the correlative balloon NO2 profile data is reduced.

  13. An improved algorithm of fiber tractography demonstrates postischemic cerebral reorganization

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-dong; Lu, Jie; Yao, Li; Li, Kun-cheng; Zhao, Xiao-jie

    2008-03-01

    In vivo white matter tractography by diffusion tensor imaging (DTI) accurately represents the organizational architecture of white matter in the vicinity of brain lesions, especially in the ischemic brain. In this study, we propose an improved fiber tracking algorithm based on TEND, called TENDAS (tensor deflection with adaptive stepping), which introduces a stepping framework for interpreting the algorithm's behavior as a function of the tensor shape (linear-shaped or not) and tract history. The propagation direction at each step is given by the deflection vector. TENDAS tractography, combined with fMRI, was used to examine a 17-year-old patient recovering from congenital right-hemisphere artery stenosis. A meaningless-picture location task was used as the spatial working memory task in this study. We detected functional localization shifted to the contralateral homotypic cortex, along with more prominent and extensive left-sided parietal and medial frontal cortical activations, which were used directly as seed masks for tractography to reconstruct individual spatial parietal pathways. Compared with the TEND algorithm, TENDAS shows smoother and less sharply bending characterization of the white matter architecture of the parietal cortex. The results of this preliminary study were twofold. First, TENDAS may provide more adaptability and accuracy in reconstructing certain anatomical features, although it is very difficult to verify tractography maps of white matter connectivity in the living human brain. Second, our study indicates that the combination of TENDAS and fMRI can provide a unique image of functional cortical reorganization and structural modification of postischemic spatial working memory.

  14. Corticosteroid transdermal delivery significantly improves arthritis pain and functional disability.

    PubMed

    Iannitti, Tommaso; McDermott, Michael F; Laurino, Carmen; Malagoli, Andrea; Palmieri, Beniamino

    2017-02-01

    Arthritis is characterized by pain and functional limitation affecting the patients' quality of life. We performed a clinical study to investigate the efficacy of a betamethasone valerate medicated plaster (Betesil) in improving pain and functional disability in patients with arthritis and osteoarthritis. We enrolled 104 patients affected by osteoarthritis (n = 40) or arthritis (n = 64) in different joints. Patients received diclofenac sodium cream (2 g, four times a day) or a 2.25-mg dose of Betesil applied to the painful joint every night before bedtime for 10 days. Pain and functional disability were assessed by the Visual Analogue Scale (VAS) and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scores. Redness was assessed by clinical inspection, and edema by the "fovea sign" method. C-reactive protein (CRP) was also measured; CRP can be used to cost-effectively monitor the efficacy of pharmacological treatment and is increased during the acute-phase response, returning to physiological values after tissue recovery and functional restoration. All measurements were taken at baseline and at 10-day follow-up. At 10-day follow-up, a greater improvement in VAS and WOMAC pain and WOMAC stiffness and functional limitation scores from baseline was observed in patients treated with Betesil compared with diclofenac (all p < 0.01). At 10-day follow-up, improvement in redness, edema, and CRP levels from baseline was also greater in patients treated with Betesil compared with diclofenac (all p < 0.01). This study demonstrates the safety and efficacy of transdermal delivery of betamethasone valerate in patients affected by arthritis and osteoarthritis.

  15. A significantly improved membrane for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Jia, Chuankun; Liu, Jianguo; Yan, Chuanwei

    A novel sandwich-type sulfonated poly(ether ether ketone) (SPEEK)/tungstophosphoric acid (TPA)/polypropylene (PP) composite membrane for a vanadium redox flow battery (VRB) has been developed with improved properties: the permeability of vanadium ions is greatly reduced and the performance of the VRB cell is greatly increased. The membrane is based on a traditional SPEEK membrane embedded with TPA but PP is used to enhance the membrane for the first time. Although its voltage efficiency (VE) is a little lower than that of a Nafion 212 membrane, it is expected to have good prospects for VRB systems because of its low cost and good performance.

  16. Creating a Middle Grades Environment that Significantly Improves Student Achievement

    ERIC Educational Resources Information Center

    L'Esperance, Mark E.; Lenker, Ethan; Bullock, Ann; Lockamy, Becky; Mason, Cathy

    2013-01-01

    This article offers an overview of the framework that Sampson County Public Schools (North Carolina) used to critically reflect on the current state of their middle grades schools. The article also highlights the changes that resulted from the district-wide analysis and the ways in which these changes led to a significant increase in the academic…

  17. An improved Hochberg procedure for multiple tests of significance.

    PubMed

    Rom, Dror M

    2013-02-01

    We propose a simple modification of Hochberg's step-up Bonferroni procedure for multiple tests of significance. The proposed procedure is always more powerful than Hochberg's procedure for more than two tests, and is more powerful than Hommel's procedure for three and four tests. A numerical analysis of the new procedure indicates that its Type I error is controlled under independence of the test statistics, at a level equal to or just below the nominal Type I error. Examination of various non-null configurations of hypotheses shows that the modified procedure has a power advantage over Hochberg's procedure which increases in relationship to the number of false hypotheses.
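
    For reference, the baseline Hochberg step-up procedure that the paper sharpens fits in a few lines; Rom's modification replaces the Bonferroni-type critical constants below with slightly larger, recursively computed ones, which are not reproduced here.

        def hochberg(pvals, alpha=0.05):
            """Hochberg step-up procedure: sort p-values ascending, find the
            largest i with p_(i) <= alpha / (m - i + 1), and reject the i
            hypotheses with the smallest p-values. Returns a rejection flag
            per input p-value."""
            m = len(pvals)
            order = sorted(range(m), key=lambda i: pvals[i])
            reject_upto = -1
            for rank, idx in enumerate(order):          # rank 0 .. m-1 (i = rank + 1)
                if pvals[idx] <= alpha / (m - rank):    # critical value alpha/(m - i + 1)
                    reject_upto = rank
            rejected = [False] * m
            for rank in range(reject_upto + 1):
                rejected[order[rank]] = True
            return rejected

        # e.g. hochberg([0.012, 0.030, 0.041, 0.20]) -> per-hypothesis decisions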

  18. Solifenacin significantly improves all symptoms of overactive bladder syndrome

    PubMed Central

    CHAPPLE, C R; CARDOZO, L; STEERS, W D; GOVIER, F E

    2006-01-01

    Overactive bladder syndrome (OAB) is a chronic condition characterised by urgency, with or without associated urge incontinence. Solifenacin succinate is a once daily, bladder selective antimuscarinic available in two doses (5 and 10 mg). The recommended dose is 5 mg once daily and can be increased to 10 mg once daily if 5 mg is well tolerated. This article presents pooled efficacy and safety data from four large, placebo-controlled, multinational phase III trials of solifenacin succinate with a total enrolment of over 2800 patients. Data from these trials show that solifenacin 5 and 10 mg once daily is significantly more effective than placebo at reducing urgency, incontinence, micturition frequency and nocturia and at increasing volume voided per micturition. Adverse events were mainly mild-to-moderate in all treatment groups. The results of these phase III trials support the use of solifenacin in the treatment of OAB. PMID:16893438

  19. Accuracy of pitch matching significantly improved by live voice model.

    PubMed

    Granot, Roni Y; Israel-Kolatt, Rona; Gilboa, Avi; Kolatt, Tsafrir

    2013-05-01

    Singing is, undoubtedly, the most fundamental expression of our musical capacity, yet an estimated 10-15% of Western population sings "out-of-tune (OOT)." Previous research in children and adults suggests, albeit inconsistently, that imitating a human voice can improve pitch matching. In the present study, we focus on the potentially beneficial effects of the human voice and especially the live human voice. Eighteen participants varying in their singing abilities were required to imitate in singing a set of nine ascending and descending intervals presented to them in five different randomized blocked conditions: live piano, recorded piano, live voice using optimal voice production, recorded voice using optimal voice production, and recorded voice using artificial forced voice production. Pitch and interval matching in singing were much more accurate when participants repeated sung intervals as compared with intervals played to them on the piano. The advantage of the vocal over the piano stimuli was robust and emerged clearly regardless of whether piano tones were played live and in full view or were presented via recording. Live vocal stimuli elicited higher accuracy than recorded vocal stimuli, especially when the recorded vocal stimuli were produced in a forced vocal production. Remarkably, even those who would be considered OOT singers on the basis of their performance when repeating piano tones were able to pitch match live vocal sounds, with deviations well within the range of what is considered accurate singing (M=46.0, standard deviation=39.2 cents). In fact, those participants who were most OOT gained the most from the live voice model. Results are discussed in light of the dual auditory-motor encoding of pitch analogous to that found in speech.

  20. Improvement of Service Searching Algorithm in the JVO Portal Site

    NASA Astrophysics Data System (ADS)

    Eguchi, S.; Shirasak, Y.; Komiya, Y.; Ohishi, M.; Mizumoto, Y.; Ishihara, Y.; Tsutsumi, J.; Hiyama, T.; Nakamoto, H.; Sakamoto, M.

    2012-09-01

    The Virtual Observatory (VO) consists of a huge number of astronomical databases which contain both theoretical and observational data obtained with various methods, telescopes, and instruments. Since the VO provides raw and processed observational data, astronomers can concentrate on their scientific interests without awareness of instruments; all they have to know is which service provides the data of interest. On the other hand, services on the VO system would be better used if queries could be made by telescope, wavelength, and object type; currently it is difficult for newcomers to find the desired ones. We have recently started a project to improve the data service functionality and usability of the Japanese VO (JVO) portal site. We are now working on implementing a function to automatically classify all services on the VO in terms of telescopes and instruments without referring to the facility and instrument keywords, which are not always filled in. In this paper, we report a new algorithm for constructing the facility and instrument keywords from other information in a service description, and discuss its effectiveness. We also propose a new user interface for the portal site based on this algorithm.

  1. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the processing time for an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be completed within dozens of milliseconds, enabling real-time use in practical situations.
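
    The plain iterative Savitzky-Golay background estimation that RIA-SG-SR accelerates is compact: smooth the working curve, clip it to the pointwise minimum of itself and its smoothed version, and repeat. The sketch below shows that baseline scheme only; the Gauss-Seidel update and the successive-relaxation factor of the proposed method are not reproduced, and the window length and polynomial order are illustrative (the window must be odd and no longer than the spectrum).

        import numpy as np
        from scipy.signal import savgol_filter

        def sg_baseline(spectrum, window=101, poly=3, max_iter=200, tol=1e-6):
            """Iteratively estimate the slowly varying fluorescence background
            of a Raman spectrum by repeated SG smoothing with downward clipping."""
            work = np.asarray(spectrum, dtype=float).copy()
            for _ in range(max_iter):
                smoothed = savgol_filter(work, window, poly)
                clipped = np.minimum(work, smoothed)    # never rise above the data
                if np.max(np.abs(clipped - work)) < tol:
                    break
                work = clipped
            return work                                  # estimated background

        # corrected = spectrum - sg_baseline(spectrum)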

  2. Protein-fold recognition using an improved single-source K diverse shortest paths algorithm.

    PubMed

    Lhota, John; Xie, Lei

    2016-04-01

    Protein structure prediction, when construed as a fold recognition problem, is one of the most important applications of similarity search in bioinformatics. A new protein-fold recognition method is reported which combines a single-source K diverse shortest path (SSKDSP) algorithm with Enrichment of Network Topological Similarity (ENTS) algorithm to search a graphic feature space generated using sequence similarity and structural similarity metrics. A modified, more efficient SSKDSP algorithm is developed to improve the performance of graph searching. The new implementation of the SSKDSP algorithm empirically requires 82% less memory and 61% less time than the current implementation, allowing for the analysis of larger, denser graphs. Furthermore, the statistical significance of fold ranking generated from SSKDSP is assessed using ENTS. The reported ENTS-SSKDSP algorithm outperforms original ENTS that uses random walk with restart for the graph search as well as other state-of-the-art protein structure prediction algorithms HHSearch and Sparks-X, as evaluated by a benchmark of 600 query proteins. The reported methods may easily be extended to other similarity search problems in bioinformatics and chemoinformatics. The SSKDSP software is available at http://compsci.hunter.cuny.edu/~leixie/sskdsp.html.

  3. New image compression algorithm based on improved reversible biorthogonal integer wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Yu, Xianchuan

    2012-10-01

    Low computational complexity and high coding efficiency are the most significant requirements for image compression and transmission. The reversible biorthogonal integer wavelet transform (RB-IWT) keeps computational complexity low through the lifting scheme (LS) and allows both lossy and lossless decoding from a single bitstream; however, it degrades coding performance and the peak signal-to-noise ratio (PSNR) for image compression. In this paper, a new IWT-based compression scheme based on an optimal RB-IWT and an improved SPECK is presented. In this new algorithm, the scaling parameter of each subband is chosen to optimize the transform coefficients. During coding, all image coefficients are encoded using a simple, efficient quadtree partitioning method. The scheme is similar to SPECK, but the new method uses a single quadtree partitioning instead of the set partitioning and octave-band partitioning of the original SPECK, which reduces coding complexity. Experimental results show that the new algorithm not only has low computational complexity, but also provides lossy-coding PSNR performance comparable to that of the SPIHT algorithm using RB-IWT filters and better than that of the SPECK algorithm. Additionally, the new algorithm supports both efficient lossy and lossless compression from a single bitstream. The presented algorithm is valuable for future remote sensing image compression.
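
    To make the RB-IWT building block concrete, one level of the reversible CDF 5/3 integer wavelet transform via lifting is sketched below (even-length input and a simple symmetric boundary extension assumed for brevity); reversibility follows because each predict/update step can be undone exactly in integer arithmetic. This illustrates the general LS mechanism, not the paper's specific filters or scaling.

        def lift53_forward(x):
            """One level of the 5/3 integer lifting transform on a list of
            ints; returns (lowpass, highpass) integer subbands."""
            n = len(x)
            # predict step: detail (highpass) coefficients
            d = [x[2 * i + 1] - (x[2 * i] + x[min(2 * i + 2, n - 2)]) // 2
                 for i in range(n // 2)]
            # update step: approximation (lowpass) coefficients
            s = [x[2 * i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
                 for i in range(n // 2)]
            return s, d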

  4. An improved spectral graph partitioning algorithm for mapping parallel computations

    SciTech Connect

    Hendrickson, B.; Leland, R.

    1992-09-01

    Efficient use of a distributed memory parallel computer requires that the computational load be balanced across processors in a way that minimizes interprocessor communication. We present a new domain mapping algorithm that extends recent work in which ideas from spectral graph theory have been applied to this problem. Our generalization of spectral graph bisection involves a novel use of multiple eigenvectors to allow for division of a computation into four or eight parts at each stage of a recursive decomposition. The resulting method is suitable for scientific computations like irregular finite elements or differences performed on hypercube or mesh architecture machines. Experimental results confirm that the new method provides better decompositions arrived at more economically and robustly than with previous spectral methods. We have also improved upon the known spectral lower bound for graph bisection.
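
    Classic spectral bisection, the starting point the paper generalizes to four- and eight-way division with multiple eigenvectors, fits in a few lines of dense linear algebra; a real partitioner would use a sparse Lanczos solver. A minimal sketch:

        import numpy as np

        def spectral_bisect(adj):
            """Split a graph (dense adjacency matrix) into two halves by the
            sign pattern of the Fiedler vector, the eigenvector of the
            second-smallest Laplacian eigenvalue; a median split keeps the
            halves balanced. Returns a boolean membership mask."""
            adj = np.asarray(adj, dtype=float)
            lap = np.diag(adj.sum(axis=1)) - adj     # graph Laplacian L = D - A
            vals, vecs = np.linalg.eigh(lap)         # eigenvalues sorted ascending
            fiedler = vecs[:, 1]
            return fiedler <= np.median(fiedler)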

  5. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
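
    A minimal form of the idea is a random-walk Kalman filter on the two-axis pointing offset, with each conscan revolution contributing one noisy measurement of that offset. Identity state-transition and measurement models are assumed here for brevity; the article's DSN formulation is more detailed.

        import numpy as np

        def kalman_update(x, P, z, R, Q):
            """One predict/update cycle. x: (2,) pointing-offset estimate;
            P: (2,2) covariance; z: (2,) offset inferred from one conscan
            revolution; R: measurement noise; Q: random-walk process noise."""
            P = P + Q                                  # predict: offset drifts slowly
            K = P @ np.linalg.inv(P + R)               # Kalman gain
            x = x + K @ (z - x)                        # blend prediction and measurement
            P = (np.eye(2) - K) @ P
            return x, P

    Because the estimate updates after every revolution, the filter naturally provides the continuous update information and tolerance of missing data cited above, unlike a batch FFT over a full scan.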

  6. [Tree species information extraction of farmland returned to forests based on improved support vector machine algorithm].

    PubMed

    Wu, Jian; Peng, Dao-Li

    2011-04-01

    Spectral difference analysis among tree species and improvement of the classification algorithm are the difficult points in extracting tree species information from remote sensing images, and are also the keys to improving the accuracy of tree species information extraction in areas of farmland returned to forest. TM images were selected for this study, and spectral indexes that could distinguish tree species information were selected by analyzing tree species spectra. The tree species information was then extracted using an improved support vector machine algorithm. Although errors and confusion exist, this method shows satisfactory results, with an overall accuracy of 81.7%; the corresponding result for the traditional method is 72.5%. The method in this paper achieves more precise extraction of tree species information, and the results can meet the demands of accurate monitoring and decision-making. This method is valuable for the rapid assessment of project quality.

  7. Unmasking translucent protein particles by improved micro-flow imaging™ algorithms.

    PubMed

    Pedersen, Jesper Søndergaard; Persson, Malin

    2014-01-01

    Micro-flow imaging (MFI™) is an increasingly important technique for the characterization of subvisible particles during the development of biopharmaceutical products. Protein particles are highly variable in size, appearance, and translucency, posing challenges to optical detection techniques. We have developed a set of standard statistical tests applicable for routine evaluation of MFI™ particle dataset quality. The tests evaluate the spatial randomness of particles using nearest-neighbor and quadrat analysis. Using case studies of stressed antibody samples, we demonstrate how the tests uncover fragmentation artifacts and uneven detector sensitivity for translucent particles. To improve the detection of translucent particles, a new local pixel intensity variance particle detection algorithm has been developed. The improved algorithm significantly decreases fragmentation artifacts and increases sensitivity toward translucent particles in general. Our results highlight current limitations and the potential for improvements in optical detection techniques for subvisible protein aggregates.
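
    The local-variance idea can be sketched directly: translucent particles barely move the local mean intensity but do raise the local variance, so thresholding a variance map flags them. The window size and threshold below are illustrative assumptions; the commercial MFI algorithm itself is proprietary.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_variance_mask(img, size=5, thresh=None):
            """Flag pixels whose local intensity variance exceeds a threshold,
            using var = E[x^2] - E[x]^2 computed with box filters."""
            img = img.astype(float)
            mean = uniform_filter(img, size)
            mean_sq = uniform_filter(img * img, size)
            var = mean_sq - mean * mean
            if thresh is None:
                thresh = var.mean() + 3.0 * var.std()   # simple global threshold
            return var > thresh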

  8. New Classification Method Based on Support-Significant Association Rules Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Guoxin; Shi, Wen

    One of the most well-studied problems in data mining is mining for association rules. There has also been research introducing association rule mining methods for classification tasks, and such classification methods can be applied to customer segmentation. Currently, most association rule mining methods are based on a support-confidence framework, in which rules satisfying both minimum support and minimum confidence are returned to the analyst as strong association rules. However, this type of association rule mining method lacks a rigorous statistical guarantee and can even be misleading. A new classification model for customer segmentation, based on an association rule mining algorithm, is proposed in this paper. The new model is based on a support-significance association rule mining method, in which the confidence measure for an association rule is replaced by the significance of the rule, a better evaluation standard for association rules. Experiments on customer segmentation data from the UCI repository indicate the effectiveness of the new model.
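
    One standard way to realize a support-significance criterion is to keep the minimum-support filter but judge each candidate rule A -> B by a test of statistical independence instead of confidence, for example a chi-square test on the 2x2 contingency table of A and B. A sketch under that assumption (the paper's exact statistic may differ):

        from scipy.stats import chi2_contingency

        def rule_is_significant(n_ab, n_a_only, n_b_only, n_neither, alpha=0.05):
            """Test whether antecedent A and consequent B co-occur more often
            than independence would predict. Arguments are transaction counts:
            A and B, A without B, B without A, and neither."""
            table = [[n_ab, n_a_only],
                     [n_b_only, n_neither]]
            chi2, p, dof, _ = chi2_contingency(table)
            return p < alpha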

  9. Further development of an improved altimeter wind speed algorithm

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Wentz, Frank J.

    1986-01-01

    A previous altimeter wind speed retrieval algorithm was developed on the basis of wind speeds in the limited range from about 4 to 14 m/s. In this paper, a new approach which gives a wind speed model function applicable over the range 0 to 21 m/s is used. The method is based on comparing 50 km along-track averages of the altimeter normalized radar cross section measurements with neighboring off-nadir scatterometer wind speed measurements. The scatterometer winds are constructed from 100 km binned measurements of radar cross section and are located approximately 200 km from the satellite subtrack. The new model function agrees very well with earlier versions up to wind speeds of 14 m/s, but differs significantly at higher wind speeds. The relevance of these results to the Geosat altimeter launched in March 1985 is discussed.

  10. Improvements and Extensions for Joint Polar Satellite System Algorithms

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Feeley, J. H.; Miller, S. W.; Jamilkowski, M. L.

    2014-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS replaced the afternoon orbit component and ground processing system of the older POES system managed by NOAA. JPSS satellites will carry sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is the Common Ground System (CGS), which provides command, control, and communications (C3), data processing, and product delivery. CGS's data processing capability processes data from the JPSS satellites to provide environmental data products (including Sensor Data Records (SDRs) and Environmental Data Records (EDRs)) to the NOAA Satellite Operations Facility. The first satellite in the JPSS constellation, known as the Suomi National Polar-orbiting Partnership (S-NPP) satellite, was launched on 28 October 2011. CGS is currently processing and delivering SDRs and EDRs for S-NPP and will continue through the lifetime of the JPSS program. The EDRs for S-NPP are currently undergoing an extensive Calibration and Validation (Cal/Val) campaign. Changes identified by the Cal/Val campaign are becoming available for implementation into the operational system in support of both S-NPP and JPSS-1 (scheduled for launch in 2017). Some of these changes will be available in time to update the S-NPP algorithm baseline, while others will become operational just prior to JPSS-1 launch. In addition, new capabilities, such as higher spectral and spatial resolution, will be exercised on JPSS-1. This paper will describe changes to current algorithms and products as a result of the Cal/Val campaign and related initiatives for improved capabilities. Improvements include Cross Track Infrared Sounder high spectral

  11. Improvement of retrieval algorithms for severe air pollution

    NASA Astrophysics Data System (ADS)

    Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko

    2016-10-01

    Increased emissions of anthropogenic aerosols associated with economic growth can lead to increased concentrations of hazardous air pollutants. Furthermore, dust storms and biomass burning plumes can cause serious environmental hazards, yet their aerosol properties are poorly understood. Our research group has worked on the development of an efficient algorithm for aerosol retrieval during hazy episodes (dense concentrations of atmospheric aerosols). Near-UV measurements, in particular, are useful for the detection of carbonaceous aerosols. Biomass burning aerosols (BBA) from large-scale forest fires and/or agricultural burning exacerbate severe air pollution, and global warming and climate change have increased the incidence of forest fires, which in turn accelerate climate change. This vicious cycle degrades the quality of the global environment and human health. The Japan Aerospace Exploration Agency (JAXA) has been developing a new Earth observing system, the GCOM (Global Change Observation Mission) project, which consists of two satellite series: GCOM-W1 and GCOM-C1. The first GCOM-C satellite will carry the SGLI (second-generation global imager), to be launched in early 2017. The SGLI is capable of observation in 19 channels, including a near-UV channel (0.380 μm) and two polarization channels at red and near-infrared wavelengths of 0.67 and 0.87 μm. Thus, global aerosol retrieval will be achieved with simultaneous polarization and total radiance measurements. In this study, algorithm improvement for aerosol remote sensing, especially of BBA episodes, is examined using Terra/MODIS measurements from 2003, when the GLI and POLDER-2 sensors were working onboard the Japanese satellite ADEOS-2.

  12. Research on video target tracking technology based on improved SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhemin; Guo, Zhijie; Yuang, Ye

    2017-01-01

    A novel target tracking algorithm based on an improved SIFT (Scale-Invariant Feature Transform) algorithm is proposed in this paper. To improve real-time performance, the processing neighborhood of SIFT is reduced to decrease computational complexity, and the dimension of the SIFT descriptor is reduced from 128 to 40. Simulations and experiments show that the improved algorithm achieves low computational complexity together with high tracking accuracy and robustness.
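
    The paper's modified descriptor is not public, but shrinking standard SIFT descriptors to 40 dimensions can be sketched as follows; this assumes opencv-python and scikit-learn are installed, uses PCA as a generic reduction (not necessarily the authors' method), and `frame.png` is a hypothetical input image with at least 40 keypoints.

```python
# Reduce 128-D SIFT descriptors to 40 dimensions with PCA
# (a common way to speed up descriptor matching).
import cv2
import numpy as np
from sklearn.decomposition import PCA

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, desc = sift.detectAndCompute(img, None)    # desc: (N, 128)

pca = PCA(n_components=40)
desc40 = pca.fit_transform(desc.astype(np.float32))   # (N, 40)
print(desc.shape, "->", desc40.shape)
```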

  13. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can cause measuring buoys to break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On the one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction at the two locations studied (Caribbean Sea and West Atlantic).

  14. The photon dose calculation algorithm used in breast radiotherapy has significant impact on the parameters of radiobiological models.

    PubMed

    Petillion, Saskia; Swinnen, Ans; Defraene, Gilles; Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2014-07-08

    The comparison of the pencil beam dose calculation algorithm with modified Batho heterogeneity correction (PBC-MB) and the analytical anisotropic algorithm (AAA), and the mutual comparison of advanced dose calculation algorithms used in breast radiotherapy, have focused on the differences between the physical dose distributions. Studies on the radiobiological impact of the algorithm (both on tumor control and on the prediction of moderate breast fibrosis) are lacking. We, therefore, investigated the radiobiological impact of the dose calculation algorithm in whole breast radiotherapy. The clinical dose distributions of 30 breast cancer patients, calculated with PBC-MB, were recalculated with fixed monitor units using more advanced algorithms: AAA and Acuros XB. For the latter, both dose reporting modes were used (i.e., dose-to-medium and dose-to-water). Next, the tumor control probability (TCP) and the normal tissue complication probability (NTCP) of each dose distribution were calculated with the Poisson model and with the relative seriality model, respectively. The endpoint for the NTCP calculation was moderate breast fibrosis five years post-treatment. The differences were checked for significance with the paired t-test. The more advanced algorithms predicted a significantly lower TCP and NTCP of moderate breast fibrosis than those found during the corresponding clinical follow-up study based on PBC calculations. The differences varied between 1% and 2.1% for the TCP and between 2.9% and 5.5% for the NTCP of moderate breast fibrosis. The significant differences were eliminated by determination of algorithm-specific model parameters using least squares fitting. Application of the new parameters to a second group of 30 breast cancer patients proved their appropriateness. In this study, we assessed the impact of the dose calculation algorithms used in whole breast radiotherapy on the parameters of the radiobiological models. The radiobiological impact was eliminated by

  15. Utilization of advanced clutter suppression algorithms for improved standoff detection and identification of radionuclide threats

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Shokhirev, Kirill; Mulhall, Phil; Payne, David; Harris, Bernard

    2014-05-01

    Technology development efforts seek to increase the capability of detection systems in the low signal-to-noise regimes encountered in both portal and urban detection applications. We have recently demonstrated significant performance enhancement in existing Advanced Spectroscopic Portals (ASP), Standoff Radiation Detection Systems (SORDS) and handheld isotope identifiers through the use of new advanced detection and identification algorithms. The Poisson Clutter Split (PCS) algorithm is a novel approach for radiological background estimation that improves the detection and discrimination capability of medium resolution detectors. The algorithm processes energy spectra and performs clutter suppression, yielding de-noised gamma-ray spectra that enable significant enhancements in the detection and identification of low-activity threats with spectral target recognition algorithms. The performance is achievable at the short integration times (0.5-1 second) necessary for operation in a high-throughput and dynamic environment. PCS has been integrated with ASP, SORDS and RIID units and evaluated in field trials. We present a quantitative analysis of algorithm performance against data collected by a range of systems in several cluttered environments (urban and containerized) with embedded check sources. We show that the algorithm achieves a high probability of detection/identification with low false alarm rates under low SNR regimes. For example, utilizing only 4 out of the 12 NaI detectors currently available within an ASP unit, PCS processing demonstrated a probability of detection/identification (Pd,ID) above 90% at a constant false alarm rate (CFAR) of 1 in 1000 occupancies against weak-activity (7-8 μCi) and shielded sources traveling through the portal at 30 mph. This vehicle speed is a factor of 6 higher than was previously possible and results in a significant increase in system throughput and overall performance.

  16. Improving space object detection using a Fourier likelihood ratio detection algorithm

    NASA Astrophysics Data System (ADS)

    Becker, David J.; Cain, Stephen C.

    2016-09-01

    In this paper a new detection algorithm is proposed and developed for detecting space objects from images obtained using a ground-based telescope, with the goal of improving space situational awareness. Most current space object detection algorithms rely on developing a likelihood ratio test (LRT) for the observed data based on a binary hypothesis test. These algorithms assume that the observed data are Gaussian or Poisson distributed both under the hypothesis that a low signal-to-noise ratio (SNR) space object is present in the data and under the hypothesis that an object is absent. The LRT algorithm in this paper was developed on the assumption that the distribution of the Fourier transform of the observed data differs when a low-SNR object is present compared to when the data contain only background noise and known space objects. When an object is present, the real component of the Fourier transform of the intensity was found to follow a Gaussian distribution whose mean differs significantly from that of data containing no object, even at low SNR levels. As the separation between these two probability distribution functions increases, it becomes more likely that an object can be detected. In this paper, simulated data are used to demonstrate the effectiveness of this algorithm and to highlight the benefits gained from it.
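
    As a sketch of the underlying decision rule (a generic Gaussian log-likelihood ratio test under stated assumptions, not the paper's full detector): with known means mu0/mu1 and a common variance for the real Fourier component under each hypothesis, the LRT reduces to a simple threshold test.

```python
# Binary LLR test on a scalar statistic assumed Gaussian under both
# hypotheses (H0: background only, H1: object present).
import numpy as np
from scipy.stats import norm

def llr_detect(x, mu0, mu1, sigma, threshold=0.0):
    """Return True where the log-likelihood ratio favors 'object present'."""
    llr = norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
    return llr > threshold

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500),    # H0 samples
                    rng.normal(0.8, 1.0, 500)])   # H1 samples
decisions = llr_detect(x, mu0=0.0, mu1=0.8, sigma=1.0)
print("detection rate on H1 block:", decisions[500:].mean())
```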

  17. The control algorithm improving performance of electric load simulator

    NASA Astrophysics Data System (ADS)

    Guo, Chenxia; Yang, Ruifeng; Zhang, Peng; Fu, Mengyao

    2017-01-01

    In order to improve the dynamic performance and signal-tracking accuracy of an electric load simulator, the influence of the moment of inertia, stiffness, friction, gaps, and other factors on system performance was analyzed in this paper on the basis of the working principle of the load simulator. A PID controller based on a wavelet neural network was used to compensate the friction nonlinearity, while a gap inverse model was used to compensate the gap nonlinearity. The compensation results were simulated in MATLAB. The simulations showed that the follow-up performance of the system's sine response improved after compensation: the tracking error was significantly reduced, the accuracy was greatly improved, and the dynamic performance of the system was enhanced.

  18. Improvement of unsupervised texture classification based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Togami, Yuuki; Arai, Kohei

    2004-11-01

    At a previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In the method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) the number of classification categories is determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training-area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) the selection operation employs the elite preservation strategy; 6) the crossover operation employs multi-point crossover, with two parent chromosomes selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method has not been automated because it requires not only the target image but also the number of classification categories. In this paper, we describe some improvements for implementing automated texture classification. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and provide reasonable classification results.
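
    As a minimal illustration of the GA machinery this abstract names (elite preservation, roulette-wheel selection, multi-point crossover, and rate-controlled mutation), here is a generic bit-string GA with a toy fitness function; the paper's CRMTC x SNSFS fitness and chromosome encoding are not reproduced.

```python
# Generic GA loop: elitism + roulette selection + two-point crossover.
import numpy as np

rng = np.random.default_rng(0)
POP, GENES, GENERATIONS, MUT_RATE = 50, 32, 100, 0.01
fitness = lambda pop: pop.sum(axis=1).astype(float)   # toy: count of 1-bits

pop = rng.integers(0, 2, (POP, GENES))
for _ in range(GENERATIONS):
    f = fitness(pop)
    elite = pop[f.argmax()].copy()                    # elite preservation
    probs = (f + 1e-9) / (f + 1e-9).sum()             # roulette-wheel weights
    parents = pop[rng.choice(POP, size=(POP, 2), p=probs)]
    cut1, cut2 = sorted(rng.integers(1, GENES, 2))    # two cut points (shared
    children = parents[:, 0].copy()                   # per generation, for brevity)
    children[:, cut1:cut2] = parents[:, 1, cut1:cut2]
    flips = rng.random(children.shape) < MUT_RATE     # bit-flip mutation
    children ^= flips.astype(children.dtype)
    children[0] = elite                               # keep the elite individual
    pop = children
print("best fitness:", fitness(pop).max())
```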

  19. Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching

    NASA Astrophysics Data System (ADS)

    Lipka, Paula Ann

    This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow (ACOPF) solution, and allows transmission switching problems to be solved in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem often solve the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. Transmission networks in practice often have lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power (as is the case in the Polish test systems) to limits on current. The main consideration in setting line limits is temperature, which relates linearly to current. Setting limits on real or apparent power is actually a proxy for using limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the ACOPF problem is used to find an AC-feasible generator dispatch. In this sequential linearization, there are parameters that are set to the previous optimal solution. Additionally, to improve the accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted. The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction
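
    The apparent-power-to-current conversion described here follows from S = sqrt(3) * V_LL * I for a balanced three-phase line; a minimal sketch, assuming a nominal line-to-line voltage is available:

```python
# Convert a three-phase apparent-power limit (MVA) to a current limit (A),
# given the nominal line-to-line voltage:  I = S / (sqrt(3) * V_LL).
import math

def current_limit_amps(s_limit_mva: float, v_ll_kv: float) -> float:
    return s_limit_mva * 1e6 / (math.sqrt(3) * v_ll_kv * 1e3)

# e.g. a 100 MVA limit on a 230 kV line
print(f"{current_limit_amps(100.0, 230.0):.0f} A")   # ~251 A
```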

  20. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  1. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  2. Some Improvements on Signed Window Algorithms for Scalar Multiplications in Elliptic Curve Cryptosystems

    NASA Technical Reports Server (NTRS)

    Vo, San C.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Scalar multiplication is an essential operation in elliptic curve cryptosystems because its implementation determines the speed and the memory storage requirements. This paper discusses some improvements on two popular signed window algorithms for implementing scalar multiplication of an elliptic curve point: Morain-Olivos's algorithm and Koyama-Tsuruoka's algorithm.
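
    The signed-digit idea behind such algorithms can be sketched with the non-adjacent form (NAF), the simplest signed recoding (shown here for context; it is not the Morain-Olivos or Koyama-Tsuruoka recoding itself). Digits lie in {-1, 0, 1} with no two adjacent nonzeros, which reduces the expected number of point additions in a double-and-add loop.

```python
# Non-adjacent form (NAF) recoding of a scalar k.
def naf(k: int) -> list[int]:
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)      # +1 if k = 1 mod 4, -1 if k = 3 mod 4
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits                # least significant digit first

print(naf(7))    # [-1, 0, 0, 1]       i.e. 7 = -1 + 8
print(naf(13))   # [1, 0, -1, 0, 1]    i.e. 13 = 1 - 4 + 16
```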

  3. Improvement of phase unwrapping algorithm based on image segmentation and merging

    NASA Astrophysics Data System (ADS)

    Wang, Huaying; Liu, Feifei; Zhu, Qiaofen

    2013-11-01

    A modified algorithm based on image segmentation and merging is proposed and demonstrated to improve the accuracy of phase unwrapping. There are three improvements. First, an unequal-region segmentation method is adopted, which allows regional information to be reproduced completely and accurately. Second, different phase unwrapping algorithms are used for regions with different noise and undersampling conditions. Last, to improve the accuracy of the phase unwrapping results, a weighted stacking method is applied to the overlapping regions that arise from block merging. The proposed algorithm has been verified by simulations and experiments. The results not only validate the accuracy and speed with which the improved algorithm recovers the phase information of the measured object, but also illustrate the importance of the improved algorithm for cell identification in Traditional Chinese Medicine Decoction Pieces.

  4. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and to suppress pseudo-Gibbs artifacts. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method produced significantly better figures of merit, root mean square error, peak area, and sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise, and that a smoothed spectrum is suitable for straightforward automated quantitative analysis.
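
    For context, here is a minimal wavelet soft-threshold de-noising sketch using PyWavelets (generic universal-threshold denoising; the paper's improved threshold function and shift-invariance are not reproduced). The `spectrum` array is synthetic.

```python
# Wavelet soft-threshold de-noising of a 1-D spectrum.
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="sym8", level=4):
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # noise scale estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(spectrum)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 1024)
spectrum = 200 * np.exp(-((x - 0.5) / 0.02) ** 2) + rng.poisson(20, 1024)
print(wavelet_denoise(spectrum).shape)
```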

  5. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm for estimating the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874

  6. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm for estimating the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior.
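
    For context, here is a minimal sketch of the generic cuckoo search these records improve upon (not the ICS variant with orthogonal design and simulated annealing): candidates are generated by Lévy flights around the current best, and a fraction pa of the worst nests is abandoned each iteration.

```python
# Generic cuckoo search with Mantegna-style Levy steps.
import numpy as np
from math import gamma, sin, pi

def levy_step(size, beta=1.5, rng=None):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n=15, iters=200, pa=0.25, alpha=0.01):
    rng = np.random.default_rng(3)
    nests = rng.uniform(-5, 5, (n, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()]
        new = nests + alpha * levy_step((n, dim), rng=rng) * (nests - best)
        new_fit = np.apply_along_axis(f, 1, new)
        better = new_fit < fit                       # greedy replacement
        nests[better], fit[better] = new[better], new_fit[better]
        worst = fit.argsort()[-int(pa * n):]         # abandon worst nests
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
print(cuckoo_search(sphere))
```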

  7. An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing

    NASA Astrophysics Data System (ADS)

    Zhao, Yunji; Pei, Hailong

    In a vision-based autonomous landing system for a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm is proposed to resolve the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the region of the target using CamShift; second, detect feature points in the acquired region using the SURF algorithm; third, match the template target against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.

  8. Mapping Soil Properties of Africa at 250 m Resolution: Random Forests Significantly Improve Current Predictions

    PubMed Central

    Hengl, Tomislav; Heuvelink, Gerard B. M.; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Shepherd, Keith D.; Sila, Andrew; MacMillan, Robert A.; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E.

    2015-01-01

    80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008–2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to the agricultural management—organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15–75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological

  9. Mapping Soil Properties of Africa at 250 m Resolution: Random Forests Significantly Improve Current Predictions.

    PubMed

    Hengl, Tomislav; Heuvelink, Gerard B M; Kempen, Bas; Leenaars, Johan G B; Walsh, Markus G; Shepherd, Keith D; Sila, Andrew; MacMillan, Robert A; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E

    2015-01-01

    80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008-2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to the agricultural management--organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15-75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological
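
    The random forest versus linear regression comparison can be sketched with scikit-learn; this is an illustrative setup on synthetic regression data (the AfSIS point data and covariates are not reproduced here), reporting 5-fold cross-validated RMSE for both models.

```python
# 5-fold cross-validated RMSE: random forest vs. linear regression.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0,
                       random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("random forest",
                     RandomForestRegressor(n_estimators=200, random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.2f}")
```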

  10. Development and Evaluation of Algorithms to Improve Small- and Medium-Size Commercial Building Operations

    SciTech Connect

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.; Underhill, Ronald M.

    2016-10-31

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. As a result, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by individual thermostats. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them mitigating climate change impacts. Studies have shown that managing the set points and schedules of RTUs can result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can cause excessive wear and lead to premature failure of the compressor or its components. Short cycling can decrease average efficiency significantly (by up to 10%), even if there are no physical failures in the equipment. SMBs also use time-of-day scheduling to start the RTUs before the building is occupied and shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby achieving persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported here describes three algorithms, for detecting the zone set point temperature, the RTU cycling rate, and the occupancy schedule, that can be deployed on this low-cost infrastructure. These algorithms require only zone temperature data for detection. The algorithms have been tested and validated using

  11. Bayesian fusion algorithm for improved oscillometric blood pressure estimation.

    PubMed

    Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail

    2016-11-01

    A variety of oscillometric algorithms have been recently proposed in the literature for estimation of blood pressure (BP). However, these algorithms possess specific strengths and weaknesses that should be taken into account before selecting the most appropriate one. In this paper, we propose a fusion method to exploit the advantages of the oscillometric algorithms and circumvent their limitations. The proposed fusion method is based on the computation of the weighted arithmetic mean of the oscillometric algorithms estimates, and the weights are obtained using a Bayesian approach by minimizing the mean square error. The proposed approach is used to fuse four different oscillometric blood pressure estimation algorithms. The performance of the proposed method is evaluated on a pilot dataset of 150 oscillometric recordings from 10 subjects. It is found that the mean error and standard deviation of error are reduced relative to the individual estimation algorithms by up to 7 mmHg and 3 mmHg in estimation of systolic pressure, respectively, and by up to 2 mmHg and 3 mmHg in estimation of diastolic pressure, respectively.
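
    A minimal sketch of the fusion rule described above, under the generic assumption of independent, zero-mean estimation errors: weights inversely proportional to each algorithm's error variance minimize the mean square error of the weighted mean. The paper's exact Bayesian derivation is not reproduced, and the numbers below are hypothetical.

```python
# Inverse-variance weighted fusion of several BP estimates.
import numpy as np

def fuse(estimates, error_vars):
    """estimates: per-algorithm BP estimates for one recording;
    error_vars: per-algorithm error variances (e.g. from training data)."""
    w = 1.0 / np.asarray(error_vars)
    w /= w.sum()                               # weights sum to one
    return float(w @ np.asarray(estimates))

sbp_estimates = [118.0, 124.0, 121.0, 127.0]   # four oscillometric algorithms
error_vars = [25.0, 16.0, 9.0, 36.0]           # hypothetical training variances
print(f"fused SBP: {fuse(sbp_estimates, error_vars):.1f} mmHg")
```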

  12. Automated SNP genotype clustering algorithm to improve data completeness in high-throughput SNP genotyping datasets from custom arrays.

    PubMed

    Smith, Edward M; Littrell, Jack; Olivier, Michael

    2007-12-01

    High-throughput SNP genotyping platforms use automated genotype calling algorithms to assign genotypes. While these algorithms work efficiently for individual platforms, they are not compatible with other platforms, and have individual biases that result in missed genotype calls. Here we present data on the use of a second complementary SNP genotype clustering algorithm. The algorithm was originally designed for individual fluorescent SNP genotyping assays, and has been optimized to permit the clustering of large datasets generated from custom-designed Affymetrix SNP panels. In an analysis of data from a 3K array genotyped on 1,560 samples, the additional analysis increased the overall number of genotypes by over 45,000, significantly improving the completeness of the experimental data. This analysis suggests that the use of multiple genotype calling algorithms may be advisable in high-throughput SNP genotyping experiments. The software is written in Perl and is available from the corresponding author.

  13. Improving word recognition in noise among hearing-impaired subjects with a single-channel cochlear noise-reduction algorithm.

    PubMed

    Fink, Nir; Furst, Miriam; Muchnik, Chava

    2012-09-01

    A common complaint of the hearing impaired is the inability to understand speech in noisy environments, even with their hearing assistive devices. Only a few single-channel algorithms have significantly improved speech intelligibility in noise for hearing-impaired listeners. The current study introduces a cochlear noise reduction algorithm based on a cochlear representation of acoustic signals and real-time derivation of a binary speech mask. The contribution of the algorithm to enhancing word recognition in noise was evaluated on a group of 42 normal-hearing subjects, 35 hearing-aid users, 8 cochlear implant recipients, and 14 participants with bimodal devices. Recognition scores of Hebrew monosyllabic words embedded in Gaussian noise at several signal-to-noise ratios (SNRs) were obtained with processed and unprocessed signals. The algorithm was not effective among the normal-hearing participants, but it yielded a significant improvement in some of the hearing-impaired subjects under different listening conditions. Its most impressive benefit appeared among cochlear implant recipients: an improvement of more than 20% in the recognition score of noisy words was obtained by 12, 16, and 26 hearing-impaired participants at SNRs of 30, 24, and 18 dB, respectively. The algorithm has the potential to improve speech intelligibility in background noise, yet further research is required to improve its performance.
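
    The binary-mask idea can be sketched as follows: a generic ideal-binary-mask demo in the STFT domain, not the paper's cochlear-model mask; it assumes separate access to the clean speech and noise, which a real system would have to estimate.

```python
# Ideal binary mask: zero time-frequency cells with low local SNR.
import numpy as np
from scipy.signal import stft, istft

def binary_mask_enhance(speech, noise, fs, thresh_db=0.0):
    mix = speech + noise
    _, _, S = stft(speech, fs, nperseg=512)
    _, _, N = stft(noise, fs, nperseg=512)
    _, _, X = stft(mix, fs, nperseg=512)
    local_snr = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
    mask = (local_snr > thresh_db).astype(float)   # keep high-SNR cells only
    _, enhanced = istft(X * mask, fs, nperseg=512)
    return enhanced

fs = 16000
rng = np.random.default_rng(4)
speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # toy "speech"
noise = 0.5 * rng.normal(size=fs)
print(binary_mask_enhance(speech, noise, fs).shape)
```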

  14. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternately, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials equally matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the

  15. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm.

    PubMed

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, such data are commonly used to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of the classification models and thus reduce their efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adapting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve population diversity and avoid local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10, and the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods.

  16. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm

    PubMed Central

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, such data are commonly used to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of the classification models and thus reduce their efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adapting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve population diversity and avoid local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10, and the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071

  17. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simple implementation, relatively few parameter settings, and promising optimization capability make it widely used in different fields. However, it suffers from slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of an elite solution pool and a block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated on a set of well-known numerical benchmark functions and is then applied to optimize two ship hull forms for minimum resistance. The test results show that the proposed improved ABC algorithm outperforms the basic ABC algorithm on most of the tested problems.
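
    For reference, the basic ABC solution-search equation being improved upon perturbs one randomly chosen dimension of a food source relative to a random neighbor, v_ij = x_ij + phi * (x_ij - x_kj) with phi uniform in [-1, 1]; a minimal sketch:

```python
# Basic ABC candidate generation (the equation the paper improves on).
import numpy as np

def abc_candidate(foods, i, rng):
    n, dim = foods.shape
    k = rng.choice([j for j in range(n) if j != i])   # random neighbor
    j = rng.integers(dim)                             # random dimension
    phi = rng.uniform(-1, 1)
    v = foods[i].copy()
    v[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
    return v

rng = np.random.default_rng(5)
foods = rng.uniform(-5, 5, (10, 3))
print(abc_candidate(foods, 0, rng))
```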

  18. An improved algorithm for labeling connected components in a binary image

    NASA Astrophysics Data System (ADS)

    Yang, Xue D.

    1989-03-01

    In this note, we present an improvement of Schwartz, Sharir, and Siegel's algorithm for labeling the connected components of a binary image. Our algorithm uses the same bracket-marking mechanism as the original to associate equivalent groups. The main improvement is that the three scans per line required by the original algorithm's first pass are reduced to a single scan through a recursive group-boundary dynamic tracking technique, while the per-pixel computation during the scan remains constant-time. The algorithm is fast enough to handle images in real time and simple enough to allow an easy and very economical hardware implementation.
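
    For comparison with the single-scan approach, here is the textbook two-pass connected-component labeling scheme with union-find (4-connectivity); it is not the note's bracket-marking algorithm, but it shows the equivalence-resolution problem both must solve.

```python
# Two-pass connected-component labeling with union-find.
import numpy as np

def label(img):
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a
    labels = np.zeros(img.shape, dtype=int)
    nxt = 1
    for r, c in np.argwhere(img):            # pass 1: provisional labels
        neigh = [labels[r - 1, c] if r else 0, labels[r, c - 1] if c else 0]
        neigh = [n for n in neigh if n]
        if not neigh:
            labels[r, c] = parent.setdefault(nxt, nxt); nxt += 1
        else:
            m = min(neigh)
            labels[r, c] = m
            for n in neigh:                  # union equivalent labels
                parent[find(n)] = find(m)
    for r, c in np.argwhere(labels):         # pass 2: resolve equivalences
        labels[r, c] = find(labels[r, c])
    return labels

img = np.array([[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, 0, 1]])
print(label(img))
```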

  19. On the retrieval of significant wave heights from spaceborne Synthetic Aperture Radar using the Max-Planck Institut algorithm.

    PubMed

    Violante-Carvalho, Nelson

    2005-12-01

    Synthetic Aperture Radar (SAR) onboard satellites is the only source of directional wave spectra with continuous and global coverage. Millions of SAR Wave Mode (SWM) imagettes have been acquired since the launch in the early 1990s of the first European Remote Sensing Satellite, ERS-1, and its successors ERS-2 and ENVISAT, which has opened up many possibilities, especially for wave data assimilation. The main aim of data assimilation is to improve forecasting by introducing available observations into the modeling procedure, minimizing the differences between model estimates and measurements. However, there are limitations in the retrieval of the directional spectrum from SAR images due to nonlinearities in the mapping mechanism. The Max-Planck Institut (MPI) scheme, the first proposed and most widely used algorithm for retrieving directional wave spectra from SAR images, is employed to compare significant wave heights retrieved from ERS-1 SAR against buoy measurements and against the WAM wave model. It is shown that for periods shorter than 12 seconds the WAM model performs better than the MPI scheme, despite the model being used as the first guess for the MPI method; that is, the retrieval degrades the first guess. For periods longer than 12 seconds, the part of the spectrum that is directly measured by SAR, the performance of the MPI scheme is at least as good as that of the WAM model.

  20. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
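
    The agreement statistics reported above can be computed directly with scikit-learn; a minimal sketch with hypothetical ordered exposure-category judgments:

```python
# Unweighted and linearly weighted Cohen's kappa for rater agreement.
from sklearn.metrics import cohen_kappa_score

truth  = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]   # reference exposure categories
judged = [0, 1, 3, 3, 1, 1, 0, 2, 2, 1]   # assessor's judgments
print("unweighted kappa:", cohen_kappa_score(truth, judged))
print("weighted kappa:  ", cohen_kappa_score(truth, judged, weights="linear"))
```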

  1. An improved POCS super-resolution infrared image reconstruction algorithm based on visual mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Jinsong; Dai, Shaosheng; Guo, Zhongyuan; Zhang, Dezhou

    2016-09-01

    The traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction algorithm yields reconstructed images with poor contrast, low signal-to-noise ratio, and blurred edges. To overcome these disadvantages, an improved POCS SR infrared image reconstruction algorithm based on the human visual mechanism is proposed. It introduces a data consistency constraint with variable correction thresholds to highlight target edges and filter out background noise, and it further introduces a contrast constraint that accounts for the resolving ability of human eyes, adaptively enhancing the contrast of the reconstructed image. The experimental results show that the improved POCS algorithm can acquire high-quality infrared images whose contrast, average gradient, and peak signal-to-noise ratio are substantially improved compared with the traditional algorithm.

  2. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGES

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.

  3. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    SciTech Connect

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.

  4. Crossover Improvement for the Genetic Algorithm in Information Retrieval.

    ERIC Educational Resources Information Center

    Vrajitoru, Dana

    1998-01-01

    In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…

  5. Improvement and analysis of ID3 algorithm in decision-making tree

    NASA Astrophysics Data System (ADS)

    Xie, Xiao-Lan; Long, Zhen; Liao, Wen-Qi

    2015-12-01

    For the cooperative system under development, spatial analysis and related data mining techniques are needed to detect subject conflict and redundancy, and the ID3 algorithm is an important data mining algorithm in this context. Because the logarithmic computation in the traditional ID3 decision-tree algorithm is rather complicated, this paper derives a new formula for information gain by optimizing the logarithmic part of the algorithm. Experimental comparison and theoretical analysis show that the IID3 (Improved ID3) algorithm achieves higher computational efficiency and accuracy and is thus worth popularizing.
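
    The quantity whose logarithmic computation ID3 optimizes is the information gain of an attribute; here is a minimal sketch of the standard formula on toy arrays (illustrative, not the paper's optimized formula): gain = entropy(parent) - weighted entropy(children).

```python
# Information gain of a candidate split attribute, as used by ID3.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(y, attribute):
    parent = entropy(y)
    children = 0.0
    for v in np.unique(attribute):
        mask = attribute == v
        children += mask.mean() * entropy(y[mask])
    return parent - children

y    = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # class labels
attr = np.array([0, 0, 1, 1, 1, 1, 0, 0])   # candidate attribute values
print(f"IG = {information_gain(y, attr):.3f} bits")
```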

  6. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is only stimulated by the turbulence inputs and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory, at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  7. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

    PubMed Central

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and it belongs to the family of swarm intelligence optimizations. In this study we propose an improved marriage in honey bees optimization (IMBO), adding a Lévy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416

  8. Protein sequence classification with improved extreme learning machine algorithms.

    PubMed

    Cao, Jiuwen; Xiong, Lianglin

    2014-01-01

    Precisely classifying a protein sequence within a large database of biological protein sequences plays an important role in developing competitive pharmacological products. Conventional methods, which compare an unseen sequence with all identified protein sequences and return the category index of the protein with the highest similarity score, are usually time-consuming. It is therefore urgent and necessary to build an efficient protein sequence classification system. In this paper, we study the performance of protein sequence classification using single-hidden-layer feedforward networks (SLFNs), with the recent efficient extreme learning machine (ELM) and its variants as the training algorithms. The optimally pruned ELM (OP-ELM) is first employed for protein sequence classification in this paper. To further enhance the performance, an ensemble-based SLFN structure is constructed in which multiple SLFNs with the same number of hidden nodes and the same activation function are used as ensemble members, each trained with the same algorithm; the final category index is derived by majority voting. Two approaches, the basic ELM and the OP-ELM, are adopted for the ensemble-based SLFNs. The performance is analyzed and compared with several existing methods using datasets obtained from the Protein Information Resource center. The experimental results show the superiority of the proposed algorithms.
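
    A minimal sketch of the basic ELM training the paper builds on (random, fixed hidden-layer weights; output weights solved in closed form via the Moore-Penrose pseudo-inverse; OP-ELM pruning and the ensemble voting are not shown), on synthetic data:

```python
# Basic extreme learning machine (ELM) for classification.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = y.max() + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random hidden layer
        T = np.eye(n_classes)[y]                   # one-hot targets
        self.beta = np.linalg.pinv(H) @ T          # closed-form output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print("train acc:", (ELM(64).fit(X, y).predict(X) == y).mean())
```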

  9. An improved cooperative adaptive cruise control (CACC) algorithm considering invalid communication

    NASA Astrophysics Data System (ADS)

    Wang, Pangwei; Wang, Yunpeng; Yu, Guizhen; Tang, Tieqiao

    2014-05-01

    For the Cooperative Adaptive Cruise Control (CACC) algorithm, existing research mainly focuses on how inter-vehicle communication can be used to develop a CACC controller, and on the influence of communication delays and actuator lags on string stability. However, whether string stability can be guaranteed when inter-vehicle communication is partially invalid has hardly been considered. This paper presents an improved CACC algorithm based on sliding mode control theory and analyses the range of CACC controller parameters that maintain string stability. A dynamic model of vehicle spacing deviation in a platoon is then established, and the string stability conditions under the improved CACC are analyzed. Unlike traditional CACC algorithms, the proposed algorithm can ensure the functionality of the CACC system even if inter-vehicle communication is partially invalid. Finally, this paper simulates a platoon of five vehicles under the improved CACC algorithm in MATLAB/Simulink, and the simulation results demonstrate that the improved CACC algorithm can maintain the string stability of a CACC platoon by adjusting the controller parameters and enlarging the spacing to prevent accidents. With guaranteed string stability, the proposed CACC algorithm can prevent oscillation of vehicle spacing and reduce chain collision accidents under real-world circumstances.
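
    Spacing-deviation models of this kind are typically built on a spacing policy; below is a minimal sketch of the common constant time-gap policy (the parameter values are assumed for illustration, not taken from the paper):

```python
def spacing_error(x_pred, x_self, v_self, standstill=5.0, headway=0.6):
    """Constant time-gap spacing policy error for vehicle i:
    e_i = x_{i-1} - x_i - (standstill + headway * v_i).
    standstill and headway are assumed parameter values."""
    return x_pred - x_self - (standstill + headway * v_self)

# A vehicle 10 m behind its predecessor at 20 m/s is 7 m short of the
# desired 5 + 0.6 * 20 = 17 m gap:
print(spacing_error(x_pred=100.0, x_self=90.0, v_self=20.0))  # -> -7.0
```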

  10. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation.

    PubMed

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA builds on quantum computing, which offers exponential acceleration for heuristic algorithms: artificial fish are encoded with quantum bits and updated through quantum rotation gates, preying behavior, following behavior, and mutation of the quantum artificial fish in the search for the optimal value. We then apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) in simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, IQAFSA is applied to distributed network problems, and the simulation results for a 33-bus radial distribution network system show that IQAFSA obtains the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA.

  11. An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First of all, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of the global optimal information on the frog leaping and information exchange between frog individuals with genetic mutation at a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that exploits the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible ones. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
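
    The greedy repair idea can be sketched roughly as follows, assuming a value-density ordering; this is one plausible reading of the greedy transform, not the authors' exact procedure:

```python
def greedy_repair(x, values, weights, capacity):
    """Repair a binary knapsack solution x (list of 0/1): drop items in
    increasing value/weight order until feasible, then greedily re-add
    remaining items that still fit."""
    order = sorted(range(len(x)), key=lambda i: values[i] / weights[i])
    load = sum(w for xi, w in zip(x, weights) if xi)
    for i in order:                      # worst value density first
        if load <= capacity:
            break
        if x[i]:
            x[i], load = 0, load - weights[i]
    for i in reversed(order):            # best value density first
        if not x[i] and load + weights[i] <= capacity:
            x[i], load = 1, load + weights[i]
    return x

print(greedy_repair([1, 1, 1], values=[6, 10, 12], weights=[1, 2, 3], capacity=4))
# -> [1, 1, 0]: feasible, though a repair heuristic need not be optimal
```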

  12. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First of all, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of the global optimal information on the frog leaping and information exchange between frog individuals with genetic mutation at a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that exploits the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and optimize feasible ones. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm.

  13. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved data-selective method and a complexity-reduction algorithm for the APA. Experimental results show that the proposed algorithm achieves lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.
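
    A hedged sketch of the data-selective idea: accept an input matrix for the adaptive update only while its condition number stays below a threshold (the threshold value is an assumed parameter):

```python
import numpy as np

def accept_input(X_new, threshold=50.0):
    """Data-selective test: admit the new input matrix to the APA
    update only if its condition number stays below the threshold."""
    return np.linalg.cond(X_new) < threshold

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))               # projection order 4, filter length 8
print(accept_input(X))                        # well-conditioned -> accepted
X[1] = X[0] + 1e-6 * rng.standard_normal(8)   # nearly dependent rows
print(accept_input(X))                        # ill-conditioned -> rejected
```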

  14. Improved genetic algorithm for the protein folding problem by use of a Cartesian combination operator.

    PubMed

    Rabow, A A; Scheraga, H A

    1996-09-01

    We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C alpha Cartesian coordinates of the protein chain. The recombination of the parents' genes is accomplished by (1) a rigid superposition of one parent chain on the other, to make the relation of Cartesian coordinates meaningful, and then (2) forming the children's chains through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints.
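
    A minimal sketch of the two recombination steps, using the standard Kabsch superposition followed by a coordinate blend (the blending weight is an assumed parameter):

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation aligning centered coordinates P onto Q (Kabsch)."""
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # guard against improper rotation
    return Vt.T @ D @ U.T

def cartesian_combine(parent_a, parent_b, alpha=0.5):
    """Blend two C-alpha coordinate sets (N, 3) after rigid superposition."""
    ca = parent_a - parent_a.mean(axis=0)
    cb = parent_b - parent_b.mean(axis=0)
    R = kabsch(ca, cb)                # rotate A onto B
    return alpha * (ca @ R.T) + (1 - alpha) * cb   # child coordinates
```

    The blended child generally violates bond-length and bond-angle constraints, which is what the lattice fitting procedure described above is designed to restore.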

  15. An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming

    2017-02-01

    In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment of each TAM to produce a new solution for SA, allocate the width of each TAM using a greedy algorithm, and calculate the corresponding testing time; the new core assignment is then accepted or rejected according to the simulated annealing acceptance criterion until the optimum solution is attained. We run the test scheduling experiment on the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA) and the genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time of our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time of our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
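
    The simulated annealing acceptance step at the core of such a scheduler is standard; a minimal sketch (with a hypothetical `perturb`/`test_time` pair and an assumed geometric cooling schedule):

```python
import math, random

def accept(delta, temperature):
    """SA acceptance: always take improvements, take worse solutions
    with probability exp(-delta / T)."""
    return delta < 0 or random.random() < math.exp(-delta / temperature)

# Hypothetical scheduling loop fragment: `perturb` reassigns one IP core
# between TAMs and `test_time` evaluates the schedule (both assumed).
# T = 100.0
# while T > 1e-3:
#     cand = perturb(schedule)
#     if accept(test_time(cand) - test_time(schedule), T):
#         schedule = cand
#     T *= 0.95
```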

  16. Research on target tracking based on improved SURF algorithm and Kalman prediction

    NASA Astrophysics Data System (ADS)

    Hu, Dandan; Nan, Jiang

    2016-07-01

    To address problems such as the neglect of color information and high computational complexity, a new target tracking algorithm based on an improved SURF (Speeded-Up Robust Features) algorithm fused with Kalman prediction is studied. First, color invariants are added in the generation process of SURF. Then the current position is predicted using the Kalman filter and a search window is established. Finally, the feature vectors in the search window are extracted by the improved SURF algorithm for matching. The experiments prove that the algorithm can track targets stably when the target undergoes scale change, rotation and partial occlusion, and that the tracking speed is greatly improved compared with that of the original SURF algorithm.

  17. An Improved Artificial Bee Colony Algorithm for Solving Hybrid Flexible Flowshop With Dynamic Operation Skipping.

    PubMed

    Li, Jun-Qing; Pan, Quan-Ke; Duan, Pei-Yong

    2016-06-01

    In this paper, we propose an improved discrete artificial bee colony (DABC) algorithm to solve the hybrid flexible flowshop scheduling problem with dynamic operation skipping features in molten iron systems. First, each solution is represented by a two-vector-based solution representation, and a dynamic encoding mechanism is developed. Second, a flexible decoding strategy is designed. Next, a right-shift strategy considering the problem characteristics is developed, which can clearly improve the solution quality. In addition, several skipping and scheduling neighborhood structures are presented to balance the exploration and exploitation ability. Finally, an enhanced local search is embedded in the proposed algorithm to further improve the exploitation ability. The proposed algorithm is tested on sets of instances generated from realistic production data. Through comprehensive computational comparisons and statistical analysis, the proposed DABC algorithm compares favorably with several existing algorithms in both solution quality and efficiency.

  18. Improving the recommender algorithms with the detected communities in bipartite networks

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Wang, Duo; Xiao, Jinghua

    2017-04-01

    Recommender systems offer a powerful tool for solving the information overload problem and have thus gained wide attention from scholars and engineers. A key challenge is how to make recommendations more accurate and personalized. We notice that community structures widely exist in many real networks, and they can significantly affect the recommendation results. By incorporating the information of detected communities into the recommendation algorithms, an improved recommendation approach for networks with communities is proposed. The approach is examined in both artificial and real networks; the results show that the improvements in accuracy and diversity can reach 20% and 7%, respectively. This reveals that it is beneficial to classify the nodes based on their inherent properties in recommender systems.

  19. A Smartphone Application Significantly Improved Diabetes Self-Care Activities with High User Satisfaction

    PubMed Central

    Kim, Yu Jin; Byun, Jong Kyu; Park, So Young; Hong, Soo Min; Chin, Sang Ouk; Chon, Suk; Oh, Seungjoon; Woo, Jeong-taek; Kim, Sung Woon; Kim, Young Seol

    2015-01-01

    Background We developed for the first time a smartphone application designed for diabetes self-management in Korea and registered a patent for the relevant algorithm. We also investigated user satisfaction with the application and the change in diabetes-related self-care activities after using the application. Methods We conducted a questionnaire survey on volunteers with diabetes who were using the application. Ninety subjects responded to the questionnaire between June 2012 and March 2013. A modified version of the Summary of Diabetes Self-Care Activities (SDSCA) was used in this study. Results The survey results exhibited a mean subject age of 44.0 years, and males accounted for 78.9% of the subjects. Fifty percent of the subjects had diabetes for less than 3 years. The majority of respondents experienced positive changes in their clinical course after using the application (83.1%) and were satisfied with the structure and completeness of the application (86.7%). Additionally, the respondents' answers indicated that the application was easy to use (96.7%) and recommendable to others (97.7%) and that they would continue using the application to manage their diabetes (96.7%). After using the Diabetes Notepad application, diabetes-related self-care activities assessed by the SDSCA displayed statistically significant improvements (P<0.05), except for the number of days of drinking. Conclusion This smartphone-based application can be a useful tool leading to positive changes in diabetes-related self-care activities and increased user satisfaction. PMID:26124991

  20. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition and forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints and shorten the positioning time. After that, it uses an improved Euclidean distance and an improved joint probability to calculate two intermediate results, and then calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Compared with the Euclidean distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm achieves higher positioning accuracy. PMID:26334278

  1. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.

    PubMed

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-08-31

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition and forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints and shorten the positioning time. After that, it uses an improved Euclidean distance and an improved joint probability to calculate two intermediate results, and then calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Compared with the Euclidean distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm achieves higher positioning accuracy.
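
    A rough sketch of the fusion step; the fixed weight and the form of the "improved" distance below are assumptions for illustration, not the paper's exact formulas:

```python
import numpy as np

def weighted_fusion(pos_dist, pos_prob, w=0.5):
    """Fuse the two intermediate (x, y) estimates; a fixed weight w is an
    assumption here -- the paper derives its own weighting."""
    return w * np.asarray(pos_dist) + (1 - w) * np.asarray(pos_prob)

def improved_distance(rss, fingerprint, std):
    """One plausible reading of the 'improved Euclidean distance':
    per-AP differences damped by the RSS standard deviation."""
    rss, fingerprint, std = map(np.asarray, (rss, fingerprint, std))
    return np.sqrt(np.sum(((rss - fingerprint) / (1.0 + std)) ** 2))

print(weighted_fusion([2.0, 3.0], [4.0, 5.0]))               # -> [3. 4.]
print(improved_distance([-60, -70], [-58, -75], [1.0, 4.0]))  # ~1.414
```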

  2. Improved noise-immune phase-unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Cusack, R.; Huntley, J. M.; Goldrein, H. T.

    1995-02-01

    An algorithm for unwrapping noisy phase maps has recently been proposed, based on the identification of discontinuity sources that mark the start or end of a 2π phase discontinuity. Branch cuts between sources act as barriers to unwrapping, resulting in a unique phase map that is independent of the unwrapping route. We investigate four methods for optimizing the placement of the cuts. A modified nearest-neighbor approach is found to be the most successful and can reliably unwrap unfiltered speckle-interferometry phase maps with discontinuity source densities of 0.05 sources pixel^-1.
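
    The discontinuity sources can be located as residues of the wrapped phase; a minimal numpy sketch:

```python
import numpy as np

def wrap(p):
    """Wrap phase differences into [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def residues(phase):
    """Locate discontinuity sources: the wrapped phase summed around
    each elementary 2x2 loop is +-2*pi at a source, 0 elsewhere."""
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge, left -> right
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge, down
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge, right -> left
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge, up
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)
```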

  3. Improving synthetical stellar libraries using the cross-entropy algorithm

    NASA Astrophysics Data System (ADS)

    Martins, L. P.; Vitoriano, R.; Coelho, P.; Caproni, A.

    Stellar libraries are fundamental tools for the study of stellar populations, since they are one of the basic ingredients of stellar population synthesis codes. We have implemented an innovative method to calibrate the atomic line lists used to generate the synthetic spectra of theoretical libraries, much more robust and efficient than the methods used so far. Here we present the adaptation and validation of this method, called the Cross-Entropy algorithm, for the calibration of atomic line lists. We show that the method is extremely efficient for the calibration of atomic line lists when the transition contributes at least 10^{-4} of the continuum flux.

  4. Improved Diagnostic Validity of the ADOS Revised Algorithms: A Replication Study in an Independent Sample

    ERIC Educational Resources Information Center

    Oosterling, Iris; Roos, Sascha; de Bildt, Annelies; Rommelse, Nanda; de Jonge, Maretha; Visser, Janne; Lappenschaar, Martijn; Swinkels, Sophie; van der Gaag, Rutger Jan; Buitelaar, Jan

    2010-01-01

    Recently, Gotham et al. ("2007") proposed revised algorithms for the Autism Diagnostic Observation Schedule (ADOS) with improved diagnostic validity. The aim of the current study was to replicate predictive validity, factor structure, and correlations with age and verbal and nonverbal IQ of the ADOS revised algorithms for Modules 1 and 2…

  5. An algorithm for finding biologically significant features in microarray data based on a priori manifold learning.

    PubMed

    Hira, Zena M; Trigeorgis, George; Gillies, Duncan F

    2014-01-01

    Microarray databases are a large source of genetic data, which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied in order to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise, and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher-dimensional space onto a lower-dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data is projected onto it and clustering and classification can take place. In contrast to earlier fusion-based methods, the prior knowledge from the KEGG databases is not used in, and does not bias, the classification process; it merely acts as an aid to find the best space in which to search the data. In our experiments we have found that using our new manifold method gives better classification results than using either PCA or conventional Isomap.

  6. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    SciTech Connect

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-11-15

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed and have resulted in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm, which we have called ID^2PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID^2PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions rather than a pronounced increase in the average quality of all the segmented regions.

  7. An improved label propagation algorithm based on the similarity matrix using random walk

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-Kun; Song, Chen; Jia, Jia; Lu, Zeng-Lei; Zhang, Qian

    2016-05-01

    Community detection based on the label propagation algorithm (LPA) has attracted widespread attention because of its high efficiency, but the random label spreading in the algorithm makes the accuracy of community detection hard to guarantee. In response to this problem, an improved LPA based on random walk (RWLPA) is proposed in this paper. First, a matrix measuring the similarity among the nodes in the network is obtained through calculation. Second, during label propagation, when a node has more than one neighbor label with the highest frequency, the label of the neighbor with the highest similarity, rather than that of a random neighbor, is chosen for the update, which prevents labels from propagating randomly among communities. Finally, we test the LPA and the improved LPA on benchmark networks and real-world networks. The results show that the quality of the communities discovered by the improved algorithm is higher than that of the traditional algorithm.
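
    The tie-breaking rule at the heart of the improvement admits a short sketch (the data structures are assumed for illustration):

```python
from collections import Counter

def update_label(node, labels, neighbors, similarity):
    """RWLPA-style update (sketch): take the most frequent neighbor
    label; on a tie, prefer the label of the most similar neighbor.
    `similarity` is the precomputed node-similarity matrix."""
    counts = Counter(labels[v] for v in neighbors[node])
    top = max(counts.values())
    tied = {lab for lab, c in counts.items() if c == top}
    if len(tied) == 1:
        return tied.pop()
    best = max((v for v in neighbors[node] if labels[v] in tied),
               key=lambda v: similarity[node][v])
    return labels[best]
```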

  8. An improved method for Daugman's iris localization algorithm.

    PubMed

    Ren, Xinying; Peng, Zhiyong; Zeng, Qingning; Peng, Chaonan; Zhang, Jianhua; Wu, Shuicai; Zeng, Yanjun

    2008-01-01

    Computer-based automatic recognition of persons for security reasons is highly desirable. Iris patterns provide an opportunity for separation of individuals to an extent that would avoid false positives and negatives. The current standard for this science is Daugman's iris localization algorithm. Part of the time required for analysis and comparison with other images relates to eyelid and eyelash positioning and length. We sought to remove the upper and lower eyelids and eyelashes to determine if separation of individuals could still be attained. Our experiments suggest separation can be achieved as effectively and more quickly by removing distracting and variable features while retaining enough stable factors in the iris to enable accurate identification.

  9. Algorithms to Improve the Prediction of Postprandial Insulinaemia in Response to Common Foods

    PubMed Central

    Bell, Kirstine J.; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C.

    2016-01-01

    Dietary patterns that induce excessive insulin secretion may contribute to worsening insulin resistance and beta-cell dysfunction. Our aim was to generate mathematical algorithms to improve the prediction of postprandial glycaemia and insulinaemia for foods of known nutrient composition, glycemic index (GI) and glycemic load (GL). We used an expanded database of food insulin index (FII) values generated by testing 1000 kJ portions of 147 common foods relative to a reference food in lean, young, healthy volunteers. Simple and multiple linear regression analyses were applied to validate previously generated equations for predicting insulinaemia, and develop improved predictive models. Large differences in insulinaemic responses within and between food groups were evident. GL, GI and available carbohydrate content were the strongest predictors of the FII, explaining 55%, 51% and 47% of variation respectively. Fat, protein and sugar were significant but relatively weak predictors, accounting for only 31%, 7% and 13% of the variation respectively. Nutritional composition alone explained only 50% of variability. The best algorithm included a measure of glycemic response, sugar and protein content and explained 78% of variation. Knowledge of the GI or glycaemic response to 1000 kJ portions together with nutrient composition therefore provides a good approximation for ranking of foods according to their “insulin demand”. PMID:27070641

  10. A Genetic Algorithm with the Improved 2-opt Method for Quadratic Assignment Problem

    NASA Astrophysics Data System (ADS)

    Matayoshi, Mitsukuni; Nakamura, Morikazu; Miyagi, Hayao

    We propose a new 2-opt-based method as a local search approach used with Genetic Algorithms (GAs) in a Memetic Algorithm. Taking a hint from the fast 2-opt method, we devised a new 2-opt method whose main difference is that it exchanges genes using histories of their contributions to fitness value improvement. The contribution level is represented by the value `Priority'. In computer experiments, Quadratic Assignment Problem (QAP) instances are solved by a GA with the 2-opt method (First Admissible Move Strategy and Best Admissible Move Strategy), the fast 2-opt, and our proposed method for comparative evaluation. The results showed that our improved method obtained better solutions at earlier generations of the GA and required less computation time than the others for some upper-bound values of appropriate `Priority' settings. In particular, within the average elapsed time of the fast 2-opt method's 1000th generation, our method finds the exact solution more often than the others. In further experiments, we observe that the searching capability depends on the number of levels of `Priority'. The ratio of computation time between two different Priority level sets reaches 1.59 when solving the problem instance "char25a". This characteristic is shown to be statistically significant in ten of eleven instances.

  11. Improved Exact Enumerative Algorithms for the Planted (l, d)-Motif Search Problem.

    PubMed

    Tanaka, Shunji

    2014-01-01

    In this paper efficient exact algorithms are proposed for the planted (l, d)-motif search problem. This problem is to find all motifs of length l that are planted in each input string with at most d mismatches. The "quorum" version of this problem is also treated, in which motifs are planted not in all input strings but in at least q of them. The proposed algorithms are based on the previous algorithms called qPMSPruneI and qPMS7, which traverse a search tree starting from an l-length substring of an input string. To improve on these previous algorithms, several techniques are introduced that reduce the computation time of the traversal. Computational experiments show that the proposed algorithms outperform the previous ones.

  12. An improved filter-u least mean square vibration control algorithm for aircraft framework.

    PubMed

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a hot research topic, and the filter-u least mean square (FULMS) algorithm is one of the key methods. However, for practical reasons and technical limitations, extraction of the vibration reference signal has always been a difficult problem for the FULMS algorithm. To solve this problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed from the controller structure and the data generated during the algorithm's operation, using a vibration response residual signal extracted directly from the vibrating structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical and achieves good vibration suppression performance.
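
    The weight update of such an adaptive controller follows the familiar LMS pattern; the sketch below is a simplified FIR, filtered-reference variant, not the paper's full filter-u formulation:

```python
import numpy as np

def fulms_like_update(w, u_filt, error, mu=1e-3):
    """One LMS-style weight update for an adaptive vibration controller:
    w      -- controller weights,
    u_filt -- reference-signal regressor filtered by the secondary-path
              model (here assumed already constructed from the residual),
    error  -- measured vibration residual; mu is an assumed step size."""
    return w + mu * error * u_filt

w = np.zeros(8)
u_filt = 0.1 * np.ones(8)
w = fulms_like_update(w, u_filt, error=0.5)   # small step toward cancellation
```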

  13. FRESCO+: an improved O2 A-band cloud retrieval algorithm for tropospheric trace gas retrievals

    NASA Astrophysics Data System (ADS)

    Wang, P.; Stammes, P.; van der A, R.; Pinardi, G.; van Roozendael, M.

    2008-11-01

    The FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A-band) algorithm has been used to retrieve cloud information from measurements of the O2 A-band around 760 nm by GOME, SCIAMACHY and GOME-2. The cloud parameters retrieved by FRESCO are the effective cloud fraction and cloud pressure, which are used for cloud correction in the retrieval of trace gases like O3 and NO2. To improve the cloud pressure retrieval for partly cloudy scenes, single Rayleigh scattering has been included in an improved version of the algorithm, called FRESCO+. We compared FRESCO+ and FRESCO effective cloud fractions and cloud pressures using simulated spectra and one month of GOME measured spectra. As expected, FRESCO+ gives more reliable cloud pressures over partly cloudy pixels. Simulations and comparisons with ground-based radar/lidar measurements of clouds show that the FRESCO+ cloud pressure is about the optical midlevel of the cloud. Globally averaged, the FRESCO+ cloud pressure is about 50 hPa higher than the FRESCO cloud pressure, while the FRESCO+ effective cloud fraction is about 0.01 larger. The effect of FRESCO+ cloud parameters on O3 and NO2 vertical column density (VCD) retrievals is studied using SCIAMACHY data and ground-based DOAS measurements. We find that the FRESCO+ algorithm has a significant effect on tropospheric NO2 retrievals but a minor effect on total O3 retrievals. The retrieved SCIAMACHY tropospheric NO2 VCDs using FRESCO+ cloud parameters (v1.1) are lower than the tropospheric NO2 VCDs which used FRESCO cloud parameters (v1.04), in particular over heavily polluted areas with low clouds. The difference between SCIAMACHY tropospheric NO2 VCDs v1.1 and ground-based MAXDOAS measurements performed in Cabauw, The Netherlands, during the DANDELIONS campaign is about -2.12×10^14 molec cm^-2.

  14. Research on WNN modeling for gold price forecasting based on improved artificial bee colony algorithm.

    PubMed

    Li, Bai

    2014-01-01

    Gold price forecasting has been a hot issue in economics recently. In this work, a wavelet neural network (WNN) combined with a novel artificial bee colony (ABC) algorithm is proposed for the gold price forecasting issue. In this improved algorithm, the conventional roulette selection strategy is discarded. In addition, the convergence status of a previous cycle of iteration is fully utilized as feedback to modulate the search intensity in the subsequent cycle. Experimental results confirm that the new algorithm converges faster than the conventional ABC when tested on some classical benchmark functions and effectively improves the modeling capacity of the WNN for gold price forecasting.

  15. Ensemble of classifiers to improve accuracy of the CLIP4 machine-learning algorithm

    NASA Astrophysics Data System (ADS)

    Kurgan, Lukasz; Cios, Krzysztof J.

    2002-03-01

    Machine learning, one of the data mining and knowledge discovery tools, addresses the automated extraction of knowledge from data, expressed in the form of production rules. The paper describes a method for improving the accuracy of rules generated by an inductive machine learning algorithm by generating an ensemble of classifiers. It generates multiple classifiers using the CLIP4 algorithm and combines them using a voting scheme. The generation of a set of different classifiers is performed by injecting controlled randomness into the learning algorithm, but without modifying the training data set. Our method is based on the characteristic properties of the CLIP4 algorithm. The case study of the SPECT heart image analysis system is used as an example where improving accuracy is very important. Benchmarking results on other well-known machine learning datasets, and a comparison with an algorithm that uses the boosting technique to improve its accuracy, are also presented. The proposed method always improves the accuracy of the results when compared with the accuracy of a single classifier generated by the CLIP4 algorithm, as opposed to using boosting. The obtained results are comparable with other state-of-the-art machine learning algorithms.

  16. Improved algorithms and coupled neutron-photon transport for auto-importance sampling method

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Li, Jun-Li; Wu, Zhen; Qiu, Rui; Li, Chun-Yan; Liang, Man-Chun; Zhang, Hui; Gang, Zhi; Xu, Hong

    2017-01-01

    The Auto-Importance Sampling (AIS) method is a Monte Carlo variance reduction technique proposed for deep penetration problems, which can significantly improve computational efficiency without pre-calculation of the importance distribution. However, the AIS method has only been validated on several simple examples and cannot be used for coupled neutron-photon transport. This paper presents improved algorithms for the AIS method, including particle transport, fictitious particle creation and adjustment, fictitious surface geometry, random number allocation and calculation of the estimated relative error. These improvements allow the AIS method to be applied to complicated deep penetration problems with complex geometry and multiple materials. A completely coupled Neutron-Photon Auto-Importance Sampling (CNP-AIS) method is proposed to solve deep penetration problems of coupled neutron-photon transport using the improved algorithms. The NUREG/CR-6115 PWR benchmark was calculated using CNP-AIS, geometry splitting with Russian roulette, and analog Monte Carlo, respectively. The calculation results of CNP-AIS are in good agreement with those of geometry splitting with Russian roulette and the benchmark solutions. The computational efficiency of CNP-AIS for both neutrons and photons is much better than that of geometry splitting with Russian roulette in most cases, and increased by several orders of magnitude compared with that of analog Monte Carlo. Supported by the National Science and Technology Major Project of China (2013ZX06002001-007, 2011ZX06004-007) and the National Natural Science Foundation of China (11275110, 11375103)

  17. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm^3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance in a few tenths of seconds per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
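
    A one-dimensional toy version of a first-order recursive density filter conveys the idea; the smoothing constant is an assumed parameter, and the paper's actual filter is multivariate and position/direction sensitive:

```python
import numpy as np

def directional_density_filter(rho, a=0.7):
    """1-D illustration: the effective density lags the true density
    along the beam direction, mimicking the gradual electron-transport
    response near interfaces. `a` is an assumed smoothing constant."""
    eff = np.empty_like(rho, dtype=float)
    eff[0] = rho[0]
    for i in range(1, len(rho)):
        eff[i] = a * rho[i] + (1 - a) * eff[i - 1]
    return eff

# Density drop at a tissue/lung-like interface is smoothed into a lag:
print(directional_density_filter(np.array([1.0, 1.0, 0.25, 0.25, 1.0])))
```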

  18. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  19. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  20. Evaluation of breast cancer susceptibility using improved genetic algorithms to generate genotype SNP barcodes.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Da; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2013-01-01

    Genetic association is a challenging task for the identification and characterization of genes that increase susceptibility to common complex multifactorial diseases. To fully execute genetic studies of complex diseases, modern geneticists face the challenge of detecting interactions between loci. A genetic algorithm (GA) is developed to detect the association of genotype frequencies between cancer cases and noncancer cases based on statistical analysis. An improved genetic algorithm (IGA) is proposed to improve the reliability of the GA method for high-dimensional SNP-SNP interactions. The strategy feeds the top five results back into the random population process, where they guide the GA toward a significant search course. The IGA increases the likelihood of quickly detecting the maximum ratio difference between cancer cases and noncancer cases. The joint effect of 23 SNP combinations of six steroid hormone metabolism and signaling-related genes involved in breast carcinogenesis pathways was systematically evaluated, with the IGA successfully detecting significant ratio differences between breast cancer cases and noncancer cases. The possible breast cancer risks were subsequently analyzed by odds-ratio (OR) and risk-ratio analysis. The estimated OR of the best SNP barcode is significantly higher than 1 (between 1.15 and 7.01) for specific combinations of two to 13 SNPs. The analysis results support that the IGA provides higher ratio difference values than the GA between breast cancer cases and noncancer cases over 3-SNP to 13-SNP interactions. A more specific SNP-SNP interaction profile for the risk of breast cancer is also provided.

  1. Using the Significant Learning Taxonomy and Active Learning to Improve Accounting Education

    ERIC Educational Resources Information Center

    Killian, Larita J.; Brandon, Christopher D.

    2009-01-01

    Like other members of the academy, accounting professors are challenged to improve student learning. We must help students move beyond the "bean counter" role and develop higher-level skills such as analysis, synthesis, and problem-solving. The Significant Learning Taxonomy was used as a template to improve learning in an introductory accounting…

  2. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events has doubled and all of them become exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  3. Detection algorithm of infrared small target based on improved SUSAN operator

    NASA Astrophysics Data System (ADS)

    Liu, Xingmiao; Wang, Shicheng; Zhao, Jing

    2010-10-01

    Methods for detecting small moving targets in infrared image sequences that contain moving nuisance objects and background noise are analyzed in this paper, and a novel infrared small target detection algorithm based on an improved SUSAN operator is put forward. The algorithm selects two templates for detection: one larger than the small target and one equal to it in size. First, the algorithm uses the big template to compute the USAN of each pixel in the image, detecting small targets, image edges and isolated noise pixels. Then it uses the other template to compute the USAN of the pixels detected in the first step; the principles of the SUSAN algorithm are modified according to the characteristics of small targets so that only small targets are detected and the algorithm is not sensitive to image edge pixels and isolated noise pixels. The interference from image edges and isolated noise points is thus removed and the candidate target points can be identified. Finally, the target is detected by utilizing the continuity and consistency of target movement. The experimental results indicate that the improved SUSAN detection algorithm can quickly and effectively detect infrared small targets.
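
    The USAN value that both templates compute can be sketched as follows (a square template is used for brevity, whereas SUSAN proper uses a circular mask, and border handling is left to the caller):

```python
import numpy as np

def usan_count(img, r, c, radius=3, t=25):
    """USAN value at pixel (r, c): the number of template pixels whose
    intensity lies within t of the nucleus. Keep (r, c) at least
    `radius` pixels away from the image border."""
    patch = img[r - radius:r + radius + 1, c - radius:c + radius + 1]
    return int(np.sum(np.abs(patch.astype(int) - int(img[r, c])) <= t))

# A small target yields a low USAN with the large template but a high
# one with the target-sized template -- the two-template test above.
```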

  4. An improved fuzzy c-means clustering algorithm based on shadowed sets and PSO.

    PubMed

    Zhang, Jian; Shen, Ling

    2014-01-01

    To organize a wide variety of data sets automatically and acquire accurate classification, this paper presents a modified fuzzy c-means algorithm (SP-FCM) based on particle swarm optimization (PSO) and shadowed sets to perform feature clustering. SP-FCM introduces the global search property of PSO to deal with the premature convergence of conventional fuzzy clustering, utilizes the vagueness balance property of shadowed sets to handle overlap among clusters, and models uncertainty in class boundaries. The new method uses the Xie-Beni index as a cluster validity measure and automatically finds the optimal cluster number within a specific range, with cluster partitions that provide compact and well-separated clusters. Experiments show that the proposed approach significantly improves the clustering effect.
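
    The standard FCM iteration that SP-FCM builds on (PSO and shadowed sets are layered on top) can be sketched as:

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One standard fuzzy c-means iteration. X: (n, d) data,
    centers: (c, d) cluster prototypes, m: fuzzifier."""
    # distances from every sample to every center, shape (n, c)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # membership update: u[k, i] = 1 / sum_j (d[k, i] / d[k, j])**(2/(m-1))
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    # center update: means weighted by fuzzified memberships
    um = u ** m
    centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers

# X = np.random.default_rng(0).random((100, 2))
# u, centers = fcm_step(X, centers=X[:3].copy())
```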

  5. Application of Genetic Algorithm and Particle Swarm Optimization techniques for improved image steganography systems

    NASA Astrophysics Data System (ADS)

    Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela

    2016-01-01

    Image steganography is one of the ever-growing computational approaches and has found application in many fields. Frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) techniques has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.

  6. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-08-05

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm is proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm is proposed to suit such situations. The experimental results demonstrate that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms.
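
    The canonical PSO update that the improved variant modifies looks as follows; the inertia and acceleration coefficients are typical assumed values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity/position update for a swarm of candidate
    rotation angles: x, v, pbest are (n_particles,) arrays, gbest a scalar."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```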

  7. AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations - Part 1: Algorithm description

    NASA Astrophysics Data System (ADS)

    Vanhellemont, Filip; Mateshvili, Nina; Blanot, Laurent; Étienne Robert, Charles; Bingen, Christine; Sofieva, Viktoria; Dalaudier, Francis; Tétard, Cédric; Fussen, Didier; Dekemper, Emmanuel; Kyrölä, Erkki; Laine, Marko; Tamminen, Johanna; Zehner, Claus

    2016-09-01

    The GOMOS instrument on Envisat has successfully demonstrated that a UV-Vis-NIR spaceborne stellar occultation instrument is capable of delivering quality data on the gaseous and particulate composition of Earth's atmosphere. Still, some problems related to data inversion remained to be examined. In the past, it was found that the aerosol extinction profile retrievals in the upper troposphere and stratosphere are of good quality at a reference wavelength of 500 nm but suffer from anomalous, retrieval-related perturbations at other wavelengths. Identification of algorithmic problems and subsequent improvement was therefore necessary. This work has been carried out; the resulting AerGOM Level 2 retrieval algorithm together with the first data version AerGOMv1.0 forms the subject of this paper. The AerGOM algorithm differs from the standard GOMOS IPF processor in a number of important ways: more accurate physical laws have been implemented, all retrieval-related covariances are taken into account, and the aerosol extinction spectral model is strongly improved. Retrieval examples demonstrate that the previously observed profile perturbations have disappeared, and the obtained extinction spectra look in general more consistent. We present a detailed validation study in a companion paper; here, to give a first idea of the data quality, a worst-case comparison at 386 nm shows SAGE II-AerGOM correlation coefficients that are up to 1 order of magnitude larger than the ones obtained with the GOMOS IPFv6.01 data set.

  8. IMPROVED ALGORITHMS FOR RADAR-BASED RECONSTRUCTION OF ASTEROID SHAPES

    SciTech Connect

    Greenberg, Adam H.; Margot, Jean-Luc

    2015-10-15

    We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.

  9. Astronomical image denoising by means of improved adaptive backtracking-based matching pursuit algorithm.

    PubMed

    Liu, Qianshun; Bai, Jian; Yu, Feihong

    2014-11-10

    In an effort to improve compressive sensing and sparse signal reconstruction by way of the backtracking-based adaptive orthogonal matching pursuit (BAOMP), a new sparse coding algorithm called improved adaptive backtracking-based OMP (IABOMP) is proposed in this study. Many aspects have been improved compared with the original BAOMP method, including replacing the fixed threshold with an adaptive one and adding residual feedback and support set verification, among others. Because of these improvements, the proposed algorithm chooses the atoms more precisely. By adding an adaptive step-size mechanism, it requires far fewer iterations and thus executes more efficiently. Additionally, a simple but effective contrast enhancement method is adopted to further improve the denoising results and visual effect. By combining the IABOMP algorithm with the state-of-the-art dictionary learning algorithm K-SVD, the proposed algorithm achieves better denoising effects for astronomical images. Numerous experimental results show that the proposed algorithm performs effectively on Gaussian and Poisson noise removal.
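
    For orientation, a plain orthogonal matching pursuit loop is sketched below; IABOMP adds the adaptive threshold, backtracking and residual feedback described above:

```python
import numpy as np

def omp(D, y, k, tol=1e-6):
    """Plain orthogonal matching pursuit.
    D: (m, n) dictionary with unit-norm columns, y: (m,) signal,
    k: maximum number of atoms to select."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # re-fit all selected atoms jointly (the 'orthogonal' step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```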

  10. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of neighborhood attraction, an additional entropy term, and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement, handling large amounts of noise, and producing better results on data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method.
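
    The paper's kernel, penalty, and neighborhood-attraction extensions are not reproduced here; as a baseline, the standard fuzzy c-means iteration they build on can be sketched as follows (m is the usual fuzzifier; all values are textbook defaults, not the paper's settings).

        import numpy as np

        def fcm(X, c, m=2.0, max_iter=100, eps=1e-5, seed=0):
            """Baseline fuzzy c-means on rows of X: alternate center and
            membership updates until the memberships stabilize."""
            rng = np.random.default_rng(seed)
            U = rng.random((X.shape[0], c))
            U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships
            for _ in range(max_iter):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                   axis=2) + 1e-12
                p = 2.0 / (m - 1.0)
                U_new = d ** (-p) / np.sum(d ** (-p), axis=1, keepdims=True)
                done = np.abs(U_new - U).max() < eps
                U = U_new
                if done:
                    break
            return centers, U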

  11. An improved image compression algorithm using binary space partition scheme and geometric wavelets.

    PubMed

    Chopra, Garima; Pal, A K

    2011-01-01

    Geometric wavelets are a recent development in the field of multivariate nonlinear piecewise polynomial approximation. The present study improves the geometric wavelet (GW) image coding method by using the slope-intercept representation of the straight line in the binary space partition scheme. The performance of the proposed algorithm is compared with wavelet transform-based compression methods such as the embedded zerotree wavelet (EZW), set partitioning in hierarchical trees (SPIHT), and embedded block coding with optimized truncation (EBCOT), as well as other recently developed "sparse geometric representation" based compression algorithms. The proposed image compression algorithm outperforms the EZW, the Bandelets, and the GW algorithm. The presented algorithm reports a gain of 0.22 dB over the GW method at a compression ratio of 64 for the Cameraman test image.

  12. Visual Tracking Based on an Improved Online Multiple Instance Learning Algorithm.

    PubMed

    Wang, Li Jia; Zhang, Hua

    2016-01-01

    An improved online multiple instance learning (IMIL) visual tracking algorithm is proposed. In the IMIL algorithm, each instance's contribution to the bag probability is weighted according to its own probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and the bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers. In the feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes.

  13. Further development of image processing algorithms to improve detectability of defects in Sonic IR NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2017-02-01

    Sonic Infrared imaging (SIR) technology is a relatively new NDE technique that has gained significant acceptance in the NDE community. SIR NDE is a fast, wide-range NDE method. The technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is strongly affected by noise levels as well as mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures: they can either mimic real defects in the specimen or, alternatively, hide defects that they overlap. At last year's QNDE, we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect extraction algorithms targeting specifically mode patterns in SIR images.

  14. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the assumption of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding-box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding-box method. After segmentation, the 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. The result of the proposed method is compared with the actual position of the simulated 3D tumor, qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut algorithm with manual intervention, while providing fully automatic segmentation.
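
    For readers unfamiliar with GrowCut, the underlying cellular automaton is compact; a minimal single-channel sketch is given below. The bounding-box and symmetry pre-processing described above are not included, and the wrap-around border handling via np.roll is a simplification.

        import numpy as np

        def growcut(image, labels, n_iter=50):
            """Minimal GrowCut automaton on a 2-D image. labels: 0 means
            unlabeled, 1 a foreground seed, 2 a background seed."""
            img = image.astype(float)
            lab = labels.copy()
            theta = (labels > 0).astype(float)         # cell strengths
            scale = img.max() - img.min() + 1e-12
            for _ in range(n_iter):
                for shift in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    nb_lab = np.roll(lab, shift, axis=(0, 1))
                    nb_theta = np.roll(theta, shift, axis=(0, 1))
                    nb_img = np.roll(img, shift, axis=(0, 1))
                    g = 1.0 - np.abs(img - nb_img) / scale   # attack force
                    attack = g * nb_theta
                    win = attack > theta           # neighbor conquers cell
                    lab[win] = nb_lab[win]
                    theta[win] = attack[win]
            return lab

    Because seed cells start with full strength and the attack force never exceeds one, seeds are never overwritten; labels simply grow outward from them until the competing fronts meet.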

  15. An improved recommendation algorithm via weakening indirect linkage effect

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Qiu, Tian; Shen, Xiao-Quan

    2015-07-01

    We propose an indirect-link-weakened mass diffusion method (IMD), by considering the indirect linkage and the source-object heterogeneity effect in the mass diffusion (MD) recommendation method. Experimental results on the MovieLens, Netflix, and RYM datasets show that the IMD method greatly improves both the recommendation accuracy and diversity compared with a heterogeneity-weakened MD method (HMD), which considers only the source-object heterogeneity. Moreover, the recommendation accuracy for cold objects is also improved more by the IMD method than by the HMD method. This suggests that eliminating the redundancy induced by indirect linkages can have a prominent effect on the recommendation efficiency of the MD method. Project supported by the National Natural Science Foundation of China (Grant No. 11175079) and the Young Scientist Training Project of Jiangxi Province, China (Grant No. 20133BCB23017).
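
    The indirect-link weakening itself is specific to the paper, but the two-step mass diffusion (ProbS) baseline it modifies is standard and can be sketched as follows; masking already-collected objects out of the score matrix is the usual convention, not a detail taken from the paper.

        import numpy as np

        def mass_diffusion_scores(A):
            """Two-step mass diffusion on a user-object bipartite network.
            A is the (n_users, n_objects) 0/1 adoption matrix; returns a
            score matrix of recommended objects per user."""
            k_user = A.sum(axis=1, keepdims=True)   # user degrees
            k_obj = A.sum(axis=0, keepdims=True)    # object degrees
            k_user[k_user == 0] = 1
            k_obj[k_obj == 0] = 1
            # resource flows object -> its users -> their other objects
            W = (A / k_user).T @ (A / k_obj)        # object-to-object transfer
            scores = A @ W.T
            scores[A > 0] = 0                       # skip items already owned
            return scores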

  16. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    Using a genetic algorithm to solve the winner determination problem (WDP) with many bids and items, run under different distributions, is difficult: the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which affect the efficiency and quality of the algorithm. This paper presents an improved MKGA comprising three operators: preprocessing, bid insertion, and exchange recombination, together with a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA outperforms the SGA in population size and computation. Problems that the traditional branch-and-bound algorithm can hardly solve, the improved MKGA can solve with better results.

  17. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  18. An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments

    PubMed Central

    Yang, Yifei; Tan, Minjia; Dai, Yuewei

    2017-01-01

    In practice, the fault monitoring signals of ship power equipment usually provide few samples, and the data features are nonlinear. This paper adopts the least squares support vector machine (LSSVM) method to deal with the problem of fault pattern identification in the case of small sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the search step length, which effectively solves the problems of slow search speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment. PMID:28182678

  19. A strictly improving Phase 1 algorithm using least-squares subproblems

    SciTech Connect

    Leichner, S.A.; Dantzig, G.B.; Davis, J.W.

    1992-04-01

    Although the simplex method's performance in solving linear programming problems is usually quite good, it does not guarantee strict improvement at each iteration on degenerate problems. Instead of trying to recognize and avoid degenerate steps in the simplex method, we have developed a new Phase I algorithm that is completely impervious to degeneracy, with strict improvement attained at each iteration. It is also noted that the new Phase I algorithm is closely related to a number of existing algorithms. When tested on the 30 smallest NETLIB linear programming test problems, the new Phase I algorithm was almost 3.5 times faster than the simplex method; on some problems, it was over 10 times faster.

  1. Infrared point target detection based on exponentially weighted RLS algorithm and dual solution improvement

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Fan, Xiang; Ma, Dong-hui; Cheng, Zheng-dong

    2009-07-01

    The desire to maximize target detection range focuses attention on algorithms for detecting and tracking point targets. However, point target detection and tracking is challenging for two reasons: targets occupy only a few pixels or less amid complex noise and background clutter, and real-time applications impose a heavy computational load. Temporal signal processing algorithms offer clutter rejection superior to that of standard spatial processing approaches. In this paper, the traditional single-frame algorithm based on background prediction is improved to a consecutive multi-frame exponentially weighted recursive least squares (EWRLS) algorithm. Further, the dual solution of EWRLS (DEWLS) is derived to reduce the computational burden. The DEWLS algorithm uses only the inner products of point pairs in the training set; the prediction result is given directly without computing any intermediate variables. Experimental results show that the RLS filter can greatly increase the signal-to-noise ratio (SNR) of images; it has the best detection performance among the algorithms considered; and moving targets can be detected within 2 or 3 frames with a lower false alarm rate. Moreover, with the dual-solution improvement, computational efficiency is enhanced by over 41% relative to the EWRLS algorithm.
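
    As background, the primal exponentially weighted RLS recursion that DEWLS dualizes is standard; a minimal sketch follows, where lam is the forgetting factor and delta initializes the inverse correlation matrix (both values are illustrative, not the paper's).

        import numpy as np

        def ewrls(X, d, lam=0.98, delta=100.0):
            """Exponentially weighted RLS: predict the desired signal d[k]
            from regressor rows X[k]; lam discounts old samples."""
            n = X.shape[1]
            P = delta * np.eye(n)        # inverse correlation estimate
            w = np.zeros(n)
            preds = np.zeros(len(d))
            for k in range(len(d)):
                x = X[k]
                preds[k] = w @ x
                e = d[k] - preds[k]                  # a-priori error
                g = P @ x / (lam + x @ P @ x)        # gain vector
                w = w + g * e
                P = (P - np.outer(g, x @ P)) / lam
            return w, preds

    In the detection setting, d would be the current pixel intensity and X the recent history of that pixel and its neighbors; the prediction residual d - preds is the clutter-suppressed signal in which point targets stand out.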

  2. Improved Algorithm for Analysis of DNA Sequences Using Multiresolution Transformation

    PubMed Central

    Inbamalar, T. M.; Sivakumar, R.

    2015-01-01

    Bioinformatics and genomic signal processing use computational techniques to solve various biological problems. They aim to study the information associated with genetic materials such as deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and proteins. Fast and precise identification of the protein coding regions in a DNA sequence is one of the most important tasks in analysis. Existing digital signal processing (DSP) methods provide less accurate, computationally complex solutions with greater background noise. Hence, improvements in accuracy and computational complexity and a reduction in background noise are essential for identifying the protein coding regions in DNA sequences. In this paper, a new DSP-based method is introduced to detect the protein coding regions in DNA sequences. Here, the DNA sequences are converted into numeric sequences using the electron-ion interaction potential (EIIP) representation. Then a discrete wavelet transform is applied, the absolute value of the energy is found, and a proper threshold is applied. The test is conducted using the databases available on the National Centre for Biotechnology Information (NCBI) site. A comparative analysis is performed and confirms the efficiency of the proposed system. PMID:26000337
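
    A minimal sketch of the front end of such a pipeline is given below, using the standard EIIP values and a single-level Haar transform in place of the paper's full multiresolution analysis; the threshold handling is an assumption.

        import numpy as np

        # Standard electron-ion interaction potential values per base
        EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

        def eiip_haar_energy(seq, threshold=0.0):
            """Map a DNA string to its EIIP numeric sequence, apply one
            level of the Haar wavelet transform, and threshold the
            resulting per-pair energy."""
            x = np.array([EIIP[b] for b in seq.upper() if b in EIIP])
            if len(x) % 2:
                x = x[:-1]                     # Haar pairs need even length
            approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            energy = approx ** 2 + detail ** 2
            return np.where(energy >= threshold, energy, 0.0)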

  3. EMMA: an efficient massive mapping algorithm using improved approximate mapping filtering.

    PubMed

    Zhang, Xin; Cao, Zhi-Wei; Lin, Zhi-Xin; Wang, Qing-Kang; Li, Yi-Xue

    2006-12-01

    The efficient massive mapping algorithm (EMMA), an algorithm for efficiently mapping massive numbers of cDNAs onto genomic sequences, has recently been developed. The mapping process has been improved using better approximate mapping filtering based on an enhanced suffix array coupled with a pruned fast hash table, block alignment extension algorithms, and k-longest-path algorithms. When compared with the classical BLAT software in this field, EMMA runs from two to forty-one times faster at similar prediction precision.

  4. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem

    PubMed Central

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial feasible solution very fast, and the GA is used to improve the initial solution. Thereafter, experimental software was developed and a large number of experimental computations from Solomon's benchmark were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA. PMID:26167171

  5. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem.

    PubMed

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial feasible solution very fast, and the GA is used to improve the initial solution. Thereafter, experimental software was developed and a large number of experimental computations from Solomon's benchmark were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA.

  6. Improvement of wavelet threshold filtered back-projection image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-11-01

    Image reconstruction techniques have been applied in many fields, including medical imaging such as X-ray computed tomography (X-CT), positron emission tomography (PET), and nuclear magnetic resonance imaging (MRI), but the reconstruction results are still not satisfactory because the original projection data are inevitably polluted by noise during image reconstruction. Although traditional filters, e.g., the Shepp-Logan (SL) and Ram-Lak (RL) filters, can remove some noise, the Gibbs oscillation phenomenon is generated and the artifacts caused by back-projection are not greatly reduced. Wavelet threshold denoising can overcome the interference of noise in image reconstruction. Since the traditional soft and hard threshold functions have some inherent defects, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that the improved algorithm greatly reduces the discontinuity and large distortion of the traditional threshold functions as well as the Gibbs oscillation. Finally, the usefulness of the improved algorithm is verified by comparing two evaluation criteria, mean square error (MSE) and peak signal-to-noise ratio (PSNR), among the four algorithms, and the optimal dual threshold values of the improved wavelet threshold function are obtained.
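
    The abstract does not give the improved threshold function explicitly; for orientation, the classical hard and soft functions and one common continuous compromise between them (illustrative only, not necessarily the paper's function) look like this.

        import numpy as np

        def hard_threshold(w, t):
            # keep coefficients above t, zero the rest (discontinuous at t)
            return np.where(np.abs(w) >= t, w, 0.0)

        def soft_threshold(w, t):
            # shrink all coefficients toward zero by t (biased but smooth)
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def compromise_threshold(w, t, alpha=0.5):
            """Continuous compromise between hard and soft thresholding:
            alpha = 1 recovers soft, alpha = 0 approaches hard."""
            return np.where(np.abs(w) >= t,
                            np.sign(w) * (np.abs(w) - alpha * t),
                            0.0)

    Functions of this kind aim to avoid both the discontinuity of hard thresholding and the constant bias of soft thresholding, which is the defect the abstract refers to.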

  7. Improved Fault Classification in Series Compensated Transmission Line: Comparative Evaluation of Chebyshev Neural Network Training Algorithms.

    PubMed

    Vyas, Bhargav Y; Das, Biswarup; Maheshwari, Rudra Prakash

    2016-08-01

    This paper presents the Chebyshev neural network (ChNN) as an improved artificial intelligence technique for power system protection studies and examines the performances of two ChNN learning algorithms for fault classification of series compensated transmission line. The training algorithms are least-square Levenberg-Marquardt (LSLM) and recursive least-square algorithm with forgetting factor (RLSFF). The performances of these algorithms are assessed based on their generalization capability in relating the fault current parameters with an event of fault in the transmission line. The proposed algorithm is fast in response as it utilizes postfault samples of three phase currents measured at the relaying end corresponding to half-cycle duration only. After being trained with only a small part of the generated fault data, the algorithms have been tested over a large number of fault cases with wide variation of system and fault parameters. Based on the studies carried out in this paper, it has been found that although the RLSFF algorithm is faster for training the ChNN in the fault classification application for series compensated transmission lines, the LSLM algorithm has the best accuracy in testing. The results prove that the proposed ChNN-based method is accurate, fast, easy to design, and immune to the level of compensations. Thus, it is suitable for digital relaying applications.

  8. Improved artificial bee colony algorithm for wavefront sensor-less system in free space optical communication

    NASA Astrophysics Data System (ADS)

    Niu, Chaojun; Han, Xiang'e.

    2015-10-01

    Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony algorithm (ABC) is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness, and few parameters to set. In this paper, we simulate the application of the improved ABC to correct the distorted wavefront and demonstrate its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm, and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate, and the intensity fluctuation under different turbulence conditions before and after correction. The results show that the ABC algorithm corrects much faster than the DE algorithm and has better correction capability in strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.
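
    The specific improvements to ABC are not detailed in the abstract; the canonical ABC loop they modify is sketched below on a generic objective, with the usual employed, onlooker, and scout phases (population size, limit, and iteration budget are illustrative).

        import numpy as np

        def abc_minimize(f, bounds, n_food=20, limit=20, max_iter=200, seed=0):
            """Canonical artificial bee colony: employed and onlooker bees
            perturb food sources; scouts reset sources stuck past 'limit'."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(lo)
            X = rng.uniform(lo, hi, (n_food, dim))
            fit = np.array([f(x) for x in X])
            trials = np.zeros(n_food, dtype=int)

            def try_neighbor(i):
                k = rng.integers(n_food - 1)
                k = k + (k >= i)                  # random partner != i
                j = rng.integers(dim)
                v = X[i].copy()
                v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
                v = np.clip(v, lo, hi)
                fv = f(v)
                if fv < fit[i]:
                    X[i], fit[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1

            for _ in range(max_iter):
                for i in range(n_food):           # employed bee phase
                    try_neighbor(i)
                p = fit.max() - fit + 1e-12
                p /= p.sum()                      # onlookers favor better food
                for i in rng.choice(n_food, n_food, p=p):
                    try_neighbor(i)
                worn = trials > limit             # scout phase
                X[worn] = rng.uniform(lo, hi, (worn.sum(), dim))
                fit[worn] = [f(x) for x in X[worn]]
                trials[worn] = 0
            best = fit.argmin()
            return X[best], fit[best]

    In the sensorless-AO setting, f would be the negated coupling efficiency measured after applying a candidate set of deformable-mirror voltages.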

  9. Improving intensity-modulated radiation therapy using the anatomic beam orientation optimization algorithm

    SciTech Connect

    Potrebko, Peter S.; McCurdy, Boyd M. C.; Butler, James B.; El-Gubtan, Adel S.

    2008-05-15

    A novel, anatomic beam orientation optimization (A-BOO) algorithm is proposed to significantly improve conventional intensity-modulated radiation therapy (IMRT). The A-BOO algorithm vectorially analyses polygonal surface mesh data of contoured patient anatomy. Five optimal (5-opt) deliverable beam orientations are selected based on (1) tangential orientation bisecting the target and adjacent organs-at-risk (OARs) to produce precipitous dose gradients between them and (2) parallel incidence with polygon features of the target volume to facilitate conformal coverage. The 5-opt plans were compared to standard five, seven, and nine equiangular-spaced beam plans (5-equi, 7-equi, 9-equi) for: (1) gastric, (2) Radiation Therapy Oncology Group (RTOG) P-0126 prostate, and (3) RTOG H-0022 oropharyngeal (stage-III, IV) cancer patients. In the gastric case, the noncoplanar 5-opt plan reduced the right kidney V20Gy by 32.2%, 23.2%, and 20.6% compared to plans with five, seven, and nine equiangular-spaced beams. In the prostate case, the coplanar 5-opt plan produced similar rectal sparing as the 7-equi and 9-equi plans, with reductions of the V75Gy, V70Gy, V65Gy, and V60Gy of 2.4%, 5.3%, 7.0%, and 9.5% compared to the 5-equi plan. In the stage-III and IV oropharyngeal cases, the noncoplanar 5-opt plan substantially reduced the V30Gy and mean dose to the contralateral parotid compared to plans with five, seven, and nine equiangular-spaced beams: (stage-III) 7.1%, 5.2%, 6.8%, and 5.1, 3.5, 3.7 Gy and (stage-IV) 10.2%, 10.2%, 9.8% and 7.0, 7.1, 7.2 Gy. The geometry-based A-BOO algorithm has been demonstrated to be robust for application to a variety of IMRT treatment sites. Beam orientations producing significant improvements in OAR sparing over conventional IMRT can be automatically produced in minutes compared to hours with existing dose-based beam orientation optimization methods.

  10. Improvements to previous algorithms to predict gene structure and isoform concentrations using Affymetrix Exon arrays

    PubMed Central

    2010-01-01

    Background Exon arrays provide a way to measure the expression of different isoforms of genes in an organism. Most procedures for dealing with these arrays focus on gene expression or exon expression. Although the only biological analytes that can properly be assigned a concentration are transcripts, very few algorithms focus on them. The reason is that previously developed summarization methods do not work well when applied to transcripts. In addition, gene structure prediction, i.e., the correspondence between probes and novel isoforms, is a field which is still unexplored. Results We have modified and adapted a previous algorithm to take advantage of the special characteristics of the Affymetrix exon arrays. The structure and concentration of transcripts (some of them possibly unknown) in microarray experiments were predicted using this algorithm. Simulations showed that the suggested modifications improved both the specificity (SP) and sensitivity (ST) of the predictions. The algorithm was also applied to several real datasets, showing its effectiveness and its concordance with PCR-validated results. Conclusions The proposed algorithm shows a substantial improvement in performance over the previous version. This improvement is mainly due to the exploitation of the redundancy of the Affymetrix exon arrays. An R package of SPACE with the updated algorithms has been developed and is freely available. PMID:21110835

  11. Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong

    2015-12-01

    In distributed optical fiber sensing systems based on Brillouin scattering, strain and temperature are the main measured parameters, and they can be obtained by analyzing the Brillouin center frequency shift. A novel algorithm that combines the cuckoo search (CS) algorithm with an improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features under different linear weight ratios, linewidth combinations, and SNRs. Moreover, a BOTDR temperature measuring system based on electron optical frequency shift is set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show that there is a good linear relationship between the Brillouin center frequency shift and temperature changes.

  12. An improved coarse-grained parallel algorithm for computational acceleration of ordinary Kriging interpolation

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong

    2015-05-01

    Heavy computation limits the use of Kriging interpolation methods in many real-time applications, especially with ever-increasing problem sizes. Many researchers have realized that parallel processing techniques are critical to fully exploit computational resources and feasibly solve computation-intensive problems like Kriging. Much research has addressed the parallelization of the traditional approach to Kriging, but this computation-intensive procedure may not be suitable for high-resolution interpolation of spatial data. On the basis of a more effective serial approach, we propose an improved coarse-grained parallel algorithm to accelerate ordinary Kriging interpolation. In particular, the interpolation task of each unobserved point is considered a basic parallel unit. To reduce time complexity and memory consumption, the large right-hand-side matrix in the Kriging linear system is transformed and fixed at only two columns, and is therefore no longer directly dependent on the number of unobserved points. The MPI (Message Passing Interface) model is employed to implement our parallel programs in a homogeneous distributed-memory system. Experimentally, the improved parallel algorithm performs better than the traditional one in the spatial interpolation of annual average precipitation in Victoria, Australia. For example, when the number of processors is 24, the improved algorithm keeps the speed-up at 20.8 while the speed-up of the traditional algorithm only reaches 9.3. Likewise, the weak scaling efficiency of the improved algorithm is nearly 90% while that of the traditional algorithm drops to almost 40% with 16 processors. Experimental results also demonstrate that the performance of the improved algorithm is enhanced by increasing the problem size.
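
    The two-column right-hand-side transformation and the MPI decomposition are the paper's contributions and are not reproduced here; the per-point ordinary Kriging solve that constitutes the basic parallel unit can be sketched as follows, assuming an exponential covariance model (the model and its parameters are illustrative).

        import numpy as np

        def ordinary_kriging(obs_xy, obs_val, query_xy, range_=1.0, sill=1.0):
            """Ordinary Kriging of one query point with covariance
            C(h) = sill * exp(-h / range_)."""
            n = len(obs_val)
            d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :],
                               axis=2)
            C = sill * np.exp(-d / range_)
            # augmented system enforces that the weights sum to one
            K = np.ones((n + 1, n + 1))
            K[:n, :n] = C
            K[n, n] = 0.0
            d0 = np.linalg.norm(obs_xy - query_xy, axis=1)
            rhs = np.append(sill * np.exp(-d0 / range_), 1.0)
            sol = np.linalg.solve(K, rhs)
            weights = sol[:n]          # sol[n] is the Lagrange multiplier
            return weights @ obs_val

    Since the matrix K depends only on the observations, a parallel scheme can factorize it once and reuse the factorization across all query points, which is the redundancy the improved algorithm exploits.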

  13. Improving the Response of a Wheel Speed Sensor by Using a RLS Lattice Algorithm

    PubMed Central

    Hernandez, Wilmar

    2006-01-01

    Among the complete family of sensors for automotive safety, consumer, and industrial applications, speed sensors stand out as one of the most important. Speed sensors are used in a broad range of applications: in today's automotive industry they serve the antilock braking system, the traction control system, and the electronic stability program, and typical applications include cam and crank shaft position/speed and wheel and turbo shaft speed measurement. In addition, they are used to control a variety of functions, including fuel injection and ignition timing in engines. However, some types of speed sensors cannot respond to very low speeds, mainly because they become more susceptible to noise when the speed of the target is low; they suffer from noise and generally work only at medium to high speeds. This is one of the drawbacks of inductive (magnetic reluctance) speed sensors and is the case under study. Other speed sensors, such as differential Hall effect sensors, are relatively immune to interference and noise, but they cannot detect static fields, which limits their operation to speeds that give a switching frequency greater than a minimum operating frequency. This research is focused on improving the performance of a variable reluctance speed sensor, placed in a car under performance tests, by using a recursive least-squares (RLS) lattice algorithm. The algorithm is situated in an adaptive noise canceller and carries out an optimal estimation of the relevant signal coming from the sensor, which is buried in a broad-band noise background of largely unknown characteristics. The experimental results are satisfactory and show a significant improvement in the signal-to-noise ratio at the system output.

  14. [Application of improved locally linear embedding algorithm in dimensionality reduction of cancer gene expression data].

    PubMed

    Liu, Wenyuan; Wang, Chunlei; Wang, Baowen; Wang, Changwu

    2014-02-01

    Cancer gene expression data are characterized by high dimensionality and small sample sizes, so it is necessary to perform dimensionality reduction on the data. Traditional linear dimensionality reduction approaches cannot capture the nonlinear relationships between data points and yield poor dimensionality reduction results. Therefore, a multiple-weights locally linear embedding (LLE) algorithm with an improved distance is introduced to perform dimensionality reduction in this study. In this algorithm, we adopt an improved distance to calculate the neighbors of each data point, then introduce multiple sets of linearly independent local weight vectors for each neighborhood, and obtain the embedding of the high-dimensional data in the low-dimensional space by minimizing the reconstruction error. Experimental results showed that the multiple-weights LLE algorithm with improved distance performed well in reducing the dimensionality of cancer gene expression data.
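
    The multiple-weight and improved-distance modifications are specific to the paper; the standard LLE baseline they extend is sketched below (the neighbor count, regularization, and Euclidean distance are the usual defaults, not the paper's choices).

        import numpy as np

        def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
            """Standard locally linear embedding: reconstruct each point
            from its neighbors, then find low-dimensional coordinates
            that preserve the reconstruction weights."""
            n = X.shape[0]
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            np.fill_diagonal(d, np.inf)
            nbrs = np.argsort(d, axis=1)[:, :n_neighbors]
            W = np.zeros((n, n))
            for i in range(n):
                Z = X[nbrs[i]] - X[i]              # centered neighborhood
                G = Z @ Z.T
                G += (reg * np.trace(G) + 1e-12) * np.eye(n_neighbors)
                w = np.linalg.solve(G, np.ones(n_neighbors))
                W[i, nbrs[i]] = w / w.sum()
            M = (np.eye(n) - W).T @ (np.eye(n) - W)
            vals, vecs = np.linalg.eigh(M)
            # drop the bottom (constant) eigenvector, keep the next ones
            return vecs[:, 1:n_components + 1]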

  15. Microcellular propagation prediction model based on an improved ray tracing algorithm.

    PubMed

    Liu, Z-Y; Guo, L-X; Fan, T-Q

    2013-11-01

    Two-dimensional (2D)/two-and-one-half-dimensional ray tracing (RT) algorithms for the use of the uniform theory of diffraction and geometrical optics are widely used for channel prediction in urban microcellular environments because of their high efficiency and reliable prediction accuracy. In this study, an improved RT algorithm based on the "orientation face set" concept and on the improved 2D polar sweep algorithm is proposed. The goal is to accelerate point-to-point prediction, thereby making RT prediction attractive and convenient. In addition, the use of threshold control of each ray path and the handling of visible grid points for reflection and diffraction sources are adopted, resulting in an improved efficiency of coverage prediction over large areas. Measured results and computed predictions are also compared for urban scenarios. The results indicate that the proposed prediction model works well and is a useful tool for microcellular communication applications.

  16. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm

    PubMed Central

    Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu

    2016-01-01

    Among non-small cell lung cancers (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is usually regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The small overlap between the two resulting gene signatures illustrates that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945

  17. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm.

    PubMed

    Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu; Tian, Suyan

    2016-01-01

    Among non-small cell lung cancers (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is usually regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The small overlap between the two resulting gene signatures illustrates that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed.

  18. Improved SPGD algorithm to avoid local extremum for incoherent beam combining

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Wang, Tingfeng; Guo, Jin

    2017-01-01

    The stochastic parallel gradient descent (SPGD) algorithm and fast steering mirrors (FSM) are applied to incoherent beam combining in this paper. An equation is derived to calculate the wavefront reflected from the FSM under given control voltages, and the relationship between the strength of random disturbances and the combining efficiency is discussed via simulations, indicating that the combining efficiency is inversely proportional to the square of the disturbance strength. The maximum acceptable disturbance can be determined from the fitted curve, which provides an instructive way to reduce the disturbance in advance. In addition, the SPGD algorithm is improved to overcome its tendency to become trapped in a local extremum in incoherent beam combining. In the proposed algorithm, pattern recognition is used to check whether the algorithm is trapped, and an "additional move" can be applied to escape the local extremum. Simulation results show that the proposed algorithm improves the performance of incoherent beam combining: in comparative simulations under the same conditions, the value of the evaluation function increases by about 60% compared to the conventional algorithm, and the disturbance threshold increases by about 15% when the accepted value of the evaluation function is set to 0.8 in normalized form, demonstrating the feasibility of the method. Statistical data also show that the proposed method depends less on the gain coefficient.
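
    The pattern-recognition escape move is the paper's addition; the basic two-sided SPGD iteration that it augments is short enough to sketch directly, with the gain and perturbation size as illustrative constants.

        import numpy as np

        def spgd_maximize(J, u0, gain=0.5, sigma=0.05, n_iter=500, seed=0):
            """Basic SPGD: apply a random +/- perturbation, measure the
            metric change, and step along the estimated gradient."""
            rng = np.random.default_rng(seed)
            u = np.array(u0, dtype=float)
            for _ in range(n_iter):
                du = sigma * rng.choice([-1.0, 1.0], size=u.shape)
                dJ = J(u + du) - J(u - du)   # two-sided metric difference
                u += gain * dJ * du          # stochastic gradient ascent
            return u, J(u)

    Here u would hold the FSM control voltages and J the measured combining efficiency; the loop needs only metric evaluations, no wavefront sensing.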

  19. Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm

    PubMed Central

    2015-01-01

    Background Organ segmentation is an important step in computer-aided diagnosis and pathology detection. Accurate kidney segmentation in abdominal computed tomography (CT) sequences is an essential and crucial task for surgical planning and navigation in kidney tumor ablation. However, kidney segmentation in CT is substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. Results In this paper, a coarse-to-fine method was applied to segment the kidney from CT images; it consists of two stages, rough segmentation and refined segmentation. The rough segmentation is based on a kernel fuzzy C-means algorithm with spatial information (SKFCM), and the refined segmentation is implemented with an improved GrowCut (IGC) algorithm. The SKFCM algorithm introduces a kernel function and a spatial constraint into the fuzzy c-means clustering (FCM) algorithm. The IGC algorithm makes good use of the continuity of CT sequences in space, which allows it to generate the seed labels automatically and improve the efficiency of segmentation. Experimental results on a whole dataset of abdominal CT images show that the proposed method is accurate and efficient. The method provides a sensitivity of 95.46% with a specificity of 99.82% and performs better than other related methods. Conclusions Our method achieves high accuracy in kidney segmentation and considerably reduces the time and labor required for contour delineation. In addition, the method can be extended to 3D segmentation directly without modification. PMID:26356850

  20. An improved finger-vein recognition algorithm based on template matching

    NASA Astrophysics Data System (ADS)

    Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping

    2016-10-01

    Finger-vein recognition has become one of the most popular biometric identification methods, and research on recognition algorithms remains the key issue in this field. Many applicable algorithms have been developed so far. However, some problems remain in practice, such as variance in finger position, which may lead to image distortion and shifting; in addition, matching parameters determined from experience during the identification process may reduce the adaptability of an algorithm. Focusing on these problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. To enhance the robustness of the algorithm to image distortion, the least-squares error method is adopted to correct oblique fingers. During feature extraction, a local adaptive threshold method is adopted. For the matching score, we optimize the translation offsets as well as the matching distance between the input images and registered images on the basis of Naoto Miura's algorithm. Experimental results indicate that the proposed method effectively improves robustness under finger shifting and rotation.
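
    A translation-tolerant matching score in the spirit of Miura's method can be sketched as below; the overlap ratio and shift range are illustrative simplifications, not the paper's exact score.

        import numpy as np

        def match_score(template, sample, max_shift=10):
            """Slide the binary sample vein map over the template within
            +/- max_shift pixels and return the best overlap ratio."""
            best = 0.0
            denom = min(template.sum(), sample.sum()) + 1e-12
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(np.roll(sample, dy, axis=0),
                                      dx, axis=1)
                    overlap = np.logical_and(template, shifted).sum()
                    best = max(best, overlap / denom)
            return best   # closer to 1 means a better match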

  1. Collaborative Project: Building improved optimized parameter estimation algorithms to improve methane and nitrogen fluxes in a climate model

    SciTech Connect

    Mahowald, Natalie

    2016-11-29

    earth science with limited numbers of simulations; and, c) will be (as part of the proposed research) significantly improved both by adding asynchronous parallelism, early truncation of unsuccessful simulations, and the improvement of both serial and parallel performance by the use of derivative and sensitivity information from global and local surrogate approximations S(x). The algorithm development and testing will be focused on the CLM-ME/N model application, but the methods are general and are expected to also perform well on optimization for parameter estimation of other climate models and other classes of continuous multimodal optimization problems arising from complex simulation models. In addition, this proposal will compile available datasets of emissions of methane, nitrous oxides and reactive nitrogen species and develop protocols for site level comparisons with the CLM-ME/N. Once the model parameters are optimized against site level data, the model will be simulated at the global level and compared to atmospheric concentration measurements for the current climate, and future emissions will be estimated using climate change as simulated by the CESM. This proposal combines experts in earth system modeling, optimization, computer science, and process level understanding of soil gas emissions in an interdisciplinary team in order to improve the modeling of methane and nitrogen gas emissions. This proposal thus meets the requirements of the SciDAC RFP, by integrating state-of-the-art computer science and earth system to build an improved earth system model.

  2. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation

    PubMed Central

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which offers exponential acceleration for heuristic algorithms; it uses quantum bits to encode the artificial fish and applies quantum rotation gates, preying behavior, following behavior, and mutation of the quantum artificial fish to update the population while searching for the optimal value. We then apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) in simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, IQAFSA is applied to distributed network problems, and simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA. PMID:26447713

  3. Inhibition of class IIb histone deacetylase significantly improves cloning efficiency in mice.

    PubMed

    Ono, Tetsuo; Li, Chong; Mizutani, Eiji; Terashita, Yukari; Yamagata, Kazuo; Wakayama, Teruhiko

    2010-12-01

    Since the first mouse clone was produced by somatic cell nuclear transfer, the success rate of cloning in mice has been extremely low. Some histone deacetylase inhibitors, such as trichostatin A and scriptaid, have improved the full-term development of mouse clones significantly, but the mechanisms allowing for this are unclear. Here, we found that two other specific inhibitors, suberoylanilide hydroxamic acid and oxamflatin, could also reduce the rate of apoptosis in blastocysts, improve the full-term development of cloned mice, and increase establishment of nuclear transfer-generated embryonic stem cell lines significantly without leading to obvious abnormalities. However, another inhibitor, valproic acid, could not improve cloning efficiency. Suberoylanilide hydroxamic acid, oxamflatin, trichostatin A, and scriptaid are inhibitors for classes I and IIa/b histone deacetylase, whereas valproic acid is an inhibitor for classes I and IIa, suggesting that inhibiting class IIb histone deacetylase is an important step for reprogramming mouse cloning efficiency.

  4. Ultrasonic Imaging Using a Flexible Array: Improvements to the Maximum Contrast Autofocus Algorithm

    NASA Astrophysics Data System (ADS)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2009-03-01

    In previous work, we presented the maximum contrast autofocus algorithm for estimating unknown imaging parameters, e.g., for imaging through complicated surfaces using a flexible ultrasonic array. This paper details recent improvements to the algorithm. The algorithm operates by maximizing the image contrast metric with respect to the imaging parameters. For a flexible array, the relative positions of the array elements are parameterized using a cubic spline function, and the spline control points are estimated by iterative maximisation of the image contrast via simulated annealing. The resultant spline gives an estimate of the array geometry and of the profile of the surface it has conformed to, allowing the generation of a well-focused image. A pre-processing step is introduced to obtain an initial estimate of the array geometry, reducing the time taken for the algorithm to converge. Experimental results are demonstrated using a flexible array prototype.

  5. A Novel Adaptive Frequency Estimation Algorithm Based on Interpolation FFT and Improved Adaptive Notch Filter

    NASA Astrophysics Data System (ADS)

    Shen, Ting-ao; Li, Hua-nan; Zhang, Qi-xin; Li, Ming

    2017-02-01

    The convergence rate and continuous tracking precision are the two main problems of existing adaptive notch filters (ANF) for frequency tracking. To solve these problems, the frequency is first detected by interpolation FFT, which overcomes the slow convergence of the ANF. Then, following the idea of negative feedback, an evaluation factor is designed to monitor the ANF parameters and continuously maintain high frequency-tracking accuracy. On this basis, a novel adaptive frequency estimation algorithm based on interpolation FFT and an improved ANF is put forward; its basic idea, specific measures, and implementation steps are described in detail. The proposed algorithm achieves fast estimation of the signal frequency with higher accuracy and better generality. Simulation results verify the superiority and validity of the proposed algorithm compared with the original algorithms.
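
    The interpolation-FFT stage is standard; a sketch using three-point parabolic interpolation of the windowed FFT magnitude peak follows (the Hann window and the parabolic refinement are common choices, not necessarily the paper's).

        import numpy as np

        def interp_fft_freq(x, fs):
            """Coarse-to-fine frequency estimate: locate the FFT magnitude
            peak, then refine with three-point parabolic interpolation."""
            N = len(x)
            X = np.abs(np.fft.rfft(x * np.hanning(N)))
            k = int(np.argmax(X[1:-1])) + 1        # avoid the spectrum edges
            num = X[k - 1] - X[k + 1]
            den = X[k - 1] - 2 * X[k] + X[k + 1]
            delta = 0.5 * num / den if den != 0 else 0.0
            return (k + delta) * fs / N            # refined frequency in Hz

    An estimate of this kind, accurate to a small fraction of a bin, is what lets the ANF start close to the true frequency and converge quickly.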

  6. ULTRASONIC IMAGING USING A FLEXIBLE ARRAY: IMPROVEMENTS TO THE MAXIMUM CONTRAST AUTOFOCUS ALGORITHM

    SciTech Connect

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2009-03-03

    In previous work, we have presented the maximum contrast autofocus algorithm for estimating unknown imaging parameters, e.g., for imaging through complicated surfaces using a flexible ultrasonic array. This paper details recent improvements to the algorithm. The algorithm operates by maximizing the image contrast metric with respect to the imaging parameters. For a flexible array, the relative positions of the array elements are parameterized using a cubic spline function and the spline control points are estimated by iterative maximisation of the image contrast via simulated annealing. The resultant spline gives an estimate of the array geometry and the profile of the surface that it has conformed to, allowing the generation of a well-focused image. A pre-processing step is introduced to obtain an initial estimate of the array geometry, reducing the time taken for the algorithm to convergence. Experimental results are demonstrated using a flexible array prototype.

  7. Improved discrete swarm intelligence algorithms for endmember extraction from hyperspectral remote sensing images

    NASA Astrophysics Data System (ADS)

    Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing

    2016-10-01

    Endmember extraction is a key step in hyperspectral unmixing. A new endmember extraction framework is proposed for hyperspectral endmember extraction. The proposed approach is based on swarm intelligence (SI) algorithms, where discretization is used to solve the SI algorithm because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that generally belong to the same classes. Three endmember extraction methods are proposed based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.

  8. New shooting algorithms for transition path sampling: centering moves and varied-perturbation sizes for improved sampling.

    PubMed

    Rowley, Christopher N; Woo, Tom K

    2009-12-21

    Transition path sampling has been established as a powerful tool for studying the dynamics of rare events. The trajectory generation moves of this Monte Carlo procedure, shooting moves and shifting moves, were developed primarily for rate constant calculations, although the method has been used more extensively to study the dynamics of reactive processes. We have devised and implemented three alternative trajectory generation moves for use with transition path sampling. The centering-shooting move incorporates a shifting move into a shooting move, which centers the transition period in the middle of the trajectory, eliminating the need for shifting moves and generating an ensemble where the transition event consistently occurs near the middle of the trajectory. We have also developed varied-perturbation-size shooting moves, wherein smaller perturbations are made if the shooting point is far from the transition event. The trajectories generated using these moves decorrelate significantly faster than with conventional, constant-size perturbations. This results in an increase in statistical efficiency by a factor of 2.5-5 when compared to the conventional shooting algorithm. On the other hand, the new algorithm breaks detailed balance and introduces a small bias in the transition time distribution. We have developed a modification of this varied-perturbation-size shooting algorithm that preserves detailed balance, albeit at the cost of decreased sampling efficiency. Both varied-perturbation-size shooting algorithms are found to have improved sampling efficiency when compared to the original constant-perturbation-size shooting algorithm.

  9. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284
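
    The heart of the correction projection can be sketched as a small least-squares problem; the geometry-matrix convention below (rows of negative line-of-sight unit vectors plus a clock column) is one common choice, and sign conventions may differ from the paper's.

        import numpy as np

        def dgnss_cp_correction(los_unit, prc):
            """Project range-domain corrections into the position domain.
            los_unit: (n_sats, 3) line-of-sight unit vectors (receiver to
            satellite, ECEF); prc: (n_sats,) pseudorange corrections in
            meters. The fourth column absorbs the common clock term."""
            n = len(prc)
            G = np.hstack([-los_unit, np.ones((n, 1))])   # geometry matrix
            sol, *_ = np.linalg.lstsq(G, prc, rcond=None)
            return sol[:3]     # ECEF position correction in meters

    The returned vector is added to the smartphone's reported position, which is what allows the correction to be applied entirely in software.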

  10. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    PubMed

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-06-18

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings.

  11. An improved PSO algorithm for parameter identification of nonlinear dynamic hysteretic models

    NASA Astrophysics Data System (ADS)

    Zhang, Junhao; Xia, Pinqi

    2017-02-01

    The nonlinear dynamic hysteretic models used in nonlinear dynamic analysis generally contain many model parameters, which need to be identified accurately and effectively. The accuracy and effectiveness of identification generally depend on the complexity of the model, the number of model parameters, and the proximity of the initial parameter values. The particle swarm optimization (PSO) algorithm has a random searching ability and has been widely applied to parameter identification in nonlinear dynamic hysteretic models. However, the PSO algorithm may get trapped in a local optimum and exhibit premature convergence, failing to obtain the true optimum. In this paper, an improved PSO algorithm for identifying the parameters of nonlinear dynamic hysteretic models is presented by defining a fitness function for the hysteretic model. The improved PSO algorithm enhances the global searching ability and avoids the premature convergence of the conventional PSO algorithm. It has been applied to identify the parameters of two nonlinear dynamic hysteretic models: the Leishman-Beddoes (LB) dynamic stall model of a rotor blade and the anelastic displacement fields (ADF) model of an elastomeric damper, which can be used as the lead-lag damper in a rotor. The accuracy and effectiveness of the improved PSO algorithm for identifying the parameters of the LB and ADF models are validated by comparing the identified results with test results. The investigation indicates that repeating the identification several times is an effective way to reduce the influence of the randomness of the PSO algorithm on the accuracy of the identified parameters.
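
    The fitness function for the hysteretic models and the specific anti-premature-convergence mechanism are the paper's; the standard global-best PSO loop being improved can be sketched as follows, with the usual inertia and acceleration constants as illustrative values.

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, w=0.7, c1=1.5, c2=1.5,
                         max_iter=200, seed=0):
            """Standard global-best PSO with an inertia weight."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(lo)
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros((n_particles, dim))
            pbest = x.copy()
            pbest_f = np.array([f(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(max_iter):
                r1 = rng.random((n_particles, dim))
                r2 = rng.random((n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

    For parameter identification, f would return the error between the model response under a candidate parameter vector and the measured test data.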

  12. Combined image-processing algorithms for improved optical coherence tomography of prostate nerves

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Weldon, Thomas P.; Fiddy, Michael A.; Fried, Nathaniel M.

    2010-07-01

    Cavernous nerves course along the surface of the prostate gland and are responsible for erectile function. These nerves are at risk of injury during surgical removal of a cancerous prostate gland. In this work, a combination of segmentation, denoising, and edge detection algorithms is applied to time-domain optical coherence tomography (OCT) images of rat prostate to improve identification of cavernous nerves. First, OCT images of the prostate are segmented to differentiate the cavernous nerves from the prostate gland. Then, a locally adaptive denoising algorithm using a dual-tree complex wavelet transform is applied to reduce speckle noise. Finally, edge detection is used to provide deeper imaging of the prostate gland. Combined application of these three algorithms results in improved signal-to-noise ratio, imaging depth, and automatic identification of the cavernous nerves, which may be of direct benefit for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

  13. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    PubMed Central

    He, Baochun; Ma, Zhiyuan; Zong, Mao; Zhou, Xiangrong; Fujita, Hiroshi

    2013-01-01

    A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, building on the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, and comparisons are conducted with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method. PMID:24066017

  14. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
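
    One way to picture the two constraints is as projections applied inside an iterative solver. The toy step below is our own reading, not the authors' reconstruction code; the array layout (time, y, x), the normalization and the threshold are all assumptions:

```python
import numpy as np

def sic_project(datacube, mask, unsheared, thresh=0.05):
    """Apply both constraints to a reconstructed datacube of shape
    (t, y, x): zero voxels outside the CCD-derived spatial mask, then
    suppress (y, x) positions where the temporal projection disagrees
    with the normalized time-unsheared image by more than `thresh`."""
    cube = datacube * mask[None, :, :]       # spatial support constraint
    proj = cube.sum(axis=0)                  # temporal projection -> (y, x)
    p = proj / (proj.max() or 1.0)           # normalize both images
    u = unsheared / (unsheared.max() or 1.0)
    bad = np.abs(p - u) > thresh
    cube[:, bad] = 0.0                       # intensity-consistency constraint
    return cube
```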

  15. Improved nucleosome-positioning algorithm iNPS for accurate nucleosome positioning from sequencing data.

    PubMed

    Chen, Weizhong; Liu, Yi; Zhu, Shanshan; Green, Christopher D; Wei, Gang; Han, Jing-Dong Jackie

    2014-09-18

    Accurate determination of genome-wide nucleosome positioning can provide important insights into global gene regulation. Here, we describe the development of an improved nucleosome-positioning algorithm-iNPS-which achieves significantly better performance than the widely used NPS package. By determining nucleosome boundaries more precisely and merging or separating shoulder peaks based on local MNase-seq signals, iNPS can unambiguously detect 60% more nucleosomes. The detected nucleosomes display better nucleosome 'widths' and neighbouring centre-centre distance distributions, giving rise to sharper patterns and better phasing of average nucleosome profiles and higher consistency between independent data subsets. In addition to its unique advantage in classifying nucleosomes by shape to reveal their different biological properties, iNPS also achieves higher significance and lower false positive rates than previously published methods. The application of iNPS to T-cell activation data demonstrates a greater ability to facilitate detection of nucleosome repositioning, uncovering additional biological features underlying the activation process.
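
    A greatly reduced flavour of such a caller can be written with standard peak detection. The sketch below is illustrative only; it is not the iNPS method, and the smoothing width, peak width and merge gap are arbitrary. It detects peaks on a smoothed per-base coverage track (a 1-D NumPy array) and merges near-touching calls, a crude stand-in for iNPS's shoulder-peak handling:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def call_nucleosomes(coverage, sigma=15, min_width=80, merge_gap=20):
    """Toy nucleosome caller: smooth the MNase-seq coverage, detect peaks,
    take their bases as nucleosome boundaries, and merge adjacent calls
    whose edges are closer than `merge_gap` bp."""
    smooth = gaussian_filter1d(np.asarray(coverage, float), sigma)
    peaks, props = find_peaks(smooth, width=min_width, rel_height=0.9)
    calls = sorted(zip(props["left_ips"].astype(int),
                       props["right_ips"].astype(int)))
    merged = []
    for left, right in calls:
        if merged and left - merged[-1][1] < merge_gap:
            merged[-1] = (merged[-1][0], right)   # fold shoulder into peak
        else:
            merged.append((left, right))
    return merged
```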

  16. Histone deacetylase inhibitor significantly improved the cloning efficiency of porcine somatic cell nuclear transfer embryos.

    PubMed

    Huang, Yongye; Tang, Xiaochun; Xie, Wanhua; Zhou, Yan; Li, Dong; Yao, Chaogang; Zhou, Yang; Zhu, Jianguo; Lai, Liangxue; Ouyang, Hongsheng; Pang, Daxin

    2011-12-01

    Valproic acid (VPA), a histone deacetylase inhibitor, has been shown to generate induced pluripotent stem (iPS) cells from mouse and human fibroblasts with significantly higher efficiency. Because successful cloning by somatic cell nuclear transfer (SCNT) undergoes a full reprogramming process in which the epigenetic state of a differentiated donor nucleus is converted into an embryonic totipotent state, we speculated that VPA would be useful in promoting cloning efficiency. Therefore, in the present study, we examined whether VPA can promote the developmental competence of SCNT embryos by improving the reprogramming state of the donor nucleus. Here we report that 1 mM VPA for 14 to 16 h following activation significantly increased the rate of blastocyst formation of porcine SCNT embryos constructed from Landrace fetal fibroblast cells compared to the control (31.8 vs. 11.4%). However, we found that the acetylation levels of histone H3 lysine 14 and histone H4 lysine 5 and the expression levels of Oct4, Sox2, and Klf4 were not significantly changed between VPA-treated and untreated groups at the blastocyst stage. The SCNT embryos were transferred to 38 surrogates, and the cloning efficiency in the treated group was significantly improved compared with the control group. Taken together, we have demonstrated that VPA can improve both the in vitro and in vivo developmental competence of porcine SCNT embryos.

  17. Integrating soil information into canopy sensor algorithms for improved corn nitrogen rate recommendation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algorithm...

  18. An improved bi-level algorithm for partitioning dynamic grid hierarchies.

    SciTech Connect

    Deiterding, Ralf (California Institute of Technology, Pasadena, CA); Johansson, Henrik (Uppsala University, Uppsala, Sweden); Steensland, Johan; Ray, Jaideep

    2006-05-01

    Structured adaptive mesh refinement (SAMR) methods are being widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications. But achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. But the original bi-level partitioning approach in Nature+Fable is insufficient, as for realistic applications it regards frequently occurring bi-levels as "impossible" and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementing approach. By using a new, customized classification method, the improved algorithm switches automatically between the two approaches. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to be able to handle realistic applications, and to generate less imbalance and similar box counts, but more communication, as compared to the native, domain-based partitioner in the SAMR framework AMROC.

  19. An improved random walk algorithm for the implicit Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Keady, Kendra P.; Cleveland, Mathew A.

    2017-01-01

    In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in "fully-gray" form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2-4 compared to standard RW, and a factor of ∼3-6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
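
    The group-cutoff idea can be sketched per cell: flag the frequency groups that are optically thick and collapse the opacity over those groups only. The snippet below is our own simplification; the Planck-mean weighting and the threshold value are assumptions, and the real method sets a single frequency cutoff per cell rather than an arbitrary per-group flag:

```python
import numpy as np

def pgrw_split(sigma_g, planck_weights, cell_width, tau_thick=5.0):
    """Flag frequency groups that are optically thick in this cell and
    group-collapse the opacity over the thick groups only."""
    tau = sigma_g * cell_width            # per-group optical depth
    thick = tau >= tau_thick              # groups eligible for random walk
    if not thick.any():
        return thick, None                # whole cell stays on standard IMC
    w = planck_weights[thick]
    sigma_rw = np.sum(w * sigma_g[thick]) / np.sum(w)
    return thick, sigma_rw

# Particles in groups flagged `thick` would take diffusion (RW) steps with
# the collapsed opacity sigma_rw; all other particles use standard IMC.
```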

  20. On the Simulation of Sea States with High Significant Wave Height for the Validation of Parameter Retrieval Algorithms for Future Altimetry Missions

    NASA Astrophysics Data System (ADS)

    Kuschenerus, Mieke; Cullen, Robert

    2016-08-01

    To ensure the reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel-6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established. The retrieval methods need to be validated extensively on a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave heights up to 20 m, which allow simulation of radar altimeter response echoes for extreme sea states in SAR and low-resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing the precision of the applied parameter retrieval methods to be estimated. We thus establish a validation method for significant wave height retrieval for sea states causing high significant wave heights, allowing improved understanding and planning of future satellite altimetry mission validation.
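
    The dependence of the echo shape on SWH can be illustrated with the standard Brown ocean-echo model, in which the leading-edge rise time grows with wave height. The sketch below is generic altimetry, not the paper's simulator; the PTR width (tied here to a 3.125 ns gate), the time axis and the trailing-edge decay are placeholder values:

```python
import numpy as np
from scipy.special import erf

C = 299792458.0  # speed of light, m/s

def brown_waveform(t, swh, t0=0.0, sigma_p=0.513 * 3.125e-9,
                   decay=4e6, amp=1.0):
    """Simplified Brown ocean-echo model: the composite rise time sigma_c
    combines the point-target-response width with an SWH-dependent term,
    so higher SWH gives a slower leading edge."""
    sigma_c = np.sqrt(sigma_p**2 + (swh / (2.0 * C))**2)
    leading = 0.5 * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma_c)))
    trailing = np.exp(-decay * np.clip(t - t0, 0.0, None))
    return amp * leading * trailing

# A retracker inverts the leading-edge slope to estimate wave height:
t = np.linspace(-2e-8, 1e-7, 500)
echo_2m, echo_15m = brown_waveform(t, 2.0), brown_waveform(t, 15.0)
```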

  1. Improving Multi-Component Maintenance Acquisition with a Greedy Heuristic Local Algorithm

    DTIC Science & Technology

    2013-04-01

    need to improve the decision making process for system sustainment including maintenance, repair, and overhaul (MRO) operations and the acquisition of MRO parts. To help address the link between sustainment policies and acquisition, this work develops a greedy heuristic-based local search algorithm to...

  2. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoqian; Guo, Qinghua; Su, Yanjun; Xue, Baolin

    2016-07-01

    Filtering of light detection and ranging (LiDAR) data into ground and non-ground points is a fundamental step in processing raw airborne LiDAR data. This paper proposes an improved progressive triangulated irregular network (TIN) densification (IPTD) filtering algorithm that can cope with a variety of forested landscapes, particularly both topographically and environmentally complex regions. The IPTD filtering algorithm consists of three steps: (1) acquiring potential ground seed points using the morphological method; (2) obtaining accurate ground seed points; and (3) building a TIN-based model and iteratively densifying the TIN. The IPTD filtering algorithm was tested in 15 forested sites with various terrains (i.e., elevation and slope) and vegetation conditions (i.e., canopy cover and tree height), and was compared with seven other commonly used filtering algorithms (including morphology-based, slope-based, and interpolation-based filtering algorithms). Results show that the IPTD achieves the highest filtering accuracy for nine of the 15 sites. In general, it outperforms the other filtering algorithms, yielding the lowest average total error of 3.15% and the highest average kappa coefficient of 89.53%.
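
    A heavily reduced sketch of the progressive-TIN-densification family of filters, to which IPTD belongs, is shown below: per-cell lowest points stand in for the morphological seed step, and points close to a surface interpolated from the current ground set are accepted iteratively. The cell size, distance threshold and use of a linear interpolant in place of a true TIN with angle tests are all simplifications:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def ptd_filter(points, cell=20.0, dist_thresh=0.5, n_iter=5):
    """Very reduced PTD-style ground filter. `points` is an (n, 3) array
    of x, y, z; returns a boolean ground mask."""
    xy, z = points[:, :2], points[:, 2]
    keys = np.floor(xy / cell).astype(int)
    seeds = {}                              # lowest point per grid cell
    for i, k in enumerate(map(tuple, keys)):
        if k not in seeds or z[i] < z[seeds[k]]:
            seeds[k] = i
    ground = np.zeros(len(points), bool)
    ground[list(seeds.values())] = True
    for _ in range(n_iter):                 # iterative densification
        tin = LinearNDInterpolator(xy[ground], z[ground])
        resid = np.abs(z - tin(xy))         # NaN outside the convex hull
        newly = (~ground) & (resid < dist_thresh) & ~np.isnan(resid)
        if not newly.any():
            break
        ground |= newly
    return ground
```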

  3. Improved direct cover heuristic algorithms for synthesis of multiple-valued logic functions

    NASA Astrophysics Data System (ADS)

    Abd-El-Barr, Mostafa I.; Khan, Esam A.

    2014-02-01

    Multiple-valued logic (MVL) circuits using complementary metal-oxide semiconductor (CMOS) technology have been successfully used in implementing a number of digital signal processing (DSP) applications. Heuristic algorithms using the direct cover (DC) approach have been widely used in synthesising (near) minimal two-level realisation of MVL functions. This article presents three improved DC-based algorithms: weighted direct-cover (WDC), ordered direct-cover (ODC) and fuzzy direct-cover (FDC). In the WDC, a weighted-sum scheme for combining a number of different criteria for minterm and implicant selection was applied. In the ODC, a set of criteria for the selection of appropriate minterm and implicant was applied in a specific order. In the FDC, a fuzzy-based algorithm for minterm and implicant selection was introduced. The proposed heuristic algorithms were tested using two sets of benchmarks. The first consists of 50,000 2-variable 4-valued randomly generated functions and the second consists of 50,000 2-variable 5-valued randomly generated functions. The results obtained using the three heuristic algorithms were compared to those obtained using three existing DC-based techniques. It is shown that the heuristic algorithms outperform existing DC-based approaches in terms of the average number of product terms (a measure of the chip area consumed) required to realise a given MVL function.

  4. Medical Image Encryption: An Application for Improved Padding Based GGH Encryption Algorithm.

    PubMed

    Sokouti, Massoud; Zakerolhosseini, Ali; Sokouti, Babak

    2016-01-01

    Medical images are regarded as important and sensitive data in medical informatics systems. For transferring medical images over an insecure network, developing a secure encryption algorithm is necessary. Among the three main properties of security services (i.e., confidentiality, integrity, and availability), confidentiality is the most essential feature for exchanging medical images among physicians. The Goldreich-Goldwasser-Halevi (GGH) algorithm can be a good choice for encrypting medical images, as both the algorithm and the sensitive data are represented by numeric matrices. Additionally, the GGH algorithm does not increase the size of the image, and hence its complexity remains as simple as O(n²). However, one of the disadvantages of using the GGH algorithm is the chosen-ciphertext attack. In our strategy, this shortcoming of the GGH algorithm has been taken into consideration and improved by applying padding (i.e., snail tour XORing) before the GGH encryption process. For evaluating their performances, three measurement criteria are considered: (i) Number of Pixels Change Rate (NPCR), (ii) Unified Average Changing Intensity (UACI), and (iii) avalanche effect. The results on three different sizes of images showed that the padding GGH approach improved UACI, NPCR, and avalanche by almost 100%, 35%, and 45%, respectively, in comparison to the standard GGH algorithm. The outcomes also make the padding GGH resistant to ciphertext, chosen-ciphertext, and statistical attacks. Furthermore, increasing the avalanche effect by more than 50% is a promising achievement in comparison to the increased complexity of the proposed method in terms of encryption and decryption processes.

  5. Medical Image Encryption: An Application for Improved Padding Based GGH Encryption Algorithm

    PubMed Central

    Sokouti, Massoud; Zakerolhosseini, Ali; Sokouti, Babak

    2016-01-01

    Medical images are regarded as important and sensitive data in medical informatics systems. For transferring medical images over an insecure network, developing a secure encryption algorithm is necessary. Among the three main properties of security services (i.e., confidentiality, integrity, and availability), confidentiality is the most essential feature for exchanging medical images among physicians. The Goldreich-Goldwasser-Halevi (GGH) algorithm can be a good choice for encrypting medical images, as both the algorithm and the sensitive data are represented by numeric matrices. Additionally, the GGH algorithm does not increase the size of the image, and hence its complexity remains as simple as O(n²). However, one of the disadvantages of using the GGH algorithm is the chosen-ciphertext attack. In our strategy, this shortcoming of the GGH algorithm has been taken into consideration and improved by applying padding (i.e., snail tour XORing) before the GGH encryption process. For evaluating their performances, three measurement criteria are considered: (i) Number of Pixels Change Rate (NPCR), (ii) Unified Average Changing Intensity (UACI), and (iii) avalanche effect. The results on three different sizes of images showed that the padding GGH approach improved UACI, NPCR, and avalanche by almost 100%, 35%, and 45%, respectively, in comparison to the standard GGH algorithm. The outcomes also make the padding GGH resistant to ciphertext, chosen-ciphertext, and statistical attacks. Furthermore, increasing the avalanche effect by more than 50% is a promising achievement in comparison to the increased complexity of the proposed method in terms of encryption and decryption processes. PMID:27857824
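
    The records above name the padding step "snail tour XORing" without giving details; one plausible reading is a chain XOR of pixels along an inward spiral, so that a change in one ciphertext block diffuses through the whole image. The sketch below is that reading only, not the authors' code (uint8 pixel values assumed; decryption XORs ciphertext neighbours in reverse tour order):

```python
import numpy as np

def snail_indices(rows, cols):
    """Yield (r, c) indices in an inward clockwise spiral ('snail tour')."""
    top, bottom, left, right = 0, rows - 1, 0, cols - 1
    while top <= bottom and left <= right:
        for c in range(left, right + 1):
            yield (top, c)
        for r in range(top + 1, bottom + 1):
            yield (r, right)
        if top < bottom:
            for c in range(right - 1, left - 1, -1):
                yield (bottom, c)
        if left < right:
            for r in range(bottom - 1, top, -1):
                yield (r, left)
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1

def snail_xor_pad(img):
    """Chain-XOR pixels along the snail tour before encryption; invert by
    computing p[i] = e[i] ^ e[i-1] along the same tour."""
    out = img.copy()                     # expects an unsigned-int array
    tour = list(snail_indices(*img.shape))
    for prev, cur in zip(tour, tour[1:]):
        out[cur] ^= out[prev]
    return out
```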

  6. An improved lossless group compression algorithm for seismic data in SEG-Y and MiniSEED file formats

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shen, Tong; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2017-03-01

    An improved lossless group compression algorithm is proposed for decreasing the size of SEG-Y files to relieve the enormous burden associated with the transmission and storage of large amounts of seismic exploration data. Because each data point is represented by 4 bytes in SEG-Y files, the file is broken down into 4 subgroups, and the Gini coefficient is employed to analyze the distribution of the overall data and each of the 4 data subgroups within the range [0,255]. The results show that each subgroup has a characteristic frequency distribution suited to a distinct compression algorithm. Therefore, the data of each subgroup were compressed using the best-suited algorithm. After comparing the compression ratios obtained for each data subgroup using different algorithms, the Lempel-Ziv-Markov chain algorithm (LZMA) was selected for the compression of the first two subgroups and the Deflate algorithm for the latter two subgroups. The compression ratios and decompression times obtained with the improved algorithm were compared with those obtained with commonly employed compression algorithms for SEG-Y files of different sizes. The experimental results show that the improved algorithm provides a compression ratio of 75-80%, which is more effective than compression algorithms presently applied to SEG-Y files. In addition, the proposed algorithm was applied to the miniSEED format used in natural earthquake monitoring and compared with the Steim2 compression algorithm; the results again show that the proposed algorithm provides better data compression.
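
    The grouping scheme is easy to prototype: split the 4-byte samples into byte planes, score each plane's skew with the Gini coefficient, and compress the planes with the algorithms the paper pairs them with. The sketch below assumes the sample area is passed in as raw bytes with 4-byte samples; the Gini-based selection logic is reduced to the reported fixed pairing:

```python
import lzma
import zlib
import numpy as np

def gini(byte_plane):
    """Gini coefficient of the byte-value frequency distribution over
    [0, 255]; a rough measure of how skewed (compressible) a plane is."""
    counts = np.sort(np.bincount(np.frombuffer(byte_plane, np.uint8),
                                 minlength=256).astype(float))
    n = counts.size
    return (2.0 * np.sum(np.arange(1, n + 1) * counts)
            / (n * counts.sum())) - (n + 1) / n

def compress_segy_samples(raw):
    """Split 4-byte samples into 4 byte planes; LZMA for the first two
    planes, Deflate for the last two (the pairing the paper reports)."""
    planes = [raw[i::4] for i in range(4)]
    return [lzma.compress(p) if i < 2 else zlib.compress(p, 9)
            for i, p in enumerate(planes)]
```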

  7. Analysis of longitudinal variations in North Pacific alkalinity to improve predictive algorithms

    NASA Astrophysics Data System (ADS)

    Fry, Claudia H.; Tyrrell, Toby; Achterberg, Eric P.

    2016-10-01

    The causes of natural variation in alkalinity in the North Pacific surface ocean need to be investigated to understand the carbon cycle and to improve predictive algorithms. We used GLODAPv2 to test hypotheses on the causes of three longitudinal phenomena in Alk*, a tracer of calcium carbonate cycling. These phenomena are (a) an increase from east to west between 45°N and 55°N, (b) an increase from west to east between 25°N and 40°N, and (c) a minor increase from west to east in the equatorial upwelling region. Between 45°N and 55°N, Alk* is higher on the western than on the eastern side, and this is associated with denser isopycnals with higher Alk* lying at shallower depths. Between 25°N and 40°N, upwelling along the North American continental shelf causes higher Alk* in the east. Along the equator, a strong east-west trend was not observed, even though the upwelling on the eastern side of the basin is more intense, because the water brought to the surface is not high in Alk*. We created two algorithms to predict alkalinity, one for the entire Pacific Ocean north of 30°S and one for the eastern margin. The Pacific Ocean algorithm is more accurate than the commonly used algorithm published by Lee et al. (2006), of similar accuracy to the best previously published algorithm by Sasse et al. (2013), and is less biased with longitude than other algorithms in the subpolar North Pacific. Our eastern margin algorithm is more accurate than previously published algorithms.

  8. Improved Surface and Tropospheric Temperatures Determined Using Only Shortwave Channels: The AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2011-01-01

    The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm⁻¹ (15.38 μm) to 2665 cm⁻¹ (3.752 μm). These spectral regions contain significant absorption features from two CO2 absorption bands: the 15 μm (longwave) CO2 band, and the 4.3 μm (shortwave) CO2 absorption band. There are also two atmospheric window regions: the 12 μm - 8 μm (longwave) window, and the 4.17 μm - 3.75 μm (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed using primarily observations in the longwave window and CO2 absorption regions. According to cloud clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover conditions if one uses observations in longwave channels to determine coefficients which generate cloud-cleared radiances R̂ᵢ for all channels, and uses R̂ᵢ only from shortwave channels in the determination of surface and atmospheric temperatures. This procedure is now being used in the AIRS Version-6 Retrieval Algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.

  9. Improving image reconstruction algorithm for rotating modulation collimators using a variance stabilizing transform

    NASA Astrophysics Data System (ADS)

    Shin, Y.; Kim, G.; Lee, G.

    2017-01-01

    A rotating modulation collimator (RMC) is an indirect imaging technique that has proven useful for remote radiation source detection. While it was initially invented for detecting sources in a far field, a recent development by Kowash has shown the feasibility of the RMC for detecting mid-range sources. However, their image reconstruction algorithm often produces spurious source estimates in pixels where no source exists. In this paper, we propose to improve the RMC image quality using a variance stabilizing transform. The transform reduces the inhomogeneous Poisson noise in the RMC data. In our simulation study, the proposed algorithm could reconstruct RMC images without misleading artifacts.
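
    The abstract does not name the transform, but the classic choice for stabilizing inhomogeneous Poisson noise is the Anscombe transform, which maps counts to approximately unit variance so that Gaussian-based reconstruction can be applied. A minimal version, with a simple algebraic inverse, might look like this:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson counts -> approximately unit-variance
    Gaussian data, applied to the RMC count data before reconstruction."""
    return 2.0 * np.sqrt(np.asarray(x, float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse; an unbiased inverse would add small
    correction terms."""
    return (np.asarray(y, float) / 2.0) ** 2 - 3.0 / 8.0
```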

  10. An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.

    2000-01-01

    A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target where diffusion out the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.

  11. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    PubMed

    Sethi, Gaurav; Saini, B S

    2015-12-01

    This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring; this leads to the least-fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the search for the optimal solution. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Therefore, each of the least-fit parent chromosomes is combined with a high-fitness parent to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
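
    The best-with-worst mating scheme described above can be sketched directly. The code below is illustrative only; binary chromosomes, single-point crossover and the crossover probability are assumptions, and the modified elitism step is omitted:

```python
import numpy as np

def pair_and_cross(pop, fitness, rng, pc=0.9):
    """Sort by fitness and pair the weakest with the strongest, so every
    low-fitness chromosome is crossed with a high-fitness one."""
    order = np.argsort(fitness)[::-1]        # best first (maximising)
    ranked = pop[order]
    n, L = ranked.shape
    children = ranked.copy()
    for i in range(n // 2):
        a, b = ranked[i], ranked[n - 1 - i]  # best-with-worst pairing
        if rng.random() < pc:
            cut = int(rng.integers(1, L))    # single-point crossover
            children[i] = np.concatenate([a[:cut], b[cut:]])
            children[n - 1 - i] = np.concatenate([b[:cut], a[cut:]])
    return children

# Toy usage: 20 binary chromosomes scored by their number of ones.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, (20, 16))
next_gen = pair_and_cross(pop, pop.sum(axis=1).astype(float), rng)
```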

  12. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile ad hoc networks play an increasingly important part in disaster relief, military battlefields and scientific exploration. However, routing difficulties are increasingly prominent due to their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It elaborately designs the calculation method of the optimal routing algorithm used by the protocol and the transmission mechanism for communication packages. By adding a QoS constraint to the cuckoo search (CS) optimization, the optimal routes found conform to the specified bandwidth and time-delay requirements, and a balance is obtained among computational cost, bandwidth and time delay. NS2 simulation software is used to test the performance of the protocol in three scenarios and to validate the feasibility and validity of the CSAODV protocol. The results show that the CSAODV routing protocol adapts better to changes in network topology than AODV, effectively improves the packet delivery fraction, reduces the transmission delay of the network, reduces the extra load placed on the network by control information, and improves the routing efficiency of the network.

  13. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel, with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient-edge selection method based on relative total variation (RTV) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.

  14. A Novel Optimization Technique to Improve Gas Recognition by Electronic Noses Based on the Enhanced Krill Herd Algorithm

    PubMed Central

    Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan

    2016-01-01

    An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C6H6), toluene (C7H8), formaldehyde (CH2O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish indoor pollutant gases; two of its parameters need to be optimized, so in order to improve the performance of the SVM, in other words, to obtain a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision-weighting-factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, on occasion it cannot escape the influence of local best solutions, so it cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues, we propose an enhanced KH (EKH) to improve the global searching ability and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach. This guarantees that the krill group is diverse in the early iterations and performs well in local searching in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and the genetic algorithm (GA)), and we find that EKH outperforms the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of other improved krill algorithms' applications in all E-nose application areas.

  15. Operationality Improvement Control of Electric Power Assisted Wheelchair by Fuzzy Algorithm Considering Posture Angle

    NASA Astrophysics Data System (ADS)

    Murakami, Hiroki; Seki, Hirokazu; Minakata, Hideaki; Tadakuma, Susumu

    This paper describes a novel operationality improvement control for electric power assisted wheelchairs. The “electric power assisted wheelchair”, which assists the driving force with electric motors, is expected to be widely used as a mobility support system for elderly and disabled people; however, its performance on straight and circular roads must be further improved because the two wheels are driven independently. This paper proposes a novel operationality improvement control based on a fuzzy algorithm to realize stable driving on straight and circular roads. The suitable assist torque for the right and left wheels is determined by the fuzzy algorithm based on the posture angular velocity, the posture angle of the wheelchair, the proportion of human input torque and the total human torque of the right and left wheels. Experiments on practical roads show the effectiveness of the proposed control system.

  16. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology for processing C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of the current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated against a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time.
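
    The attenuation-correction step can be illustrated with a gate-by-gate scheme of the Hitschfeld-Bordan type, where specific attenuation is estimated from the (already corrected) reflectivity and accumulated two-way along the ray. The coefficients below are placeholders for the systematically varied values the paper describes:

```python
import numpy as np

def correct_attenuation(dbz, gate_km=1.0, a=3.3e-5, b=0.79):
    """Gate-by-gate attenuation correction along one ray of dBZ values:
    specific attenuation k = a * Z^b [dB/km] from the linear reflectivity
    Z, accumulated two-way from the radar outward."""
    corrected = np.empty(len(dbz), float)
    path = 0.0                      # two-way path-integrated attenuation, dB
    for i, z in enumerate(dbz):
        corrected[i] = z + path     # add attenuation accumulated so far
        z_lin = 10.0 ** (corrected[i] / 10.0)
        path += 2.0 * a * z_lin**b * gate_km
    return corrected
```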

  17. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    DOE PAGES

    Newman, Jennifer F.; Clifton, Andrew

    2017-02-10

    Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine

  18. An improved algorithm of three B-spline curve interpolation and simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Xu, Dongmei; Meng, Xinhong; Zhang, Feng

    2017-03-01

    As a key interpolation technique in CNC machine tools, the three B-spline (cubic B-spline) curve interpolator has been proposed to overcome the drawbacks of linear and circular interpolators, such as longer interpolation times and step errors that are difficult to control. This paper proposes an improved algorithm for three B-spline curve interpolation and its simulation. The interpolation is implemented in MATLAB 7.0 to verify the proposed modified algorithm experimentally. The simulation results show that the algorithm is correct and meets the requirements of three B-spline curve interpolation.
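
    The core operation of such an interpolator is evaluating a point on a cubic B-spline span at a parameter value that the CNC steps forward each servo period. A minimal uniform cubic B-spline evaluation (generic textbook form, not the paper's improved algorithm) is:

```python
import numpy as np

def cubic_bspline_point(ctrl, u):
    """Evaluate one span of a uniform cubic B-spline: `ctrl` is a (4, d)
    array of control points and u is in [0, 1]. An interpolator would
    advance u by a feed-dependent increment each servo period."""
    M = np.array([[-1,  3, -3, 1],
                  [ 3, -6,  3, 0],
                  [-3,  0,  3, 0],
                  [ 1,  4,  1, 0]], float) / 6.0   # uniform cubic basis
    U = np.array([u**3, u**2, u, 1.0])
    return U @ M @ np.asarray(ctrl, float)

# Example: sample a planar span densely, as an interpolator would.
pts = np.array([[0, 0], [1, 2], [3, 3], [4, 0]], float)
path = np.array([cubic_bspline_point(pts, u) for u in np.linspace(0, 1, 50)])
```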

  19. An improved clustering algorithm of tunnel monitoring data for cloud computing.

    PubMed

    Zhong, Luo; Tang, KunHao; Li, Lin; Yang, Guang; Ye, JingJing

    2014-01-01

    With the rapid development of urban construction, the number of urban tunnels is increasing and the data they produce become more and more complex. As a result, traditional clustering algorithms cannot handle the mass data produced by tunnels. To solve this problem, an improved parallel clustering algorithm based on k-means has been proposed. It is a clustering algorithm that uses MapReduce within cloud computing to process the data. It not only has the advantage of handling mass data but is also more efficient. Moreover, it is able to compute the average dissimilarity degree of each cluster in order to clean out abnormal data.
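
    The MapReduce split of k-means falls out naturally: mappers assign points to centres and emit only partial sums, and the reducer merges them into new centres. The pair below is a single-process sketch of that structure, not the authors' cloud implementation:

```python
from collections import defaultdict
import numpy as np

def kmeans_map(chunk, centers):
    """Map step: assign each record in one worker's chunk to its nearest
    centre and emit k partial (sum, count) pairs."""
    partial = defaultdict(lambda: [0.0, 0])
    for x in chunk:
        j = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
        partial[j][0] += x
        partial[j][1] += 1
    return partial

def kmeans_reduce(partials, centers):
    """Reduce step: merge the per-worker partial sums into new centres."""
    new = centers.copy()
    for j in range(len(centers)):
        s = sum(p[j][0] for p in partials if j in p)
        n = sum(p[j][1] for p in partials if j in p)
        if n:
            new[j] = s / n
    return new

# Driver sketch: split the data into chunks (one per worker), run the map
# step on each, reduce, and repeat until the centres stop moving.
```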

  20. A novel reagent significantly improved assay robustness in imaged capillary isoelectric focusing.

    PubMed

    Zhang, Xin; Voronov, Sergey; Mussa, Nesredin; Li, Zhengjian

    2017-03-15

    Imaged capillary isoelectric focusing (icIEF) has been used as the primary method for charge variant analysis of therapeutic antibodies and proteins [1], [9]. Proteins tend to precipitate around their pI values during focusing [14], which directly affects the reproducibility of their charge profiles. Protein concentration, focusing time and various supplementing additives are key parameters for minimizing protein precipitation and aggregation. Urea and sucrose are common additives to reduce protein aggregation, solubilize proteins in the sample matrix and therefore improve assay repeatability [15]. However, some proteins and antibodies are exceptions: we found that urea and sucrose are not sufficient for a typical fusion protein (Fusion protein A) in the icIEF assay, and high variability is observed. We report a novel reagent, formamide, which significantly improved the reproducibility of protein charge profiles. Our results show formamide is a good supplementary reagent to reduce aggregation and stabilize proteins in isoelectric focusing. We further confirmed the method robustness, linearity, accuracy and precision after introducing the new reagent; extremely tight pI values, significantly improved method precision and sample on-board stability are achieved with formamide. Formamide is also proven to be as functional as urea for multiple antibodies, which makes it an extra tool in icIEF method development.

  1. A peptide-retrieval strategy enables significant improvement of quantitative performance without compromising confidence of identification.

    PubMed

    Tu, Chengjian; Shen, Shichen; Sheng, Quanhu; Shyr, Yu; Qu, Jun

    2017-01-30

    Reliable quantification of low-abundance proteins in complex proteomes is challenging, largely owing to the limited number of spectra/peptides identified. In this study we developed a straightforward method to improve the quantitative accuracy and precision of proteins by strategically retrieving the less confident peptides that were previously filtered out using the standard target-decoy search strategy. The filtered-out MS/MS spectra matched to confidently identified proteins were recovered, and the peptide-spectrum-match FDR was re-calculated and controlled at a confident level of FDR ≤ 1%, while protein FDR was maintained at ~1%. We evaluated the performance of this strategy in both spectral count- and ion current-based methods. A >60% increase in total quantified spectra/peptides was achieved for both a spike-in sample set and a public dataset from CPTAC. Incorporating the peptide retrieval strategy significantly improved the quantitative accuracy and precision, especially for low-abundance proteins (e.g. one-hit proteins). Moreover, the capacity for confidently discovering significantly altered proteins was also enhanced substantially, as demonstrated with two spike-in datasets. In summary, improved quantitative performance was achieved by this peptide recovery strategy without compromising confidence of protein identification, which can be readily implemented in a broad range of quantitative proteomics techniques including label-free and labeling approaches.
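
    The re-calculation of the peptide-spectrum-match FDR over the recovered set follows the usual target-decoy estimate. The sketch below is our own illustration (a higher-is-better score convention is assumed); it finds the score threshold that keeps the recovered PSMs at a chosen FDR:

```python
import numpy as np

def psm_fdr_threshold(scores, is_decoy, keep_fdr=0.01):
    """Target-decoy FDR control: at a score cut s, estimate
    FDR ~ (#decoys >= s) / (#targets >= s); return the lowest score
    threshold keeping FDR <= keep_fdr, or None if none qualifies."""
    scores = np.asarray(scores, float)
    is_decoy = np.asarray(is_decoy, bool)
    order = np.argsort(scores)[::-1]            # best score first
    decoys = np.cumsum(is_decoy[order])
    targets = np.cumsum(~is_decoy[order])
    fdr = decoys / np.maximum(targets, 1)
    ok = np.nonzero(fdr <= keep_fdr)[0]
    return scores[order][ok[-1]] if ok.size else None
```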

  2. Pretreatment of bovine sperm with dithiobutylamine (DTBA) significantly improves embryo development after ICSI

    PubMed Central

    SUTTIROJPATTANA, Tayita; SOMFAI, Tamas; MATOBA, Satoko; NAGAI, Takashi; PARNPAI, Rangsun; GESHI, Masaya

    2016-01-01

    We assessed the effect of pretreating sperm with dithiobutylamine (DTBA) to improve embryo development by intracytoplasmic sperm injection (ICSI) in cows. Acridine Orange staining revealed that when applied at different concentrations (2.5, 5, and 10 mM) and exposure times (5 min, 20 min, 1 h, and 2 h), DTBA reduced disulfide bonds in spermatozoa with the highest efficacy at 5 mM for 5 min. DTBA enhanced the percentage of spermatozoa with free protamine thiol groups compared with untreated spermatozoa (control) (P < 0.05); however, this result did not differ from that of dithiothreitol (DTT) treatment. The percentage of live spermatozoa after DTBA treatment was identical to that in the control, but significantly higher than that after DTT treatment (P < 0.05). After ICSI, DTBA treatment tended to improve male pronuclear formation rate (P = 0.071) compared with non-treated sperm injection. Blastocyst formation rate was significantly improved by DTBA treatment compared with that in DTT, control, and sham injection groups (P < 0.05). Blastocyst quality in terms of cell numbers and ploidy was not different among these groups. In conclusion, DTBA increases the efficacy of blastocyst production by ICSI even if DTT treatment does not work. PMID:27523189

  3. PXD101 significantly improves nuclear reprogramming and the in vitro developmental competence of porcine SCNT embryos

    SciTech Connect

    Jin, Jun-Xue; Kang, Jin-Dan; Li, Suo; Jin, Long; Zhu, Hai-Ying; Guo, Qing; Gao, Qing-Shan; Yan, Chang-Guo; Yin, Xi-Jun

    2015-01-02

    Highlights: • First exploration of the effects of PXD101 on the development of SCNT embryos in vitro. • Treatment with 0.5 μM PXD101 for 24 h improved the development of porcine SCNT embryos. • The level of AcH3K9 was significantly higher than in the control group at early stages. - Abstract: In this study, we investigated the effects of the histone deacetylase inhibitor PXD101 (belinostat) on the preimplantation development of porcine somatic cell nuclear transfer (SCNT) embryos and their expression of the epigenetic marker histone H3 acetylated at lysine 9 (AcH3K9). We compared the in vitro developmental competence of SCNT embryos treated with various concentrations of PXD101 for 24 h. Treatment with 0.5 μM PXD101 significantly increased the proportion of SCNT embryos that reached the blastocyst stage, in comparison to the control group (23.3% vs. 11.5%, P < 0.05). We tested the in vitro developmental competence of SCNT embryos treated with 0.5 μM PXD101 for various amounts of time following activation. Treatment for 24 h significantly improved the development of porcine SCNT embryos, with a significantly higher proportion of embryos reaching the blastocyst stage in comparison to the control group (25.7% vs. 10.6%, P < 0.05). PXD101-treated SCNT embryos were transferred into two surrogate sows, one of which became pregnant, and four fetuses developed. PXD101 treatment significantly increased the fluorescence intensity of immunostaining for AcH3K9 in embryos at the pseudo-pronuclear and 2-cell stages. At these stages, the fluorescence intensities of immunostaining for AcH3K9 were significantly higher in PXD101-treated embryos than in control untreated embryos. In conclusion, this study demonstrates that PXD101 can significantly improve the in vitro and in vivo developmental competence of porcine SCNT embryos and can enhance their nuclear reprogramming.

  4. Improved particle swarm optimization algorithm for android medical care IOT using modified parameters.

    PubMed

    Sung, Wen-Tsai; Chiang, Yen-Chun

    2012-12-01

    This study examines a wireless sensor network with real-time remote identification using the Android-based HCIOT platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the measurement precision of physiological multi-sensor data fusion in the Internet of Things (IOT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage factor adjustment to improve the data fusion performance of the PSO algorithm. The Android platform is employed to build multi-physiological-signal processing and timely medical care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to have timely medical care network services.

  5. Improving Significant Wave Height detection for Coastal Satellite Altimetry: validation in the German Bight.

    NASA Astrophysics Data System (ADS)

    Passaro, Marcello; Benveniste, Jérôme; Cipollini, Paolo; Fenoglio-Marc, Luciana

    For more than two decades, it has been possible to map the Significant Wave Height (SWH) globally through Satellite Altimetry. SWH estimation is possible because the shape of an altimetric waveform, which usually presents a sharp leading edge and a slowly decaying trailing edge, depends on the sea state: in particular, the higher the sea state, the longer the rising time of the leading edge. The algorithm for SWH also depends on the width of the point target response (PTR) function, which is usually approximated by a constant value that contributes to the rising time. Particularly challenging for SWH detection are coastal data and low sea states. The former are usually flagged as unreliable due to land and calm water interference in the altimeter footprint; the latter are characterized by an extremely sharp leading edge that is consequently poorly sampled in the digitized waveform. ALES, a new algorithm for reprocessing altimetric waveforms, has recently been validated for sea surface height estimation (Passaro et al. 2014). The aim of this work is to check its validity also for SWH estimation in a particularly challenging area. The German Bight region presents both low sea state and coastal issues and is particularly suitable for validation, thanks to the extended network of buoys of the Bundesamt für Seeschifffahrt und Hydrographie (BSH). In-situ data include open sea, off-shore and coastal sea conditions, respectively at the Helgoland, lighthouse Alte Weser and Westerland locations. Reprocessed data from Envisat, Jason-1 and Jason-2 tracks are validated against those three buoys. The in-situ validation is applied both at the nearest point and at points along-track. The skill metric is based on bias, standard deviation, slope of the regression line, scatter index, and the number of cycles with correlation larger than 90%. The same metric is applied to the altimeter data obtained by standard processing, and the validation results are compared. Data are evaluated at high

  6. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  7. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
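
    Both copies of this record describe a termination condition based on an attenuation coefficient. A plain single-atom matching pursuit with such a stop can be sketched as follows; defining the attenuation coefficient as the relative residual-energy drop per iteration is our assumption, and the dictionary is taken to hold unit-norm atoms as columns:

```python
import numpy as np

def matching_pursuit(signal, dictionary, atten_thresh=0.05, max_iter=200):
    """Single-atom matching pursuit: at each step pick the atom most
    correlated with the residual, subtract its contribution, and stop
    once the relative energy removed per step falls below the
    attenuation threshold."""
    r = np.asarray(signal, float).copy()
    coeffs, picked = [], []
    for _ in range(max_iter):
        corr = dictionary.T @ r
        k = int(np.argmax(np.abs(corr)))
        e_before = r @ r
        r = r - corr[k] * dictionary[:, k]
        coeffs.append(corr[k])
        picked.append(k)
        # Attenuation coefficient: fraction of residual energy removed.
        if 1.0 - (r @ r) / e_before < atten_thresh:
            break
    return picked, coeffs, r
```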

  8. Experimental verification of an interpolation algorithm for improved estimates of animal position.

    PubMed

    Schell, Chad; Jaffe, Jules S

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.

  9. Application of Innovative Hemocytometric Parameters and Algorithms for Improvement of Microcytic Anemia Discrimination.

    PubMed

    Schoorl, Margreet; Schoorl, Marianne; van Pelt, Johannes; Bartels, Piet C M

    2015-06-03

    Hemocytometric parameters like red blood cell (RBC) count, mean red blood cell volume (MCV), reticulocyte count, red blood cell distribution width (RDW-SD) and zinc protoporphyrin (ZPP) are frequently established for discrimination between iron-deficiency anemia and thalassemia in subjects with microcytic erythropoiesis. However, no single marker or combination of tests is optimal for this discrimination, which is why many algorithms have been introduced. Application of conventional algorithms, however, resulted in appropriate classification of only 30-40% of subjects. This mini-review considers the efficacy of innovative hematological parameters for detecting alterations in RBCs, in particular parameters concerning the hemoglobinization of RBCs and reticulocytes and the percentages of microcytic and hypochromic RBCs, for discriminating between subjects with iron-deficiency anemia (IDA), thalassemia, or a combination of both. A new discriminating tool including the above-mentioned parameters was developed, based on two precondition steps and discriminating algorithms. The percentage of microcytic RBCs is considered in the first precondition step; MCV, RDW-SD and RBC count are applied in the second. Subsequently, new algorithms, including conventional as well as innovative hematological parameters, were assessed for subgroups with microcytic erythropoiesis. The new algorithms for IDA discrimination yielded a sensitivity of 79%, specificity of 97%, and positive and negative predictive values of 74% and 98%, respectively. The algorithms for β-thalassemia discrimination revealed similar results (74%, 98%, 75% and 99%, respectively). We advocate that innovative algorithms, including parameters reflecting the hemoglobinization of RBCs and reticulocytes, be integrated in an easily accessible software program linked to the hematology equipment to improve the discrimination between IDA and
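
    To make the two-precondition-step structure concrete, here is a minimal Python sketch of the decision flow; every threshold below is a hypothetical placeholder for illustration and does not reproduce the published tool's validated cut-off values.

    def classify_microcytic_anemia(pct_microcytic, mcv, rdw_sd, rbc, ret_hb=None):
        # Precondition step 1: percentage of microcytic RBCs (cut-off assumed)
        if pct_microcytic < 10.0:
            return "no relevant microcytic erythropoiesis"
        # Precondition step 2: MCV, RDW-SD and RBC count (cut-offs assumed)
        if mcv < 75 and rbc > 5.0 and rdw_sd < 42:
            pattern = "thalassemia-like pattern"
        else:
            pattern = "IDA-like pattern"
        # The discriminating algorithms would additionally weigh innovative
        # parameters, e.g. reticulocyte hemoglobinization (cut-off assumed).
        if ret_hb is not None and ret_hb < 25:
            pattern += " (hypochromic reticulocytes support IDA)"
        return pattern

    print(classify_microcytic_anemia(22.0, mcv=68, rdw_sd=39, rbc=5.6))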

  10. Application of Innovative Hemocytometric Parameters and Algorithms for Improvement of Microcytic Anemia Discrimination

    PubMed Central

    Schoorl, Margreet; Schoorl, Marianne; van Pelt, Johannes; Bartels, Piet C.M.

    2015-01-01

    Hemocytometric parameters like red blood cell (RBC) count, mean red blood cell volume (MCV), reticulocyte count, red blood cell distribution width (RDW-SD) and zinc protoporphyrin (ZPP) are frequently established for discrimination between iron-deficiency anemia and thalassemia in subjects with microcytic erythropoiesis. However, no single marker or combination of tests is optimal for this discrimination, which is why many algorithms have been introduced. Application of conventional algorithms, however, resulted in appropriate classification of only 30-40% of subjects. This mini-review considers the efficacy of innovative hematological parameters for detecting alterations in RBCs, in particular parameters concerning the hemoglobinization of RBCs and reticulocytes and the percentages of microcytic and hypochromic RBCs, for discriminating between subjects with iron-deficiency anemia (IDA), thalassemia, or a combination of both. A new discriminating tool including the above-mentioned parameters was developed, based on two precondition steps and discriminating algorithms. The percentage of microcytic RBCs is considered in the first precondition step; MCV, RDW-SD and RBC count are applied in the second. Subsequently, new algorithms, including conventional as well as innovative hematological parameters, were assessed for subgroups with microcytic erythropoiesis. The new algorithms for IDA discrimination yielded a sensitivity of 79%, specificity of 97%, and positive and negative predictive values of 74% and 98%, respectively. The algorithms for β-thalassemia discrimination revealed similar results (74%, 98%, 75% and 99%, respectively). We advocate that innovative algorithms, including parameters reflecting the hemoglobinization of RBCs and reticulocytes, be integrated in an easily accessible software program linked to the hematology equipment to improve the discrimination between IDA and

  11. Significant Improvement in Survival after Unrelated Donor Hematopoietic Cell Transplantation in the Recent Era

    PubMed Central

    Majhail, Navneet S; Chitphakdithai, Pintip; Logan, Brent; King, Roberta; Devine, Steven; Rossmann, Susan N; Hale, Gregory; Hartzman, Robert J; Karanes, Chatchada; Laport, Ginna G; Nemecek, Eneida; Snyder, Edward L; Switzer, Galen E; Miller, John; Navarro, Willis; Confer, Dennis L; Levine, John E

    2014-01-01

    Patients and physicians may defer unrelated donor hematopoietic cell transplantation (HCT) as curative therapy due to the mortality risk associated with the procedure. Therefore, it is important for physicians to know the current outcomes data when counseling potential candidates. To provide this information, we evaluated 15,059 unrelated donor HCT recipients transplanted between 2000 and 2009. We compared outcomes before and after 2005 for four cohorts: age <18 years with malignant diseases (N=1,920), 18-59 years with malignant diseases (N=9,575), ≥60 years with malignant diseases (N=2,194), and non-malignant diseases (N=1,370). Three-year overall survival in 2005-2009 was significantly better in all four cohorts (<18 years: 55% vs. 45%, 18-59 years: 42% vs. 35%, ≥60 years: 35% vs. 25%, non-malignant diseases: 69% vs. 60%, P<0.001 for all comparisons). Multivariate analyses of leukemia patients receiving HLA 7-8/8 matched transplants showed significant reductions in overall and non-relapse mortality in the first year after HCT among patients transplanted in 2005-2009; however, risks for relapse did not change over time. Significant survival improvements after unrelated donor HCT have occurred over the recent decade and can be partly explained by better patient selection (e.g., HCT earlier in the disease course and lower disease risk), improved donor selection (e.g., more precise allele-level matched unrelated donors) and changes in transplant practices. PMID:25445638

  12. Reasons why current speech-enhancement algorithms do not improve speech intelligibility and suggested solutions.

    PubMed

    Loizou, Philipos C; Kim, Gibak

    2011-01-01

    Existing speech enhancement algorithms can improve speech quality but not speech intelligibility, and the reasons for that are unclear. In the present paper, we present a theoretical framework that can be used to analyze potential factors that can influence the intelligibility of processed speech. More specifically, this framework focuses on the fine-grain analysis of the distortions introduced by speech enhancement algorithms. It is hypothesized that if these distortions are properly controlled, then large gains in intelligibility can be achieved. To test this hypothesis, intelligibility tests were conducted with human listeners in which we presented processed speech with controlled speech distortions. The aim of these tests was to assess the perceptual effect of the various distortions that can be introduced by speech enhancement algorithms on speech intelligibility. Results with three different enhancement algorithms indicated that certain distortions are more detrimental to speech intelligibility than others. When these distortions were properly controlled, however, large gains in intelligibility were obtained by human listeners, even with spectral-subtractive algorithms, which are known to degrade speech quality and intelligibility.

  13. Improvements to a five-phase ABS algorithm for experimental validation

    NASA Astrophysics Data System (ADS)

    Gerard, Mathieu; Pasillas-Lépine, William; de Vries, Edwin; Verhaegen, Michel

    2012-10-01

    The anti-lock braking system (ABS) is the most important active safety system for passenger cars. Unfortunately, the literature is not very precise about its description, stability and performance. This research improves a five-phase hybrid ABS control algorithm based on wheel deceleration [W. Pasillas-Lépine, Hybrid modeling and limit cycle analysis for a class of five-phase anti-lock brake algorithms, Veh. Syst. Dyn. 44 (2006), pp. 173-188] and validates it on a tyre-in-the-loop laboratory facility. Five relevant effects are modelled so that the simulation matches reality: oscillations in measurements, wheel acceleration reconstruction, brake pressure dynamics, changes in brake efficiency and tyre relaxation. The time delays in measurement and actuation were identified as the main obstacle to making the initial algorithm work in practice, and three methods are proposed to deal with these delays. It is verified that the ABS limit cycles encircle the optimal braking point without assuming that any tyre parameter is known a priori. The ABS algorithm is compared with the commercial algorithm developed by Bosch.

  14. Intestinal-borne dermatoses significantly improved by oral application of Escherichia coli Nissle 1917

    PubMed Central

    Manzhalii, Elina; Hornuss, Daniel; Stremmel, Wolfgang

    2016-01-01

    AIM: To evaluate the effect of oral Escherichia coli (E. coli) Nissle application on the outcome of intestinal-borne dermatoses. METHODS: In a randomized, controlled, non-blinded prospective clinical trial, 82 patients with intestinal-borne facial dermatoses characterized by an erythematous papular-pustular rash were screened. At the initiation visit 37 patients entered the experimental arm and 20 patients constituted the control arm. All 57 patients were treated with a vegetarian diet and conventional topical therapy of the dermatoses with ointments containing tetracycline, steroids and retinoids. In the experimental arm, patients received one month of therapy with oral E. coli Nissle at a maintenance dose of 2 capsules daily. The experimental group was compared to a non-treatment group receiving only the diet and topical therapy. The primary outcome parameter was improvement of the dermatoses; secondary parameters included quality of life and adverse events. In addition, the immunological reaction profile (IgA, interleukin-8 and interferon-α) was determined. Furthermore, changes in stool consistency and microbiota composition over the time of intervention were recorded. RESULTS: Eighty-nine percent of the patients with acne, papular-pustular rosacea and seborrhoic dermatitis responded to E. coli Nissle therapy with significant amelioration or complete recovery, in contrast to 56% in the control arm (P < 0.01). Accordingly, in the E. coli Nissle treated patients quality of life improved significantly (P < 0.01), and adverse events were not recorded. The clinical improvement was associated with a significant increase of IgA levels to normal values in serum as well as suppression of the proinflammatory cytokine IL-8 (P < 0.01 for both parameters). In the E. coli Nissle treated group a shift towards a protective microbiota with predominance of bifidobacteria and lactobacteria (> 10^7 CFU/g stool) was observed in 79% and 63% of the patients, respectively (P < 0

  15. An improved SIFT algorithm in the application of close-range Stereo image matching

    NASA Astrophysics Data System (ADS)

    Zhang, Xuehua; Wang, Xiaoqing; Yuan, Xiaoxiang; Wang, Shumin

    2016-11-01

    As unmanned aerial vehicle (UAV) remote sensing is applied to small-area aerial photogrammetric surveying, disaster monitoring and emergency command, 3D urban construction and other fields, the processing of UAV images has become a hot topic in current research. Precise matching of UAV images is a key problem that directly affects the precision of subsequent processing, such as 3D reconstruction and automatic aerial triangulation. At present, the SIFT (Scale Invariant Feature Transform) algorithm proposed by David G. Lowe is widely used as the main image matching method because of its strong stability under image rotation, shift, scaling, and changes in illumination conditions; it has been successfully applied to target recognition, SFM (Structure from Motion), and many other fields. The SIFT algorithm requires colour images to be converted into grayscale images, detects extremum points at different scales and uses neighbourhood pixels to generate descriptors. Since UAV images carry rich colour information, this paper improves the SIFT algorithm by incorporating that colour information, and experiments compare matching efficiency and accuracy against the original SIFT algorithm. The results show that the proposed method loses some efficiency but improves precision, providing a basis for choosing a matching method.
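
    One plausible reading of the colour extension, sketched in Python with OpenCV: keypoints are detected on the grayscale image, while descriptors are computed on each colour channel and concatenated. This is an assumed variant for illustration (the paper's exact colour weighting may differ), and the file names are placeholders.

    import cv2
    import numpy as np

    def color_sift(image_bgr):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        sift = cv2.SIFT_create()                 # requires OpenCV >= 4.4
        keypoints = sift.detect(gray, None)
        descriptors = []
        for channel in cv2.split(image_bgr):     # B, G, R
            _, d = sift.compute(channel, keypoints)
            descriptors.append(d)
        # One 3 x 128-dimensional descriptor per keypoint
        return keypoints, np.hstack(descriptors)

    img1 = cv2.imread("uav_frame_a.jpg")         # placeholder file names
    img2 = cv2.imread("uav_frame_b.jpg")
    kp1, d1 = color_sift(img1)
    kp2, d2 = color_sift(img2)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    print(len(good), "matches after ratio test")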

  16. The optical synthetic aperture image restoration based on the improved maximum-likelihood algorithm

    NASA Astrophysics Data System (ADS)

    Geng, Zexun; Xu, Qing; Zhang, Baoming; Gong, Zhihui

    2012-09-01

    Optical synthetic aperture imaging (OSAI) can be envisaged in the future for improving image resolution from high-altitude orbits. Several future projects are based on optical synthetic apertures for science or Earth observation. Compared with equivalent monolithic telescopes, however, the partly filled aperture of OSAI attenuates the modulation transfer function of the system. Consequently, images acquired by an OSAI instrument have to be post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performed better than the traditional Wiener filter, but it did not work stably, and the point spread function (PSF) was assumed to be known and unchanged during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate should be updated alternately during optimization. To address these limitations, an improved ML (IML) reconstruction algorithm is proposed in this paper, which incorporates PSF estimation by means of parameter identification into ML and updates the PSF successively during iteration. Accordingly, the IML algorithm converges stably and achieves better results. Experimental results showed that the proposed algorithm performed much better than ML in terms of peak signal-to-noise ratio, mean square error and the average contrast evaluation index.
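
    The alternate-update idea can be illustrated with a blind Richardson-Lucy scheme (the classic ML deconvolution under Poisson statistics), in which image and PSF are refined in turn; this is a generic sketch of the approach, not the IML algorithm itself, and all parameters are assumed.

    import numpy as np
    from scipy.signal import fftconvolve

    def rl_step(estimate, other, data):
        """One multiplicative ML (Richardson-Lucy) update of `estimate`."""
        conv = fftconvolve(estimate, other, mode="same")
        ratio = data / np.maximum(conv, 1e-12)
        return estimate * fftconvolve(ratio, other[::-1, ::-1], mode="same")

    def blind_ml_restore(data, n_outer=20, n_inner=5):
        image = np.full_like(data, float(data.mean()))
        psf = np.zeros_like(data)                     # PSF held at image size
        c0, c1 = data.shape[0] // 2, data.shape[1] // 2
        psf[c0 - 7:c0 + 8, c1 - 7:c1 + 8] = 1.0       # flat 15x15 initial guess
        psf /= psf.sum()
        for _ in range(n_outer):
            for _ in range(n_inner):                  # image update, PSF fixed
                image = rl_step(image, psf, data)
            for _ in range(n_inner):                  # PSF update, image fixed
                psf = rl_step(psf, image, data)
            psf /= psf.sum()                          # keep the PSF normalized
        return image, psf

    g = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2)
    kernel = np.outer(g, g); kernel /= kernel.sum()
    truth = np.zeros((64, 64)); truth[20:25, 30:35] = 10.0
    blurred = np.clip(fftconvolve(truth, kernel, mode="same"), 1e-6, None)
    restored, est_psf = blind_ml_restore(blurred)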

  17. Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas

    NASA Technical Reports Server (NTRS)

    Smith, Barbara M.; Bennett, Sean

    1992-01-01

    A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
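
    A minimal Python sketch of the "local improvement of a feasible solution" idea: greedily try to place each unassigned task, allowing one repair move of an existing assignment, and keep only changes that preserve feasibility. The data layout and the feasible() predicate are assumptions for illustration, not the hospital system's actual constraint model.

    def try_place(t, staff, assign, feasible):
        """Attempt to assign task t, optionally moving one existing task."""
        for s in staff:
            assign[t] = s
            if feasible(assign):
                return True
            # One-step repair: move one of s's tasks to another colleague
            for t2 in [k for k, v in assign.items() if v == s and k != t]:
                for s2 in staff:
                    if s2 == s:
                        continue
                    assign[t2] = s2
                    if feasible(assign):
                        return True
                    assign[t2] = s                  # undo the repair attempt
            assign[t] = None                        # undo the placement attempt
        return False

    def local_improve(tasks, staff, assign, feasible):
        """assign maps task -> staff member or None; improved in place."""
        changed = True
        while changed:
            changed = False
            for t in [t for t in tasks if assign[t] is None]:
                changed |= try_place(t, staff, assign, feasible)
        return assign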

  18. Improvement and further development of SSM/I overland parameter algorithms using the WetNet workstation

    NASA Technical Reports Server (NTRS)

    Neale, Christopher M. U.; Mcdonnell, Jeffrey J.; Ramsey, Douglas; Hipps, Lawrence; Tarboton, David

    1993-01-01

    Since the launch of the DMSP Special Sensor Microwave/Imager (SSM/I), several algorithms have been developed to retrieve overland parameters. These include the present operational algorithms resulting from the Navy calibration/validation effort, such as land surface type (Neale et al. 1990), land surface temperature (McFarland et al. 1990), surface moisture (McFarland and Neale, 1991) and snow parameters (McFarland and Neale, 1991). In addition, other work has been done, including the classification of snow cover and precipitation using the SSM/I (Grody, 1991). Due to the empirical nature of most of the above-mentioned algorithms, further research is warranted, and improvements can probably be obtained through a combination of radiative transfer modelling, to study the physical processes governing microwave emission at the SSM/I frequencies, and the incorporation of additional ground truth data and special cases into the regression data sets. We have proposed specifically to improve the retrieval of surface moisture and snow parameters using the WetNet SSM/I data sets along with ground truth information, namely climatic variables from the NOAA cooperative network of weather stations as well as imagery from other satellite sensors such as the AVHRR and Thematic Mapper. For surface moisture retrievals, the characterization of vegetation density is of primary concern. Higher-spatial-resolution satellite imagery collected at concurrent periods will be used to characterize vegetation types and amounts which, along with radiative transfer modelling, should lead to more physically based retrievals. Snow parameter retrieval algorithm improvement will initially concentrate on the classification of snowpacks (dry snow, wet snow, refrozen snow) and later on specific products such as snow water equivalent. Significant accomplishments in the past year are presented.

  19. Validation and Improvement of CERES Surface Radiation Budget Algorithms: Extension of Dusty and Cloudy Scenes

    NASA Technical Reports Server (NTRS)

    Ramanathan, V.; Inamdar, Anand K.

    2005-01-01

    Our main task was to validate and improve the generation of surface long wave fluxes from the CERES TOA window channel flux measurements. We completed this task successfully for the clear sky fluxes in the presence of aerosols including dust during the first year of the project. The algorithm we developed for CERES was remarkably successful for clear sky fluxes and we have no further tasks that need to be performed past the requested termination date of December 31, 2004. We found that the information contained in the TOA fluxes was not sufficient to improve upon the current CERES algorithm for cloudy sky fluxes. Given this development and given our success in clear sky fluxes, we do not see any reason to continue our validation work beyond what we have completed. Specific details are given.

  20. Improved evolutionary algorithm for the global optimization of clusters with competing attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Cruz, S. M. A.; Marques, J. M. C.; Pereira, F. B.

    2016-10-01

    We propose improvements to our evolutionary algorithm (EA) [J. M. C. Marques and F. B. Pereira, J. Mol. Liq. 210, 51 (2015)] in order to avoid dissociative solutions in the global optimization of clusters with competing attractive and repulsive interactions. The improved EA outperforms the original version of the method for charged colloidal clusters in the size range 3 ≤ N ≤ 25, which is a very stringent test for global optimization algorithms. While the Bernal spiral is the global minimum for clusters in the interval 13 ≤ N ≤ 18, the lowest-energy structure is a peculiar, so-called beaded-necklace motif for 19 ≤ N ≤ 25. We have also applied the method to larger sizes, for which unusual quasi-linear and branched clusters arise as low-energy structures.

  1. Significant Improvements in the Practice Patterns of Adult Related Donor Care in US Transplantation Centers.

    PubMed

    Anthias, Chloe; Shaw, Bronwen E; Kiefer, Deidre M; Liesveld, Jane L; Yared, Jean; Kamble, Rammurti T; D'Souza, Anita; Hematti, Peiman; Seftel, Matthew D; Norkin, Maxim; DeFilipp, Zachariah; Kasow, Kimberly A; Abidi, Muneer H; Savani, Bipin N; Shah, Nirali N; Anderlini, Paolo; Diaz, Miguel A; Malone, Adriana K; Halter, Joerg P; Lazarus, Hillard M; Logan, Brent R; Switzer, Galen E; Pulsipher, Michael A; Confer, Dennis L; O'Donnell, Paul V

    2016-03-01

    Recent investigations have found a higher incidence of adverse events associated with hematopoietic cell donation in related donors (RDs) who have morbidities that if present in an unrelated donor (UD) would preclude donation. In the UD setting, regulatory standards ensure independent assessment of donors, one of several crucial measures to safeguard donor health and safety. A survey conducted by the Center for International Blood and Marrow Transplant Research (CIBMTR) Donor Health and Safety Working Committee in 2007 reported a potential conflict of interest in >70% of US centers, where physicians had simultaneous responsibility for RDs and their recipients. Consequently, several international organizations have endeavored to improve practice through regulations and consensus recommendations. We hypothesized that the changes in the 2012 Foundation for the Accreditation of Cellular Therapy and the Joint Accreditation Committee-International Society for Cellular Therapy and European Society for Blood and Marrow Transplantation standards resulting from the CIBMTR study would have significantly impacted practice. Accordingly, we conducted a follow-up survey of US transplantation centers to assess practice changes since 2007, and to investigate additional areas where RD care was predicted to differ from UD care. A total of 73 centers (53%), performing 79% of RD transplantations in the United States, responded. Significant improvements were observed since the earlier survey; 62% centers now ensure separation of RD and recipient care (P < .0001). This study identifies several areas where RD management does not meet international donor care standards, however. Particular concerns include counseling and assessment of donors before HLA typing, with 61% centers first disclosing donor HLA results to an individual other than the donor, the use of unlicensed mobilization agents, and the absence of long-term donor follow-up. Recommendations for improvement are made.

  2. Significant improvements in the practice patterns of adult related donor care in US transplant centers

    PubMed Central

    Anthias, Chloe; Shaw, Bronwen E; Kiefer, Deidre M; Liesveld, Jane L; Yared, Jean; Kamble, Rammurti T; D'Souza, Anita; Hematti, Peiman; Seftel, Matthew D; Norkin, Maxim; DeFilipp, Zachariah M; Kasow, Kimberly A; Abidi, Muneer H; Savani, Bipin N; Shah, Nirali N; Anderlini, Paolo; Diaz, Miguel A; Malone, Adriana K; Halter, Joerg P; Lazarus, Hillard M; Logan, Brent R; Switzer, Galen E; Pulsipher, Michael A; Confer, Dennis L; O'Donnell, Paul V

    2016-01-01

    Recent investigations have found a higher incidence of adverse events associated with hematopoietic cell donation in related donors (RDs) who have morbidities that, if present in an unrelated donor (UD), would preclude donation. In the UD setting, regulatory standards ensure independent assessment of donors, one of several crucial measures to safeguard donor health and safety. A survey conducted by the Center for International Blood and Marrow Transplant Research (CIBMTR) Donor Health and Safety Working Committee in 2007 reported a potential conflict of interest in >70% of US centers, where physicians had simultaneous responsibility for RDs and their recipients. Consequently, several international organizations have endeavored to improve practice through regulations and consensus recommendations. We hypothesized that the changes in the 2012 FACT-JACIE Standards, resulting from the CIBMTR study, would have significantly impacted practice. Accordingly, a follow-up survey of US transplant centers was conducted to assess practice changes since 2007, and to investigate additional areas where RD care was predicted to differ from UD care. Seventy-three centers (53%), performing 79% of US RD transplants, responded. Significant improvements were observed since the earlier survey; 62% of centers now ensure separation of RD and recipient care (P<0.0001). However, this study identifies several areas where RD management does not meet international donor care standards. Particular concerns include counseling and assessment of donors before HLA typing, with 61% of centers first disclosing donor HLA results to an individual other than the donor, the use of unlicensed mobilization agents, and the absence of long-term donor follow-up. Recommendations for improvement are described. PMID:26597080

  3. Improvement and Refinement of the GPS/MET Data Analysis Algorithm

    NASA Technical Reports Server (NTRS)

    Herman, Benjamin M.

    2003-01-01

    The GPS/MET project was a satellite-to-satellite active microwave atmospheric limb sounder using the Global Positioning System transmitters as signal sources. Despite its remarkable success, GPS/MET could not independently sense atmospheric water vapor and ozone. Additionally, the GPS/MET data retrieval algorithm needed further improvement and refinement to enhance retrieval accuracy in the lower troposphere and the upper stratosphere. The objectives of this proposal were to address these three problem areas.

  4. On Improved Exact Algorithms for L(2,1)-Labeling of Graphs

    NASA Astrophysics Data System (ADS)

    Junosza-Szaniawski, Konstanty; Rzążewski, Paweł

    L(2,1)-labeling is a graph labeling model in which adjacent vertices receive labels that differ by at least 2 and vertices at distance 2 receive different labels. In this paper we present an algorithm for finding an optimal L(2,1)-labeling (i.e., an L(2,1)-labeling in which the largest label is as small as possible) of a graph with time complexity O*(3.5616^n), which improves the previous best result of O*(3.8739^n).
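
    As a concrete baseline, a naive Python backtracking search for an optimal L(2,1)-labeling is sketched below; the cited algorithms achieve far better exponential bounds, so this is only to make the problem definition explicit.

    def l21_labeling(adj, max_label):
        """adj: dict vertex -> set of neighbours; returns a labeling or None."""
        vertices = list(adj)
        labels = {}

        def ok(v, lab):
            for u in adj[v]:                       # distance 1: differ by >= 2
                if u in labels and abs(labels[u] - lab) < 2:
                    return False
            for u in adj[v]:                       # distance 2: must differ
                for w in adj[u]:
                    if w != v and w in labels and labels[w] == lab:
                        return False
            return True

        def backtrack(i):
            if i == len(vertices):
                return True
            v = vertices[i]
            for lab in range(max_label + 1):
                if ok(v, lab):
                    labels[v] = lab
                    if backtrack(i + 1):
                        return True
                    del labels[v]
            return False

        return dict(labels) if backtrack(0) else None

    def optimal_l21(adj):
        k = 0
        while True:                  # smallest k admitting a valid labeling
            lab = l21_labeling(adj, k)
            if lab is not None:
                return k, lab
            k += 1

    c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    print(optimal_l21(c5))           # the 5-cycle needs largest label 4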

  5. Improved sampling and validation of frozen Gaussian approximation with surface hopping algorithm for nonadiabatic dynamics

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Zhou, Zhennan

    2016-09-01

    In the spirit of the fewest switches surface hopping, the frozen Gaussian approximation with surface hopping (FGA-SH) method samples a path integral representation of the non-adiabatic dynamics in the semiclassical regime. An improved sampling scheme is developed in this work for FGA-SH based on birth and death branching processes. The algorithm is validated for the standard test examples of non-adiabatic dynamics.

  6. Carfilzomib significantly improves the progression-free survival of high-risk patients in multiple myeloma

    PubMed Central

    Fonseca, Rafael; Siegel, David; Dimopoulos, Meletios A.; Špička, Ivan; Masszi, Tamás; Hájek, Roman; Rosiñol, Laura; Goranova-Marinova, Vesselina; Mihaylov, Georgi; Maisnar, Vladimír; Mateos, Maria-Victoria; Wang, Michael; Niesvizky, Ruben; Oriol, Albert; Jakubowiak, Andrzej; Minarik, Jiri; Palumbo, Antonio; Bensinger, William; Kukreti, Vishal; Ben-Yehuda, Dina; Stewart, A. Keith; Obreja, Mihaela; Moreau, Philippe

    2016-01-01

    The presence of certain high-risk cytogenetic abnormalities, such as translocations (4;14) and (14;16) and deletion (17p), is known to have a negative impact on survival in multiple myeloma (MM). The phase 3 study ASPIRE (N = 792) demonstrated that progression-free survival (PFS) was significantly improved with carfilzomib, lenalidomide, and dexamethasone (KRd), compared with lenalidomide and dexamethasone (Rd) in relapsed MM. This preplanned subgroup analysis of ASPIRE was conducted to evaluate KRd vs Rd by baseline cytogenetics according to fluorescence in situ hybridization. Of 417 patients with known cytogenetic risk status, 100 patients (24%) were categorized with high-risk cytogenetics (KRd, n = 48; Rd, n = 52) and 317 (76%) were categorized with standard-risk cytogenetics (KRd, n = 147; Rd, n = 170). For patients with high-risk cytogenetics, treatment with KRd resulted in a median PFS of 23.1 months, a 9-month improvement relative to treatment with Rd. For patients with standard-risk cytogenetics, treatment with KRd led to a 10-month improvement in median PFS vs Rd. The overall response rates for KRd vs Rd were 79.2% vs 59.6% (high-risk cytogenetics) and 91.2% vs 73.5% (standard-risk cytogenetics); approximately fivefold as many patients with high- or standard-risk cytogenetics achieved a complete response or better with KRd vs Rd (29.2% vs 5.8% and 38.1% vs 6.5%, respectively). KRd improved but did not abrogate the poor prognosis associated with high-risk cytogenetics. This regimen had a favorable benefit-risk profile in patients with relapsed MM, irrespective of cytogenetic risk status, and should be considered a standard of care in these patients. This trial was registered at www.clinicaltrials.gov as #NCT01080391. PMID:27439911

  7. An improved scheduling algorithm for 3D cluster rendering with platform LSF

    NASA Astrophysics Data System (ADS)

    Xu, Wenli; Zhu, Yi; Zhang, Liping

    2013-10-01

    High-quality photorealistic rendering of 3D models requires powerful computing systems, and efficient management of cluster resources is developing rapidly to meet this demand. This paper addresses how to improve the efficiency of 3D rendering tasks in a cluster. It focuses on a dynamic feedback load balance (DFLB) algorithm, the working principles of the Load Sharing Facility (LSF) platform and the optimization of an external scheduler plug-in. The algorithm is applied in the match and allocation phases of a scheduling cycle: candidate hosts are ranked in the match phase, and the scheduler makes an allocation decision for each job in the allocation phase. With the dynamic feedback mechanism, a new weight is assigned to each candidate host for re-ranking, and the most suitable host is dispatched for rendering. A new plug-in module implementing this algorithm has been designed and integrated into the internal scheduler. Simulation experiments demonstrate that the improved plug-in module is superior to the default one for rendering tasks: it helps avoid load imbalance among servers, increases system throughput and improves system utilization.
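
    The dynamic-feedback weighting step can be sketched as follows in Python: candidate hosts from the match phase are re-scored from current load feedback and the best one is dispatched. The metric names, normalization and weights are assumptions, not LSF's internal scheme.

    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        cpu_util: float          # 0..1, from the latest feedback cycle
        mem_free_gb: float
        running_jobs: int

    def dynamic_weight(h, w_cpu=0.5, w_mem=0.3, w_jobs=0.2):
        """Higher weight = more attractive for the next render job."""
        mem_score = min(h.mem_free_gb / 64.0, 1.0)   # 64 GB cap assumed
        job_score = 1.0 / (1 + h.running_jobs)
        return w_cpu * (1 - h.cpu_util) + w_mem * mem_score + w_jobs * job_score

    def dispatch(candidates):
        """Rearrange the match-phase candidates and pick the best host."""
        ranked = sorted(candidates, key=dynamic_weight, reverse=True)
        return ranked[0], ranked

    hosts = [Host("node01", 0.90, 12, 6), Host("node02", 0.35, 40, 2),
             Host("node03", 0.55, 20, 4)]
    best, order = dispatch(hosts)
    print("dispatch to", best.name, "| order:", [h.name for h in order])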

  8. An Improved PID Algorithm Based on Insulin-on-Board Estimate for Blood Glucose Control with Type 1 Diabetes.

    PubMed

    Hu, Ruiqiang; Li, Chengwei

    2015-01-01

    Automated closed-loop insulin infusion therapy has been studied for many years. In a closed-loop system, the control algorithm is the key to precise insulin infusion, and it needs to be designed and validated. In this paper, an improved PID algorithm based on an insulin-on-board estimate is proposed, and computer simulations are performed using a combined mathematical model of the dynamics of blood glucose-insulin regulation. The simulation results demonstrate that the improved PID algorithm performs well under different carbohydrate ingestion and insulin sensitivity conditions. Compared with the traditional PID algorithm, the control performance is improved markedly and hypoglycemia can be avoided. To verify the effectiveness of the proposed control algorithm, in silico testing is performed using the UVa/Padova virtual patient software.
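
    The core idea, a PID term damped by an insulin-on-board estimate, can be sketched in Python as below; the gains, IOB decay model, units and glucose values are placeholder assumptions, not validated clinical parameters.

    class IOBPID:
        def __init__(self, kp=0.05, ki=0.0005, kd=0.2, target=110.0,
                     iob_decay=0.98, iob_gain=0.1, dt=1.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.target, self.dt = target, dt
            self.integral, self.prev_error = 0.0, None
            self.iob = 0.0                      # insulin-on-board estimate (U)
            self.iob_decay, self.iob_gain = iob_decay, iob_gain

        def step(self, glucose):
            error = glucose - self.target       # mg/dL above target
            self.integral += error * self.dt
            deriv = 0.0 if self.prev_error is None else \
                (error - self.prev_error) / self.dt
            self.prev_error = error
            pid = self.kp * error + self.ki * self.integral + self.kd * deriv
            # Past boluses damp new dosing via the IOB term
            dose = max(pid - self.iob_gain * self.iob, 0.0)
            self.iob = self.iob * self.iob_decay + dose   # first-order decay
            return dose

    ctrl = IOBPID()
    for g in [180, 175, 172, 168, 160, 150]:    # illustrative glucose readings
        print(f"glucose {g} -> dose {ctrl.step(g):.3f} U")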

  9. GRISOTTO: A greedy approach to improve combinatorial algorithms for motif discovery with prior knowledge

    PubMed Central

    2011-01-01

    Background Position-specific priors (PSPs) have been used successfully to boost EM- and Gibbs-sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been exploited in the context of combinatorial algorithms. Moreover, priors have been used only independently; the gain from combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than those state-of-the-art approaches. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505

  10. Intensity-Modulated Radiation Therapy Significantly Improves Acute Gastrointestinal Toxicity in Pancreatic and Ampullary Cancers

    SciTech Connect

    Yovino, Susannah; Poppe, Matthew; Jabbour, Salma; David, Vera; Garofalo, Michael; Pandya, Naimesh; Alexander, Richard; Hanna, Nader; Regine, William F.

    2011-01-01

    Purpose: Among patients with upper abdominal malignancies, intensity-modulated radiation therapy (IMRT) can improve dose distributions to critical dose-limiting structures near the target. Whether these improved dose distributions are associated with decreased toxicity when compared with conventional three-dimensional treatment remains a subject of investigation. Methods and Materials: 46 patients with pancreatic/ampullary cancer were treated with concurrent chemoradiation (CRT) using inverse-planned IMRT. All patients received CRT based on 5-fluorouracil in a schema similar to Radiation Therapy Oncology Group (RTOG) 97-04. Rates of acute gastrointestinal (GI) toxicity for this series of IMRT-treated patients were compared with those from RTOG 97-04, where all patients were treated with three-dimensional conformal techniques. Chi-square analysis was used to determine whether there was a statistically significant difference in the incidence of acute GI toxicity between these two groups of patients. Results: The overall incidence of Grade 3-4 acute GI toxicity was low in patients receiving IMRT-based CRT. When compared with patients who had three-dimensional treatment planning (RTOG 97-04), IMRT significantly reduced the incidence of Grade 3-4 nausea and vomiting (0% vs. 11%, p = 0.024) and diarrhea (3% vs. 18%, p = 0.017). There was no significant difference in the incidence of Grade 3-4 weight loss between the two groups of patients. Conclusions: IMRT is associated with a statistically significant decrease in acute upper and lower GI toxicity among patients treated with CRT for pancreatic/ampullary cancers. Future clinical trials plan to incorporate the use of IMRT, given that it remains a subject of active investigation.

  11. Improved CICA algorithm used for single channel compound fault diagnosis of rolling bearings

    NASA Astrophysics Data System (ADS)

    Chen, Guohua; Qie, Longfei; Zhang, Aijun; Han, Jin

    2016-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals by conventional methods such as FFT-based envelope detection, wavelet transform or empirical mode decomposition used individually. In order to realize single-channel compound fault diagnosis of bearings and improve diagnostic accuracy, an improved CICA algorithm named constrained independent component analysis based on the energy method (E-CICA) is proposed. With this approach, the single-channel vibration signal is first decomposed into several wavelet coefficients by the discrete wavelet transform (DWT) in order to obtain multichannel signals. The envelope signals of the reconstructed wavelet coefficients are then selected as the input of the E-CICA algorithm, which fulfills the requirement that the number of sensors be greater than or equal to the number of source signals and makes the data more suitable for processing by the CICA strategy. The frequency energy ratio (ER) of each wavelet-reconstructed signal to the total energy of the given synchronous signal is calculated, and the synchronous signal with the maximum ER value is set as the reference signal. In this way, the reference signal contains a priori knowledge of the fault source signal, and the influence of the initial phase angle and the duty ratio of the reference signal on extraction accuracy, which affects the traditional CICA algorithm, is avoided. Experimental results show that E-CICA can effectively separate the outer-race defect and the roller defect from a single-channel compound fault and fulfill the needs of compound fault diagnosis of rolling bearings; the reported running time is 0.12% of that of the traditional CICA algorithm, and the extraction accuracy is 1.4 times that of CICA. The proposed research provides a new method to separate single-channel compound fault signals.
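
    The preprocessing chain described above can be sketched in Python: the DWT turns the single channel into multichannel band reconstructions, envelopes are taken, and the band with the largest energy ratio supplies the reference for constrained ICA. The wavelet choice, level count and toy signal are assumptions. Requires numpy, scipy and PyWavelets.

    import numpy as np
    import pywt
    from scipy.signal import hilbert

    def dwt_channels(x, wavelet="db4", level=4):
        """Reconstruct each wavelet band separately -> multichannel signal."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        channels = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            channels.append(pywt.waverec(keep, wavelet)[: len(x)])
        return np.array(channels)

    def pick_reference(channels):
        env = np.abs(hilbert(channels, axis=1))     # envelope per band
        energy = np.sum(channels ** 2, axis=1)
        er = energy / energy.sum()                  # energy ratio per band
        return env, int(np.argmax(er)), er

    fs = 2048
    t = np.arange(2 * fs) / fs
    x = (np.sin(2 * np.pi * 35 * t)
         * (1 + 0.5 * np.sign(np.sin(2 * np.pi * 7 * t)))
         + 0.5 * np.random.randn(t.size))           # toy "fault" signal
    env, ref_idx, er = pick_reference(dwt_channels(x))
    print("reference band:", ref_idx, "energy ratios:", np.round(er, 3))
    # env[ref_idx] would then be the reference input to constrained ICA.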

  12. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
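
    Two of the benchmarked baselines, centroid and Gaussian fitting, can be sketched in Python on a synthetic reflection spectrum; the spectrum model, noise level and threshold are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def centroid_peak(wl, power, threshold=0.1):
        mask = power > threshold * power.max()      # ignore baseline noise
        return np.sum(wl[mask] * power[mask]) / np.sum(power[mask])

    def gaussian(wl, a, mu, sigma):
        return a * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

    def gaussian_peak(wl, power):
        p0 = [power.max(), wl[np.argmax(power)], 0.1]
        popt, _ = curve_fit(gaussian, wl, power, p0=p0)
        return popt[1]                              # fitted centre wavelength

    wl = np.linspace(1549.0, 1551.0, 401)           # nm
    spectrum = gaussian(wl, 1.0, 1550.12, 0.08) + 0.02 * np.random.randn(wl.size)
    print("centroid:", round(centroid_peak(wl, spectrum), 4),
          "gaussian fit:", round(gaussian_peak(wl, spectrum), 4))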

  13. Significant improvement of survival by intrasplenic hepatocyte transplantation in totally hepatectomized rats.

    PubMed

    Vogels, B A; Maas, M A; Bosma, A; Chamuleau, R A

    1996-01-01

    The effect of intrasplenic hepatocyte transplantation (HTX) was studied in an experimental model of acute liver failure in rats with chronic liver atrophy. Rats underwent a portacaval shunt operation on Day -14 to induce liver atrophy, and underwent total hepatectomy on Day 0 as a start of acute liver failure. Intrasplenic hepatocyte or sham transplantation was performed on Day -7,-3, or -1 (n = 4 to 6 per group). During the period following hepatectomy, mean arterial blood pressure was maintained above 80 mm Hg and hypoglycaemia was prevented. Severity of hepatic encephalopathy was assessed by clinical grading and EEG spectral analysis, together with determination of blood ammonia and plasma amino acid concentrations, and "survival" time. Histological examination of the spleen and lungs was performed after sacrifice. Intrasplenic hepatocyte transplantation resulted in a significant improvement in clinical grading in all transplanted groups (p < 0.05), whereas a significant improvement in EEG left index was seen only in the group with transplantation on Day -1 (p < 0.05). In contrast to hepatocyte transplantation 1 day before total hepatectomy, rats with hepatocyte transplantation 3 and 7 days before total hepatectomy showed a significant 3- and 2-fold increase in "survival" time compared to sham transplanted controls: HTX at Day -1: 7.5 +/- 0.3 h vs. 5.9 +/- 0.6 h (p > 0.05), HTX at Day -3: 19.7 +/- 3.7 h vs. 6.5 +/- 0.3 h (p < 0.05), and HTX at Day -7: 13.8 +/- 3.2 h vs. 6.3 +/- 0.3 h (p < 0.05). Furthermore, rats with hepatocyte transplantation on Day -3 and -7 showed significantly lower blood ammonia concentrations after total hepatectomy (p < 0.0001). Histological examination of the spleens after sacrifice showed clusters of hepatocytes in the red pulp. Hepatocytes present in the spleen for 3 and 7 days showed bile accumulation and spots of beginning necrosis. The present data show that in a hard model of complete liver failure in portacaval shunted rats

  14. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance, Kd(λ). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This

  15. Improved Determination of Surface and Atmospheric Temperatures Using Only Shortwave AIRS Channels: The AIRS Version 6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2010-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system (Pagano et al 2003). The theoretical approaches used to analyze AIRS/AMSU/HSB data in the presence of clouds in the AIRS Science Team Version 3 at-launch algorithm and in the Version 4 post-launch algorithm have been published previously. Significant theoretical and practical improvements have been made in the analysis of AIRS/AMSU data since the Version 4 algorithm. Most of these have already been incorporated in the AIRS Science Team Version 5 algorithm (Susskind et al 2010), now being used operationally at the Goddard DISC. The AIRS Version 5 retrieval algorithm contains three significant improvements over Version 4. First, improved physics in Version 5 allowed for use of AIRS clear column radiances (R(sub i)) in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night; tropospheric sounding 15 micron CO2 observations were used primarily in the generation of clear column radiances (R(sub i)) for all channels. This new approach allowed for the generation of accurate Quality Controlled values of R(sub i) and T(p) under more stressing cloud conditions. Second, Version 5 contained a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances; thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 contained for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS-Only sounding methodology was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Susskind et al 2010 show that Version 5 AIRS-Only soundings are only slightly degraded from the AIRS/AMSU soundings, even at large fractional cloud

  16. Liquid human milk fortifier significantly improves docosahexaenoic and arachidonic acid status in preterm infants.

    PubMed

    Berseth, C L; Harris, C L; Wampler, J L; Hoffman, D R; Diersen-Schade, D A

    2014-09-01

    We report the fatty acid composition of mother's own human milk from one of the largest US cohorts of lactating mothers of preterm infants. Milk fatty acid data were used as a proxy for intake at enrollment in infants (n=150) who received human milk with a powder human milk fortifier (HMF; Control) or liquid HMF [LHMF; provided an additional 12 mg docosahexaenoic acid (DHA) and 20 mg arachidonic acid (ARA) per 100 mL human milk]. Mothers provided milk samples (n=129) and reported maternal DHA consumption (n=128). Infant blood samples were drawn at study completion (Study Day 28). Human milk and infant PPL fatty acids were analyzed using capillary column gas chromatography. DHA and ARA were within ranges previously published for US term and preterm human milk. Compared to Control HMF (providing no DHA or ARA), human milk fortified with LHMF significantly increased infant PPL DHA and ARA and improved preterm infant DHA and ARA status.

  17. Implementation of a novel emergency surgical unit significantly improves the management of gallstone pancreatitis

    PubMed Central

    Kulendran, M; Liasis, L; Qurashi, K; Sen, M; Gould, S

    2015-01-01

    Introduction Emergency surgery is changing rapidly with a greater workload, early subspecialisation and centralisation of emergency care. We describe the impact of a novel emergency surgical unit (ESU) on the definitive management of patients with gallstone pancreatitis (GSP). Methods A comparative audit was undertaken for all admissions with GSP before and after the introduction of the ESU over a six-month period. The impact on compliance with British Society of Gastroenterology (BSG) guidelines was assessed. Results Thirty-five patients were treated for GSP between December 2013 and May 2014, after the introduction of the ESU. This was twice the nationally reported average for a UK trust over a six-month period. All patients received definitive management for their GSP and 100% of all suitable patients received treatment during the index admission or within two weeks of discharge. This was a significantly greater proportion than that prior to the introduction of the ESU (57%, p=0.0001) as well as the recently reported national average (34%). The mean length of total inpatient stay was reduced significantly after the ESU was introduced from 13.7 ± 4.7 days to 7.8 ± 2.1 days (p=0.03). The mean length of postoperative stay also fell significantly from 6.7 ± 2.6 days to 1.8 ± 0.8 days (p=0.001). Conclusions A dedicated ESU following national recommendations for emergency surgery care by way of using dedicated emergency surgeons and a streamlined protocol for common presentations has been shown by audit of current practice to significantly improve the management of patients presenting to a busy district general hospital with GSP. PMID:26263941

  18. Significantly improved cyclability of lithium manganese oxide under elevated temperature by an easily oxidized electrolyte additive

    NASA Astrophysics Data System (ADS)

    Zhu, Yunmin; Rong, Haibo; Mai, Shaowei; Luo, Xueyi; Li, Xiaoping; Li, Weishan

    2015-12-01

    Spinel lithium manganese oxide, LiMn2O4, is a promising cathode for lithium-ion batteries in large-scale applications, because it possesses many advantages over the currently used layered lithium cobalt oxide (LiCoO2) and olivine phosphate (LiFePO4), including a naturally abundant resource base, environmental friendliness and a high, long working-potential plateau. Its poor cyclability at elevated temperature, however, limits its application. In this work, we report a significant improvement in the cyclability of LiMn2O4 at elevated temperature obtained by using dimethyl phenylphosphonite (DMPP) as an electrolyte additive. Charge/discharge tests demonstrate that the application of 0.5 wt.% DMPP improves capacity retention from 16% to 82% for LiMn2O4 after 200 cycles at 55 °C and 1 C (1C = 148 mAh g-1) between 3 and 4.5 V. Electrochemical and physical characterizations indicate that DMPP is electrochemically oxidized at a potential lower than that for lithium extraction, forming a protective interphase on the LiMn2O4 cathode, which suppresses electrolyte decomposition and prevents destruction of the LiMn2O4 crystal structure.

  19. The Sensitivity of Adolescent School-Based Hearing Screens Is Significantly Improved by Adding High Frequencies.

    PubMed

    Sekhar, Deepa L; Zalewski, Thomas R; Beiler, Jessica S; Czarnecki, Beth; Barr, Ashley L; King, Tonya S; Paul, Ian M

    2016-12-01

    High frequency hearing loss (HFHL), often related to hazardous noise, affects one in six U.S. adolescents. Yet, only 20 states include school-based hearing screens for adolescents. Only six states test multiple high frequencies. Study objectives were to (1) compare the sensitivity of state school-based hearing screens for adolescents to gold standard sound-treated booth testing and (2) consider the effect of adding multiple high frequencies and two-step screening on sensitivity/specificity. Of 134 eleventh-grade participants (2013-2014), 43 of the 134 (32%) did not pass sound-treated booth testing, and 27 of the 43 (63%) had HFHL. Sensitivity/specificity of the most common protocol (1,000, 2,000, 4,000 Hz at 20 dB HL) for these hearing losses was 25.6% (95% confidence interval [CI] = [13.5, 41.2]) and 85.7% (95% CI [76.8, 92.2]), respectively. A protocol including 500, 1,000, 2,000, 4,000, 6,000 Hz at 20 dB HL significantly improved sensitivity to 76.7% (95% CI [61.4, 88.2]), p < .001. Two-step screening maintained specificity (84.6%, 95% CI [75.5, 91.3]). Adolescent school-based hearing screen sensitivity improves with high frequencies.

  20. Use of genetic algorithms to improve the solid waste collection service in an urban area.

    PubMed

    Buenrostro-Delgado, Otoniel; Ortega-Rodriguez, Juan Manuel; Clemitshaw, Kevin C; González-Razo, Carlos; Hernández-Paniagua, Iván Y

    2015-07-01

    Increasing generation of Urban Solid Waste (USW) has become a significant issue in developing countries due to unprecedented population growth and high rates of urbanisation, and it has outstripped the plans and programs of local governments to manage and dispose of USW. In this study, a Genetic Algorithm for Rule-set Production (GARP) integrated into a Geographic Information System (GIS) was used to find areas whose socio-economic conditions are representative of the generation of USW constituents. Socio-economic data for selected variables, categorised by Basic Geostatistical Areas (BGAs), were taken from the 2000 National Population Census (NPC). USW and additional socio-economic data were collected during two survey campaigns in 1998 and 2004. Areas for sampling of USW were stratified into lower, middle and upper economic strata according to income. Data on USW constituents were analysed using descriptive statistics and multivariate analysis. ArcView 3.2 was used to convert the USW data and socio-economic variables to spatial data. Desktop GARP software was run to generate a spatial model identifying areas with socio-economic conditions similar to those sampled. Results showed that socio-economic variables such as monthly income and education are positively correlated with the waste constituents generated. The GARP used in this study revealed BGAs with socio-economic conditions similar to those sampled, where a similar composition of generated waste constituents is expected. Our results may help decrease USW management costs by improving collection services.

  1. Frame-layer rate control algorithm for H.264 based on improved frame MAD

    NASA Astrophysics Data System (ADS)

    Cui, Ziguan; Liu, Ningzhong

    2007-11-01

    In this paper, we present an improved frame-layer rate control algorithm for the H.264/AVC video coding standard. An important step in many existing rate control algorithms is to determine the target bits for each P frame. In the standard rate control scheme of H.264, the target bit number is a weighted combination of the remaining bits and the bits calculated from buffer regulation. The problem is that the remaining bits are allocated equally to all non-coded frames, which causes non-uniform image quality over a video sequence. To overcome this disadvantage, we first define the frame complexity ratio (FC ratio) as a measure of global frame encoding complexity and then allocate the initial target bits according to each frame's FC ratio. We define the FC ratio as a weighted combination of motion complexity and texture complexity, which predicts the current frame's complexity more accurately using the statistics of previously encoded frames and the texture information of the current frame. Experimental results show that the improved algorithm acquires a more accurate quantization parameter (QP) for each P frame through the quadratic rate-distortion (R-D) model, achieves an average PSNR gain of about 0.28 dB, and effectively reduces both the buffer's fluctuation range and frame-to-frame PSNR variation.
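
    The complexity-weighted allocation step can be sketched in a few lines of Python: rather than splitting the remaining bits equally, each non-coded P frame is weighted by its FC ratio. The weighting constant is an assumption, not the paper's trained value.

    def fc_ratio(motion_complexity, texture_complexity, w_motion=0.6):
        """Weighted mix of motion and texture complexity for one frame."""
        return w_motion * motion_complexity + (1 - w_motion) * texture_complexity

    def target_bits(remaining_bits, frames):
        """frames: (motion, texture) complexity pairs for non-coded P frames."""
        ratios = [fc_ratio(m, t) for m, t in frames]
        total = sum(ratios)
        return [remaining_bits * r / total for r in ratios]

    # Four remaining P frames; the high-motion frame receives a larger budget.
    budgets = target_bits(400_000, [(1.0, 1.0), (3.5, 1.2), (0.8, 0.9), (1.2, 2.0)])
    print([round(b) for b in budgets])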

  2. Improved Progressive Polynomial Algorithm for Self-Adjustment and Optimal Response in Intelligent Sensors

    PubMed Central

    Rivera, José; Herrera, Gilberto; Chacón, Mario; Acosta, Pedro; Carrillo, Mariano

    2008-01-01

    The development of intelligent sensors involves the design of reconfigurable systems capable of working with different input sensor signals. Reconfigurable systems should spend as little time as possible readjusting. A self-adjustment algorithm for intelligent sensors should be able to fix major problems such as offset, gain variation and lack of linearity with good accuracy. This paper shows the performance of a progressive polynomial algorithm for different grades of relative nonlinearity of the output sensor signal. It also presents an improvement to this algorithm that obtains an optimal response with minimum nonlinearity error, based on the number and selection sequence of the readjustment points. To verify the potential of the proposed criterion, a temperature measurement system was designed based on a thermistor, which exhibits one of the worst nonlinearity behaviors. Applying the proposed improved method to this system showed that an adequate sequence of adjustment points yields the minimum nonlinearity error. In realistic applications, by knowing the grade of relative nonlinearity of a sensor, the number of readjustment points can be determined using the proposed method to obtain the desired nonlinearity error. This will impact readjustment methodologies and their associated factors such as time and cost. PMID:27873936
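
    One way to realize the progressive idea is a Newton-form correction in which each new readjustment point adds a term that vanishes at all earlier points, so previously calibrated readings stay fixed; the sensor model and point sequence below are illustrative assumptions, not the paper's exact formulation.

    def newton_coeffs(xs, ys):
        """Divided-difference coefficients; adding a point appends one term."""
        n = len(xs)
        table = list(ys)
        coeffs = [table[0]]
        for level in range(1, n):
            for i in range(n - level):
                table[i] = (table[i + 1] - table[i]) / (xs[i + level] - xs[i])
            coeffs.append(table[0])
        return coeffs

    def make_corrector(xs, coeffs):
        def correct(x):
            result, basis = 0.0, 1.0
            for c, xk in zip(coeffs, [None] + list(xs[:-1])):
                if xk is not None:
                    basis *= (x - xk)
                result += c * basis
            return result
        return correct

    def sensor(true_temp):                  # toy nonlinear sensor (assumed)
        return 0.8 * true_temp + 0.002 * true_temp ** 2 + 3.0

    cal_true = [0.0, 50.0, 100.0, 25.0]     # readjustment-point sequence
    cal_raw = [sensor(v) for v in cal_true]
    correct = make_corrector(cal_raw, newton_coeffs(cal_raw, cal_true))
    for v in [10.0, 60.0, 90.0]:
        print(v, "->", round(correct(sensor(v)), 3))   # approx. recovers v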

  3. Bladed wheels damage detection through Non-Harmonic Fourier Analysis improved algorithm

    NASA Astrophysics Data System (ADS)

    Neri, P.

    2017-05-01

    Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential in estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that certain hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at their early stage. The present paper proposes an improved algorithm which allows detection of a blade vibration frequency shift due to a crack whose size is very small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing the use of non-contact methods for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving valuable information about the wheel's health. In this configuration, the acquisition time for each blade becomes shorter as the machine's rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis yields poor frequency resolution and is not suitable for detecting small frequency shifts. Non-Harmonic Fourier Analysis, by contrast, showed high reliability in vibration frequency estimation even with data samples collected over a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
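
    The frequency-estimation idea can be sketched in Python: project the short record onto complex exponentials at a fine grid of non-integer "bin" frequencies around the coarse DFT peak and take the maximizer. The grid density and refinement window are assumptions.

    import numpy as np

    def non_harmonic_estimate(x, fs, f_lo, f_hi, n_grid=20001):
        t = np.arange(x.size) / fs
        freqs = np.linspace(f_lo, f_hi, n_grid)
        # |sum_n x[n] exp(-j 2 pi f t_n)| evaluated off the DFT grid
        spectrum = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ x)
        return freqs[np.argmax(spectrum)]

    fs = 10000.0
    t = np.arange(64) / fs                       # very short record (6.4 ms)
    x = np.sin(2 * np.pi * 487.3 * t)
    coarse = np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(np.abs(np.fft.rfft(x)))]
    fine = non_harmonic_estimate(x, fs, coarse - 200, coarse + 200)
    print(f"DFT bin: {coarse:.1f} Hz, non-harmonic estimate: {fine:.1f} Hz")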

  4. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    PubMed Central

    Fan, Chong; Chen, Xushuai; Zhong, Lei; Zhou, Min; Shi, Yun; Duan, Yulin

    2017-01-01

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among the sub-blocks. An improved weighted Wallis dodging algorithm is proposed, exploiting the fact that SR-reconstructed images are gray images of the same size with overlapping regions. This algorithm achieves consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation at the seam line across the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination of 900 SR-reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line. PMID:28335482
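
    For reference, the classic Wallis transform underlying dodging maps each sub-block's local mean and standard deviation toward target values; the weighted variant in the paper adjusts how strongly each block is pulled. A minimal sketch, with illustrative damping constants b and c:

    ```python
    import numpy as np

    def wallis(block, target_mean, target_std, b=0.8, c=0.8):
        """Classic Wallis transform: pull a sub-block's local mean/std toward
        target values; b and c damp the brightness and contrast adjustment."""
        m, s = block.mean(), block.std()
        gain = (c * target_std) / (c * s + (1.0 - c) * target_std + 1e-9)
        return gain * (block - m) + b * target_mean + (1.0 - b) * m
    ```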

  5. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the search process. In this way, the best trajectories found are retained and the search restarts from them, which allows the algorithm to avoid local optima. The agents can also move faster in the search space to obtain better exploration during the first stage of the search process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904
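
    The damping idea can be sketched as follows: in GSA the gravitational constant typically decays as G(t) = G0 * exp(-alpha * t / T), so letting alpha itself grow during the run keeps G large early (exploration) and shrinks it late (fast convergence). The linear alpha schedule below is an illustrative assumption, not the paper's exact rule:

    ```python
    import math

    def gravitational_constant(t, t_max, g0=100.0, a_start=5.0, a_end=25.0):
        """G(t) = G0 * exp(-alpha * t / t_max) with a damping coefficient
        alpha that grows over the run: small alpha early for exploration,
        large alpha late for rapid convergence."""
        alpha = a_start + (a_end - a_start) * (t / t_max)
        return g0 * math.exp(-alpha * t / t_max)
    ```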

  7. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm based on ICPF

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Xu, Shiyou; Chen, Zengping; Yuan, Bin

    2014-12-01

    In this paper, we present a detailed analysis of the performance degradation of inverse synthetic aperture radar (ISAR) imagery with the polar format algorithm (PFA) due to an inaccurate rotation center, and we develop a novel algorithm to estimate the rotation center of ISAR targets to overcome this degradation. In real ISAR scenarios, the true rotation center usually does not coincide with the gravity center of the high-resolution range profile (HRRP), due to the data-driven translational motion compensation. Because of this imprecise rotation-center information, the PFA image suffers from model errors and severe blurring in the cross-range direction. To tackle this problem, an improved PFA based on the integrated cubic phase function (ICPF) is proposed. In this method, the rotation center in the slant range is first estimated by ICPF, and the signal is shifted accordingly. Finally, the standard PFA can be carried out straightforwardly. With the proposed method, wide-angle ISAR imagery of non-cooperative targets can be achieved by PFA with improved focus quality. Simulation and real-data experiments confirm the effectiveness of the proposed method.

  8. Improving lesion detectability in PET imaging with a penalized likelihood reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Wangerin, Kristen A.; Ahn, Sangtae; Ross, Steven G.; Kinahan, Paul E.; Manjeshwar, Ravindra M.

    2015-03-01

    Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM, and importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality as previously shown but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both non-prewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.
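
    For context, the relative difference penalty that BSREM-type reconstructions add to the Poisson log-likelihood can be written as R(x) = sum_j sum_{k in N_j} (x_j - x_k)^2 / (x_j + x_k + gamma*|x_j - x_k|). A sketch of this regularizer over the nearest neighbours of a 2-D image follows; the neighbourhood and the gamma value are illustrative:

    ```python
    import numpy as np

    def relative_difference_penalty(img, gamma=2.0, eps=1e-9):
        """Relative difference penalty over horizontal and vertical neighbour
        pairs of a 2-D image; gamma controls edge preservation."""
        total = 0.0
        for axis in (0, 1):
            d = np.diff(img, axis=axis)  # x_j - x_k for each neighbour pair
            s = img.take(range(1, img.shape[axis]), axis=axis) + \
                img.take(range(0, img.shape[axis] - 1), axis=axis)
            total += np.sum(d**2 / (s + gamma * np.abs(d) + eps))
        return total
    ```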

  9. Aerodynamic Improvements of an Empty Timber Truck can Have the Potential of Significantly Reducing Fuel Consumption

    NASA Astrophysics Data System (ADS)

    Andersson, Magnus; Marashi, Seyedeh Sepideh; Karlsson, Matts

    2012-11-01

    In the present study, aerodynamic drag (AD) has been estimated for an empty and a fully loaded conceptual timber truck (TT) using Computational Fluid Dynamics (CFD). Increasing fuel prices have challenged heavy-duty vehicle (HDV) manufacturers to strive for better fuel economy, e.g. by utilizing drag-reducing external devices. Despite this knowledge, TT fleets seem to have been left in the dark. Similarities with HDV aerodynamics can be observed, as a large low-pressure wake is formed behind the tractor (unloaded) and downstream of the trailer (full load), thus generating AD. As TTs travel half the time without any cargo, a focus on drag reduction is important. The full-scale TTs were simulated using the realizable k-epsilon model with grid adaptation techniques for mesh independence. Our results indicate that a loaded TT reduces AD significantly, as both the wake size and the turbulence kinetic energy are lowered. In contrast to HDVs, unloaded TTs have a much larger design space available for possible drag-reducing devices, e.g. plastic wrapping and/or flaps. This conceptual CFD study has given an indication of the large AD difference between the unloaded and fully loaded TT, showing the potential for significant AD improvements.

  10. Significant improvement in one-dimensional cursor control using Laplacian electroencephalography over electroencephalography

    NASA Astrophysics Data System (ADS)

    Boudria, Yacine; Feltane, Amal; Besio, Walter

    2014-06-01

    Objective. Brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control requires extensive user training. Furthermore, EEG has a low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements. Approach. Eight human subjects were asked to imagine either ‘left’ or ‘right’ hand movement during one recording session to control the computer cursor using TCRE and disc electrodes. Main results. The results show a significant improvement in accuracy using TCREs (44%-100%) compared to disc electrodes (30%-86%). Significance. This study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training.

  11. Toward 'smart' DNA microarrays: algorithms for improving data quality and statistical inference

    NASA Astrophysics Data System (ADS)

    Bakewell, David J. G.; Wit, Ernst

    2007-12-01

    DNA microarrays are a laboratory tool for understanding biological processes at the molecular scale and future applications of this technology include healthcare, agriculture, and environment. Despite their usefulness, however, the information microarrays make available to the end-user is not used optimally, and the data is often noisy and of variable quality. This paper describes the use of hierarchical Maximum Likelihood Estimation (MLE) for generating algorithms that improve the quality of microarray data and enhance statistical inference about gene behavior. The paper describes examples of recent work that improves microarray performance, demonstrated using data from both Monte Carlo simulations and published experiments. One example looks at the variable quality of cDNA spots on a typical microarray surface. It is shown how algorithms, derived using MLE, are used to "weight" these spots according to their morphological quality, and subsequently lead to improved detection of gene activity. Another example, briefly discussed, addresses the "noisy data about too many genes" issue confronting many analysts who are also interested in the collective action of a group of genes, often organized as a pathway or complex. Preliminary work is described where MLE is used to "share" variance information across a pre-assigned group of genes of interest, leading to improved detection of gene activity.

  12. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-08

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed. It combines the advantages of the three existing NIR ocean reflectance correction algorithms, i.e., those of Bailey et al. [Opt. Express 18, 7521 (2010)], Ruddick et al. [Appl. Opt. 39, 897 (2000)], and Wang et al. [Opt. Express 20, 741 (2012)], and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms, in several case studies. The scenes selected for the case studies were obtained from two different satellite ocean color sensors, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions of the world. The new approach has been shown to produce the nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validation against in situ measurements shows that in less turbid water regions the new approach produces reasonable results comparable to the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system.

  13. Improvement of relief algorithm to prevent inpatient's downfall accident with night-vision CCD camera

    NASA Astrophysics Data System (ADS)

    Matsuda, Noriyuki; Yamamoto, Takeshi; Miwa, Masafumi; Nukumi, Shinobu; Mori, Kumiko; Kuinose, Yuko; Maeda, Etuko; Miura, Hirokazu; Taki, Hirokazu; Hori, Satoshi; Abe, Norihiro

    2005-12-01

    "ROSAI" hospital, Wakayama City in Japan, reported that inpatient's bed-downfall is one of the most serious accidents in hospital at night. Many inpatients have been having serious damages from downfall accidents from a bed. To prevent accidents, the hospital tested several sensors in a sickroom to send warning-signal of inpatient's downfall accidents to a nurse. However, it sent too much inadequate wrong warning about inpatients' sleeping situation. To send a nurse useful information, precise automatic detection for an inpatient's sleeping situation is necessary. In this paper, we focus on a clustering-algorithm which evaluates inpatient's situation from multiple angles by several kinds of sensor including night-vision CCD camera. This paper indicates new relief algorithm to improve the weakness about exceptional cases.

  14. Improving the direct-methods sign-unconstrained S-FFT algorithm. XV.

    PubMed

    Rius, Jordi; Frontera, Carles

    2009-11-01

    In order to extend the application field of the direct-methods S-FFT phase-refinement algorithm to density functions with positive and negative peaks, the equal-sign constraint was removed from its definition by combining ρ² with an appropriate density-function mask [Rius & Frontera (2008). Acta Cryst. A64, 670-674]. This generalized algorithm (S²-FFT) was shown to be highly effective for crystal structures with at least one moderate scatterer in the unit cell, but less effective when applied to structures with only light scatterers. To increase the success rate in this second case, the mask has been improved and the convergence rate of S²-FFT has been investigated. Finally, a closely related but simpler phase-refinement function (S_m), combining ρ (instead of ρ²) with a new mask, is introduced. At least for simple cases, this can also treat density peaks in the absence of the equal-sign constraint.

  15. Branch-pipe-routing approach for ships using improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sui, Haiteng; Niu, Wentie

    2016-09-01

    Branch-pipe routing plays a fundamental and critical role in ship pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve by relying only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, especially the complete crossover strategy, which greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
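
    As an illustration of the two-point routing step, a breadth-first maze search on a 3-D occupancy grid finds a shortest rectilinear path between two connection points. This is a plain Lee-style maze router standing in for the paper's improved variant:

    ```python
    from collections import deque
    import numpy as np

    def route_pipe(blocked, start, goal):
        """Breadth-first maze routing between two connection points on a 3-D
        occupancy grid (True = blocked); returns a shortest rectilinear path
        as a list of cells, or None if no route exists."""
        moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        prev = {start: None}
        frontier = deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:  # reconstruct the path by walking back to start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            for d in moves:
                nxt = tuple(c + dc for c, dc in zip(cell, d))
                if all(0 <= n < s for n, s in zip(nxt, blocked.shape)) \
                        and not blocked[nxt] and nxt not in prev:
                    prev[nxt] = cell
                    frontier.append(nxt)
        return None
    ```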

  16. [Study of color blood image segmentation based on two-stage-improved FCM algorithm].

    PubMed

    Wang, Bin; Chen, Huaiqing; Huang, Hua; Rao, Jie

    2006-04-01

    This paper introduces a new method for color blood cell image segmentation based on the FCM algorithm. By transforming the original microscopic blood image into an indexed image and clustering its colormap, a fuzzy approach that avoids directly clustering the image pixel values, the quantity of data to be processed and analyzed is enormously compressed. In accordance with the inherent features of color blood cell images, the segmentation process is divided into two stages: (1) determining the number of clusters and the initial cluster centers; (2) altering the distance measure using a distance weighting matrix in order to improve clustering accuracy. In this way, the convergence difficulties of the FCM algorithm are resolved, the number of iterations to convergence is reduced, the execution time of the algorithm is decreased, and correct segmentation of the components of color blood cell images is achieved.
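
    A compact sketch of the second stage follows: standard fuzzy c-means over the colormap entries, with the Euclidean metric replaced by a weighted quadratic distance d2(x, v) = (x - v)^T W (x - v). The initialization and the choice of W are placeholders for the paper's stage-one procedure:

    ```python
    import numpy as np

    def fcm(colors, n_clusters, W=None, m=2.0, iters=50):
        """Fuzzy c-means over colormap entries (not raw pixels), with an
        optional distance-weighting matrix W; centers start from the first
        entries in lieu of the paper's stage-one initialization."""
        X = np.asarray(colors, float)
        W = np.eye(X.shape[1]) if W is None else W
        centers = X[:n_clusters].copy()
        for _ in range(iters):
            diff = X[:, None, :] - centers[None, :, :]
            d2 = np.einsum('nkj,ji,nki->nk', diff, W, diff) + 1e-12
            u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
            u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
            um = u ** m
            centers = (um.T @ X) / um.sum(axis=0)[:, None]
        return u, centers
    ```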

  17. Use of a genetic algorithm to improve the rail profile on Stockholm underground

    NASA Astrophysics Data System (ADS)

    Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon

    2010-12-01

    In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
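
    Roulette wheel selection itself is standard fitness-proportionate sampling: each candidate profile is chosen with probability equal to its share of the total fitness, here the inverted penalty index. A minimal sketch:

    ```python
    import random

    def roulette_wheel(population, fitness):
        """Fitness-proportionate selection: spin a cumulative wheel sized by
        total fitness and return the individual whose slice is hit."""
        total = sum(fitness)
        pick, acc = random.uniform(0.0, total), 0.0
        for individual, f in zip(population, fitness):
            acc += f
            if acc >= pick:
                return individual
        return population[-1]
    ```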

  18. An improved algorithm for discovering the models with short loops constructs

    NASA Astrophysics Data System (ADS)

    Feng, Jianwen; Chang, Huiyou; Lin, Xuan

    2012-04-01

    Short-loop constructs are common in process models derived from the event logs of most information systems. However, current algorithms are unsatisfactory at differentiating length-one loops from length-two loops when the sets of traces they can execute are identical. We first put forward a method based on conformance checking techniques to handle this problem. Next, using a Petri-net-based representation, some new ordering relations are defined to detect the short loops. Finally, an algorithm is proposed and proven to discover process models with short loops correctly. The improved approach in this paper can be applied in other process mining techniques.

  19. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    SciTech Connect

    He, Hongxing; Fang, Hengrui; Miller, Mitchell D.; Phillips, George N. Jr; Su, Wu-Pei

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  20. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work, we show that by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, providing physicians with reliable objective data they can use in their daily practice.
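
    A minimal sketch of the idea: estimate motion intensity from the spectral energy of a sensor window, then interpolate the formerly fixed noise variance between a low-motion and a high-motion setting. The band edges, variance endpoints, and the linear mapping are illustrative assumptions:

    ```python
    import numpy as np

    def motion_intensity(accel_window, fs, band=(0.5, 15.0)):
        """Fraction of spectral energy of an accelerometer window that falls
        inside a motion band (band edges are illustrative)."""
        spec = np.abs(np.fft.rfft(accel_window - np.mean(accel_window))) ** 2
        freqs = np.fft.rfftfreq(len(accel_window), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return spec[mask].sum() / max(spec.sum(), 1e-12)

    def adapted_noise_variance(intensity, r_static=1e-4, r_dynamic=1e-1):
        """Interpolate the observation-noise variance between a low-motion and
        a high-motion setting; the mapping itself is an assumption."""
        return r_static + intensity * (r_dynamic - r_static)
    ```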

  1. Cyclosporin A significantly improves preeclampsia signs and suppresses inflammation in a rat model.

    PubMed

    Hu, Bihui; Yang, Jinying; Huang, Qian; Bao, Junjie; Brennecke, Shaun Patrick; Liu, Huishu

    2016-05-01

    Preeclampsia is associated with an increased inflammatory response. Immune suppression might be an effective treatment. The aim of this study was to examine whether Cyclosporin A (CsA), an immunosuppressant, improves the clinical characteristics of preeclampsia and suppresses inflammation in a lipopolysaccharide (LPS) induced preeclampsia rat model. Pregnant rats were randomly divided into 4 groups: group 1 (PE) rats each received LPS via the tail vein on gestational day (GD) 14; group 2 (PE+CsA5) rats were pretreated with LPS (1.0 μg/kg) on GD 14 and then treated with CsA (5 mg/kg, i.p.) on GDs 16, 17 and 18; group 3 (PE+CsA10) rats were pretreated with LPS (1.0 μg/kg) on GD 14 and then treated with CsA (10 mg/kg, i.p.) on GDs 16, 17 and 18; group 4 (pregnant control, PC) rats were treated with the vehicle (saline) used for groups 1, 2 and 3. Systolic blood pressure, urinary albumin, biometric parameters and serum cytokine levels were measured on day 20. CsA treatment significantly reduced the LPS-induced systolic blood pressure and the mean 24-h urinary albumin excretion. The pro-inflammatory cytokines IL-6, IL-17, IFN-γ and TNF-α were increased in the LPS group but reduced in the LPS+CsA groups (P<0.05). The anti-inflammatory cytokine IL-4 was decreased in the LPS group but increased in the LPS+CsA groups (P<0.05). Cyclosporin A improved preeclampsia signs and attenuated inflammatory responses in the LPS-induced preeclampsia rat model, which suggests that immunosuppression might be an alternative management option for preeclampsia.

  2. Significantly Improving Regional Seismic Amplitude Tomography at Higher Frequencies by Determining S -Wave Bandwidth

    DOE PAGES

    Fisk, Mark D.; Pasyanos, Michael E.

    2016-05-03

    Characterizing regional seismic signals continues to be a difficult problem due to their variability. Calibration of these signals is very important to many aspects of monitoring underground nuclear explosions, including detecting seismic signals, discriminating explosions from earthquakes, and reliably estimating magnitude and yield. Amplitude tomography, which simultaneously inverts for source, propagation, and site effects, is a leading method of calibrating these signals. A major issue in amplitude tomography is the quality of the input amplitude measurements. Pre-event and pre-phase signal-to-noise ratio (SNR) tests are typically used but can frequently include bad signals and exclude good signals. The deficiencies of SNR criteria, which are demonstrated here, lead to large calibration errors. To ameliorate these issues, we introduce a semi-automated approach to assess the bandwidth over which a spectrum behaves physically. We determine the maximum frequency (denoted Fmax) at which it deviates from this behavior due to inflections where noise or spurious signals start to bias the spectra away from the expected decay. We compare two amplitude tomography runs using the SNR and new Fmax criteria and show significant improvements to the stability and accuracy of the tomography output for frequency bands higher than 2 Hz when using our assessments of valid S-wave bandwidth. We compare Q estimates, P/S residuals, and some detailed results to explain the improvements. Lastly, for frequency bands higher than 4 Hz, needed for effective P/S discrimination of explosions from earthquakes, the new bandwidth criteria sufficiently fix the instabilities and errors so that the residuals and calibration terms are useful for application.

  3. Significant improvements in stability and reproducibility of atomic-scale atomic force microscopy in liquid

    NASA Astrophysics Data System (ADS)

    Akrami, S. M. R.; Nakayachi, H.; Watanabe-Nakayama, T.; Asakawa, H.; Fukuma, T.

    2014-11-01

    Recent advancement of dynamic-mode atomic force microscopy (AFM) for liquid-environment applications enabled atomic-scale studies on various interfacial phenomena. However, instabilities and poor reproducibility of the measurements often prevent systematic studies. To solve this problem, we have investigated the effect of various tip treatment methods for atomic-scale imaging and force measurements in liquid. The tested methods include Si coating, Ar plasma, Ar sputtering and UV/O3 cleaning. We found that all the methods provide significant improvements in both the imaging and force measurements in spite of the tip transfer through the air. Among the methods, we found that the Si coating provides the best stability and reproducibility in the measurements. To understand the origin of the fouling resistance of the cleaned tip surface and the difference between the cleaning methods, we have investigated the tip surface properties by x-ray photoelectron spectroscopy and contact angle measurements. The results show that the contaminations adsorbed on the tip during the tip transfer through the air should desorb from the surface when it is immersed in aqueous solution due to the enhanced hydrophilicity by the tip treatments. The tip surface prepared by the Si coating is oxidized when it is immersed in aqueous solution. This creates local spots where stable hydration structures are formed. For the other methods, there is no active mechanism to create such local hydration sites. Thus, the hydration structure formed under the tip apex is not necessarily stable. These results reveal the desirable tip properties for atomic-scale AFM measurements in liquid, which should serve as a guideline for further improvements of the tip treatment methods.

  5. Flavonol-rich dark cocoa significantly decreases plasma endothelin-1 and improves cognition in urban children

    PubMed Central

    Calderón-Garcidueñas, Lilian; Mora-Tiscareño, Antonieta; Franco-Lira, Maricela; Cross, Janet V.; Engle, Randall; Aragón-Flores, Mariana; Gómez-Garza, Gilberto; Jewells, Valerie; Weili, Lin; Medina-Cortina, Humberto; Solorio, Edelmira; Chao, Chih-kai; Zhu, Hongtu; Mukherjee, Partha S.; Ferreira-Azevedo, Lara; Torres-Jardón, Ricardo; D'Angiulli, Amedeo

    2013-01-01

    Air pollution exposures are linked to systemic inflammation, cardiovascular and respiratory morbidity and mortality, and neuroinflammation and neuropathology in young urbanites. In particular, most Mexico City Metropolitan Area (MCMA) children exhibit subtle cognitive deficits, and neuropathology studies show 40% of them exhibiting frontal tau hyperphosphorylation and 51% amyloid-β diffuse plaques (compared to 0% in low-pollution control children). We assessed whether a short cocoa intervention can be effective in decreasing plasma endothelin-1 (ET-1) and/or inflammatory mediators in MCMA children. Thirty grams of dark cocoa with 680 mg of total flavonols were given daily for 10.11 ± 3.4 days (range 9–24 days) to 18 children (10.55 years, SD = 1.45; 11F/7M). Key metabolite ratios in frontal white matter and in the hippocampus before and during the cocoa intervention were quantified by magnetic resonance spectroscopy. ET-1 significantly decreased after cocoa treatment (p = 0.0002). Fifteen children (83%) showed a marginally significant individual improvement in one or both of the applied simple short memory tasks. Endothelial dysfunction is a key feature of exposure to particulate matter (PM), and decreased endothelin-1 bioavailability is likely useful for brain function in the context of air pollution. Our findings suggest that cocoa interventions may be critical for the early implementation of neuroprotection in highly exposed urban children. Multi-domain nutraceutical interventions could limit the risk for endothelial dysfunction, cerebral hypoperfusion, neuroinflammation, cognitive deficits, structural volumetric detrimental brain effects, and the early development of the neuropathological hallmarks of Alzheimer's and Parkinson's diseases. PMID:23986703

  6. Significantly improving trace thallium removal from surface waters during coagulation enhanced by nanosized manganese dioxide.

    PubMed

    Huangfu, Xiaoliu; Ma, Chengxue; Ma, Jun; He, Qiang; Yang, Chun; Jiang, Jin; Wang, Yaan; Wu, Zhengsong

    2017-02-01

    Thallium (Tl) is an element of high toxicity that accumulates significantly in the human body. There is an urgent need for appropriate strategies for trace Tl removal in drinking water treatment plants. In this study, the efficiency and mechanism of trace Tl (0.5 μg/L) removal by conventional coagulation enhanced by nanosized manganese dioxide (nMnO2) were explored in simulated water and two representative surface waters (a river water and a reservoir water obtained from Northeast China). Experimental results showed that nMnO2 significantly improves Tl(I) removal from the selected waters. The removal efficiency was dramatically higher in the simulated water, as demonstrated by residual Tl below 0.1 μg/L; the enhancement of trace Tl removal in the surface waters was smaller. Both adjusting the water pH to alkaline conditions and preoxidation of Tl(I) to Tl(III) benefit trace Tl removal from surface waters. The data also indicated that the competing cation Ca(2+) decreased the efficiency of trace Tl removal, owing to the reduction of Tl adsorption on nMnO2. Humic acid could largely lower the Tl removal efficiency during the nMnO2-enhanced coagulation process. Trace Tl is first adsorbed on nMnO2 and then removed along with the settling nMnO2. The information obtained in the present study may provide a potential strategy for drinking water treatment plants threatened by trace Tl.

  7. Codon Optimization Significantly Improves the Expression Level of a Keratinase Gene in Pichia pastoris

    PubMed Central

    Hu, Hong; Gao, Jie; He, Jun; Yu, Bing; Zheng, Ping; Huang, Zhiqing; Mao, Xiangbing; Yu, Jie; Han, Guoquan; Chen, Daiwen

    2013-01-01

    The main keratinase (kerA) gene from Bacillus licheniformis S90 was optimized by two codon optimization strategies and expressed in Pichia pastoris in order to improve enzyme production relative to preparations with the native kerA gene. The results showed that the corresponding mutations (synonymous codons) chosen according to the codon bias of Pichia pastoris were successfully introduced into the keratinase gene. The highest keratinase activity produced by P. pastoris pPICZαA-kerAwt, pPICZαA-kerAopti1 and pPICZαA-kerAopti2 was 195 U/ml, 324 U/ml and 293 U/ml, respectively. In addition, there was no significant difference in biomass concentration, target gene copy number or relative mRNA expression level between the positive strains. The molecular weight of the keratinase secreted by recombinant P. pastoris was approx. 39 kDa. It was optimally active at pH 7.5 and 50°C. The recombinant keratinase could efficiently degrade both α-keratin (keratin azure) and β-keratin (chicken feather meal). These properties make P. pastoris pPICZαA-kerAopti1 a suitable candidate for the industrial production of keratinases. PMID:23472192
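
    The simplest common strategy, "one amino acid, one codon", can be sketched as rewriting the coding sequence with each residue's most frequent host codon; the protein is unchanged because all substitutions are synonymous. The table fragment below is hypothetical; a real optimization uses a complete codon-usage table for P. pastoris:

    ```python
    # Hypothetical preferred-codon fragment; real tables cover all 20 amino
    # acids and are derived from host codon-usage statistics.
    PREFERRED = {'A': 'GCT', 'G': 'GGT', 'K': 'AAG', 'L': 'TTG', 'S': 'TCT'}

    def optimize_codons(protein_seq, table=PREFERRED):
        """Rewrite a coding sequence with each residue's most frequent host
        codon (synonymous substitutions only, so the protein is unchanged)."""
        return ''.join(table[aa] for aa in protein_seq)

    # e.g. optimize_codons('GALKS') -> 'GGTGCTTTGAAGTCT'
    ```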

  8. An extended bioreaction database that significantly improves reconstruction and analysis of genome-scale metabolic networks.

    PubMed

    Stelzer, Michael; Sun, Jibin; Kamphans, Tom; Fekete, Sándor P; Zeng, An-Ping

    2011-11-01

    The bioreaction database established by Ma and Zeng (Bioinformatics, 2003, 19, 270-277) for in silico reconstruction of genome-scale metabolic networks has been widely used. Based on more recent information in the reference databases KEGG LIGAND and Brenda, we upgrade the bioreaction database in this work by almost doubling the number of reactions from 3565 to 6851. Over 70% of the reactions have been manually updated/revised in terms of reversibility, reactant pairs, currency metabolites and error correction. For the first time, 41 spontaneous sugar mutarotation reactions are introduced into the biochemical database. The upgrade significantly improves the reconstruction of genome-scale metabolic networks. Many gaps or missing biochemical links can be recovered, as exemplified with three model organisms: Homo sapiens, Aspergillus niger, and Escherichia coli. The topological parameters of the constructed networks were also largely affected; however, the overall network structure remains scale-free. Furthermore, we consider the problem of computing biologically feasible shortest paths in reconstructed metabolic networks. We show that these paths are hard to compute and present solutions for finding such paths in networks of small and medium size.

  9. Mn-doped TiO2 thin films with significantly improved optical and electrical properties

    NASA Astrophysics Data System (ADS)

    Lu, Liu; Xia, Xiaohong; Luo, J. K.; Shao, G.

    2012-12-01

    TiO2 thin films with various Mn doping contents were fabricated by reactive magnetron sputtering deposition at 550 °C, and their structural, optical and electrical properties were characterized. All films were made of densely packed columnar grains with a fibrous texture along the normal direction of the substrate. The as-deposited structure in the pure TiO2 film consisted of anatase grains with the [1 0 1] texture. Mn incorporation stabilized the rutile phase and induced lattice contraction in the [1 0 0] direction. The texture in the Mn-doped films changed from [1 1 0] to [2 0 0] with increasing Mn content. The incorporation of Mn into the TiO2 lattice introduced intermediate bands into its narrowed forbidden gap, leading to remarkable red-shifts in the optical absorption edges, together with significantly improved electrical conductivity of the thin films. Hall measurements showed that the Mn incorporation induced p-type conductivity, with the hole mobility in heavily doped TiO2 (˜40% Mn) being about an order of magnitude higher than the electron mobility in single-crystal rutile TiO2. Oxygen vacancies, on the other hand, interacted with the substitutional Mn atoms to reduce its effect on the optical and electrical properties.

  10. Activation of Big Grain1 significantly improves grain size by regulating auxin transport in rice.

    PubMed

    Liu, Linchuan; Tong, Hongning; Xiao, Yunhua; Che, Ronghui; Xu, Fan; Hu, Bin; Liang, Chengzhen; Chu, Jinfang; Li, Jiayang; Chu, Chengcai

    2015-09-01

    Grain size is one of the key factors determining grain yield. However, it remains largely unknown how grain size is regulated by developmental signals. Here, we report the identification and characterization of a dominant mutant, big grain1 (Bg1-D), from our rice T-DNA insertion population that shows an extra-large grain phenotype. Overexpression of BG1 leads to significantly increased grain size, and the severe lines exhibit obviously perturbed gravitropism. In addition, the mutant has increased sensitivity to both auxin and N-1-naphthylphthalamic acid, an auxin transport inhibitor, whereas knockdown of BG1 results in decreased sensitivity and smaller grains. Moreover, BG1 is specifically induced by auxin treatment, is preferentially expressed in the vascular tissue of culms and young panicles, and encodes a novel membrane-localized protein, strongly suggesting a role in regulating auxin transport. Consistent with this finding, the mutant has increased auxin basipetal transport and altered auxin distribution, whereas the knockdown plants have decreased auxin transport. Manipulation of BG1 in both rice and Arabidopsis can enhance plant biomass, seed weight, and yield. Taken together, we identify a novel positive regulator of auxin response and transport in a crop plant and demonstrate its role in regulating grain size, thus illuminating a new strategy to improve plant productivity.

  11. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications

    PubMed Central

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441
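
    Conceptually, the CMC score counts cell pairs whose correlation peak and registration parameters (x and y offsets and rotation) all agree with a common pattern within thresholds. The sketch below takes precomputed per-cell correlation results; the parameter names and threshold values are illustrative, not NIST's exact settings:

    ```python
    def cmc_score(cell_results, registration, t_ccf=0.5, t_xy=20, t_theta=6.0):
        """Count congruent matching cells. cell_results holds one
        (ccf_max, dx, dy, theta) tuple per correlated cell pair; a pair is
        congruent when its peak correlation is high enough and its offsets
        and rotation agree with the common registration pattern."""
        cmc = 0
        for ccf, dx, dy, theta in cell_results:
            if (ccf >= t_ccf
                    and abs(dx - registration['dx']) <= t_xy
                    and abs(dy - registration['dy']) <= t_xy
                    and abs(theta - registration['theta']) <= t_theta):
                cmc += 1
        return cmc
    ```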

  12. Truss topology optimization using an improved species-conserving genetic algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jian-Ping

    2015-01-01

    The aim of this article is to apply and improve the species-conserving genetic algorithm (SCGA) to search for multiple solutions of truss topology optimization problems in a single run. A species is defined as a group of individuals with similar characteristics and is dominated by its species seed. The solutions of an optimization problem are selected from the found species. To improve the accuracy of the solutions, a species mutation technique is introduced to improve the fitness of the found species seeds, and a combination of a neighbour mutation and a uniform mutation is applied to balance exploitation and exploration. A real vector is used to represent the corresponding cross-sectional areas, and a member is considered existent if its area is larger than a critical area. A finite element analysis model was developed to deal with more practical considerations in modelling, such as the existence of members, kinematic stability analysis, and the computation of stresses and displacements. Cross-sectional areas and node connections are decision variables and are optimized simultaneously to minimize the total weight of the trusses. Numerical results demonstrate that some truss topology optimization examples have many global and local solutions, that different topologies can be found using the proposed algorithm in a single run, and that some trusses have smaller weights than the solutions in the literature.

  14. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  15. Prediction of protein function improving sequence remote alignment search by a fuzzy logic algorithm.

    PubMed

    Gómez, Antonio; Cedano, Juan; Espadaler, Jordi; Hermoso, Antonio; Piñol, Jaume; Querol, Enrique

    2008-02-01

    The functional annotation of new protein sequences represents a major bottleneck for genomic science. The best way to suggest the function of a protein from its sequence is to find a related one for which biological information is available. Current alignment algorithms display a list of protein sequence stretches presenting significant similarity to different protein targets, ordered by their respective mathematical scores. However, statistical and biological significance do not always coincide; therefore, rearranging the program output according to biological characteristics rather than the mathematical scoring alone would help functional annotation. A new method is described that predicts the putative function of a protein by integrating the results of the PSI-BLAST program with a fuzzy logic algorithm. Several protein sequence characteristics were evaluated for their ability to rearrange a PSI-BLAST profile to agree more closely with biological function. Four of them, amino acid content, matched segment length, and hydropathic and flexibility profiles, contributed positively and were integrated by a fuzzy logic algorithm into a program, BYPASS, that accurately predicts the function of a protein from its sequence.

  16. Improved MODIS Dark Target aerosol optical depth algorithm over land: angular effect correction

    NASA Astrophysics Data System (ADS)

    Wu, Yerong; de Graaf, Martin; Menenti, Massimo

    2016-11-01

    The aerosol optical depth (AOD) product retrieved from MODerate Resolution Imaging Spectroradiometer (MODIS) measurements has greatly benefited scientific research in climate change and air quality due to its high quality and large coverage over the globe. However, the current product (e.g., Collection 6) over land needs further improvement. This is because the AOD retrieval still suffers from large uncertainty in the surface reflectance (e.g., anisotropic reflection), although the impacts of the surface reflectance have been largely reduced by the Dark Target (DT) algorithm. A previous study showed that AOD retrieval over dark surfaces can be improved by considering surface bidirectional reflectance distribution function (BRDF) effects. However, the relationship between the surface reflectance in the visible and shortwave infrared bands applied in that study can lead to an angular dependence of the AOD retrieval, for at least two reasons. First, a relationship based on the assumption of isotropic reflection (a Lambertian surface) is not suitable for the surface bidirectional reflectance factor (BRF). Second, although the relationship varies with the surface cover type through the vegetation index NDVI_SWIR, this index itself has a directional effect that affects the estimation of the surface reflection, which can introduce errors into the AOD retrieval. To improve this situation, we derived a new relationship for the spectral surface BRF in this study, using 3 years of data from the AERONET-based Surface Reflectance Validation Network (ASRVN). To test the performance of the new algorithm, two case studies were used: 2 years of data from North America and 4 months of data from the global land. The results show that the angular effects of the AOD retrieval are largely reduced in most cases, including fewer occurrences of negative retrievals. Particularly, for the global land case, the AOD retrieval was improved by the new algorithm compared to the

  17. Optimization of Printed Antennas Using Genetic Algorithm Coupled with Improved Cavity Model

    NASA Astrophysics Data System (ADS)

    Sathi, Vahid; Ehteshami, Nasrin; Ghobadi, C.

    2012-06-01

    An accurate electromagnetic optimization tool for designing rectangular and circular microstrip antennas is proposed. This optimization method is based on improved cavity model analysis in conjunction with the well-known genetic algorithm, which is employed to optimize the dimensions and feed-point location of rectangular and circular microstrip antennas. Results obtained by this technique agree quite well with the measured data and with the data obtained by the FEM-based software HFSS from Ansoft. This technique can be fruitfully used in microwave CAD applications.

  18. Scheduling Algorithm for Improving Lift (SAIL): Documentation for initial operating capability

    SciTech Connect

    Hawthorne, J.E.; McLaren, R.A.

    1990-04-01

    The Military Sealift Command, a component of the United States Transportation Command, is responsible for the sealift of military personnel and material during a crisis. Conceptual plans for these complex moves, called deliberate plans, are continually being prepared. A computer-based scheduling system, the Sealift Strategic Analysis Subsystem (SEASTRAT), is under development for assisting in the production of these plans. The ship scheduling portion of this system, the Scheduling Algorithm for Improving Lift (SAIL), combines linear optimization and heuristic methods to determine ship routes and cargo loadings which honor a variety of complex operational constraints.

  19. Improved retrieval of complex supercontinuum pulses from XFROG traces using a ptychographic algorithm.

    PubMed

    Heidt, Alexander M; Spangenberg, Dirk-Mathys; Brügmann, Michael; Rohwer, Erich G; Feurer, Thomas

    2016-11-01

    We demonstrate that time-domain ptychography, a recently introduced iterative ultrafast pulse retrieval algorithm, has properties well suited for the reconstruction of complex light pulses with large time-bandwidth products from a cross-correlation frequency-resolved optical gating (XFROG) measurement. It achieves temporal resolution on the scale of a single optical cycle using long probe pulses and low sampling rates. In comparison to existing algorithms, ptychography minimizes the data to be recorded and processed, and significantly reduces the computational time of the reconstruction. Experimentally, we measure the temporal waveform of an octave-spanning, 3.5 ps long, supercontinuum pulse generated in photonic crystal fiber, resolving features as short as 5.7 fs with sub-fs resolution and 30 dB dynamic range using 100 fs probe pulses and similarly large delay steps.

  20. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, S. M.; Slaughter, C. D.

    1991-01-01

    The observed morphology of cometary comae is determined by ejection circumstances and the interaction of the ejected material with the local environment. Anisotropic emission can provide useful information on such things as the orientation of the nucleus, the location of active areas on the nucleus, and the formation of ion structure near the nucleus. However, discrete coma features are usually diffuse, of low amplitude, and superimposed on a steep intensity gradient radial to the nucleus. To improve the visibility of these features, a variety of digital enhancement algorithms have been employed with varying degrees of success. They usually produce some degree of spatial filtering and are chosen to optimize the visibility of certain detail. Since information in the image is altered, it is important to understand the effects that parameter selection and processing artifacts can have on subsequent interpretation. Using the criterion that the ideal algorithm must enhance low-contrast features while not introducing misleading artifacts (or features that cannot be seen in the stretched, unprocessed image), the suitability of various algorithms that aid cometary studies was assessed. The strong and weak points of each are identified in the context of maintaining the positional integrity of features at the expense of photometric information.
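
    One widely used member of this family of enhancements divides the image by a heavily smoothed version of itself, flattening the steep radial gradient so that low-contrast coma features stand out. A minimal sketch, assuming SciPy is available; the kernel size is illustrative, and, as the text cautions, aggressive settings introduce exactly the kind of artifacts that can mislead interpretation:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def enhance_coma(image, size=15):
        """Divide the image by a median-smoothed version of itself to remove
        the steep gradient around the nucleus and raise faint coma features."""
        background = median_filter(image, size=size)
        return image / np.maximum(background, 1e-9)
    ```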

  1. An improved phase shift reconstruction algorithm of fringe scanning technique for X-ray microscopy

    SciTech Connect

    Lian, S.; Yang, H.; Kudo, H.; Momose, A.; Yashiro, W.

    2015-02-15

    The X-ray phase imaging method has been applied to observe soft biological tissues, which can be imaged by exploiting the so-called "Talbot effect" of an X-ray grating. One type of X-ray phase imaging method was reported that combines an X-ray imaging microscope, equipped with a Fresnel zone plate, with a phase grating. Using the fringe scanning technique, a high-precision phase shift image can be obtained by displacing the grating step by step and measuring dozens of sample images. The number of images is selected to reduce the error caused by the non-sinusoidal component of the Talbot self-image at the imaging plane. A larger number suppresses the error more, but increases radiation exposure and requires higher mechanical stability of the equipment. In this paper, we analyze the approximation error of the fringe scanning technique for X-ray microscopy using just one grating and propose an improved algorithm. We compute the approximation error by iteration and substitute it into the phase-shift reconstruction process. This procedure suppresses the error even with few sample images. The results of simulation experiments show that the precision of the phase shift image reconstructed by the proposed algorithm with 4 sample images is almost the same as that reconstructed by the conventional algorithm with 40 sample images. We have also succeeded in an experiment with real data.
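
    For reference, the conventional M-step fringe-scanning reconstruction assumed here recovers the wrapped phase at each pixel from the first Fourier coefficient of the intensities recorded as the grating is displaced by 1/M of a period per step; it is this M that the improved algorithm reduces from dozens to about four. A minimal sketch of the conventional step:

    ```python
    import numpy as np

    def phase_from_fringe_scan(images):
        """Standard M-step fringe-scanning retrieval: with the grating moved
        by 1/M period between exposures, the wrapped phase at each pixel is
        atan2(-sum I_k*sin(2*pi*k/M), sum I_k*cos(2*pi*k/M))."""
        stack = np.asarray(images, float)          # shape (M, H, W)
        m = stack.shape[0]
        steps = 2.0 * np.pi * np.arange(m) / m
        numer = np.tensordot(np.sin(steps), stack, axes=1)
        denom = np.tensordot(np.cos(steps), stack, axes=1)
        return np.arctan2(-numer, denom)           # wrapped phase map
    ```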

  2. Effective Application of Improved Profit-Mining Algorithm for the Interday Trading Model

    PubMed Central

    Wu, Jungpin

    2014-01-01

    Many real world applications of association rule mining from large databases help users make better decisions. However, they do not work well in financial markets at this time. In addition to a high profit, an investor also looks for low-risk trades with a better winning rate. The traditional approach of using minimum confidence and support thresholds needs to be changed. Based on an interday model of trading, we propose effective profit-mining algorithms that provide investors with profit rules including information about profit, risk, and winning rate. Since profit-mining in the financial market is still in its infancy, it is important to detail the inner workings of mining algorithms and illustrate the best way to apply them. In this paper we go into the details of our improved profit-mining algorithm and showcase effective applications with experiments using real world trading data. The results show that our approach is practical and effective with good performance for various datasets. PMID:24688442

  3. Effective application of improved profit-mining algorithm for the interday trading model.

    PubMed

    Hsieh, Yu-Lung; Yang, Don-Lin; Wu, Jungpin

    2014-01-01

    Many real world applications of association rule mining from large databases help users make better decisions. However, they do not work well in financial markets at this time. In addition to a high profit, an investor also looks for low-risk trades with a better winning rate. The traditional approach of using minimum confidence and support thresholds needs to be changed. Based on an interday model of trading, we propose effective profit-mining algorithms that provide investors with profit rules including information about profit, risk, and winning rate. Since profit-mining in the financial market is still in its infancy, it is important to detail the inner workings of mining algorithms and illustrate the best way to apply them. In this paper we go into the details of our improved profit-mining algorithm and showcase effective applications with experiments using real world trading data. The results show that our approach is practical and effective with good performance for various datasets.

  4. An improved segmentation algorithm to detect moving object in video sequences

    NASA Astrophysics Data System (ADS)

    Li, Jinkui; Sang, Xinzhu; Wang, Yongqiang; Yan, Binbin; Yu, Chongxiu

    2010-11-01

    The segmentation of moving objects in video sequences is attracting more and more attention because of its important role in various camera video applications, such as video surveillance, traffic monitoring, and people tracking. Conventional segmentation algorithms can be divided into two classes. One class is based on spatial homogeneity, which produces promising output; however, the computation is too complex and heavy to be suitable for real-time applications. The other class uses change detection as the segmentation criterion to extract the moving object; typical approaches include frame difference, background subtraction, and optical flow. A novel algorithm based on adaptive symmetrical difference and background subtraction is proposed. Firstly, the moving object mask is detected through the adaptive symmetrical difference, and the contour of the mask is extracted. Then, adaptive background subtraction is carried out in the acquired region to extract the accurate moving object. Morphological operations and shadow cancellation are adopted to refine the result. Experimental results show that the algorithm is robust and effective in improving segmentation accuracy.
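
    A minimal sketch of the two detection stages described above, with simplified fixed statistics in place of the paper's adaptive thresholds; the three consecutive grayscale frames and a background estimate are assumed given:

        import numpy as np

        def segment_moving_object(prev, curr, nxt, background, k=2.5):
            """Three-frame symmetrical difference intersected with background
            subtraction; morphological cleanup would follow in practice."""
            d1 = np.abs(curr.astype(float) - prev)
            d2 = np.abs(nxt.astype(float) - curr)
            # Motion mask: pixel changed in both forward and backward difference.
            motion = (d1 > k * d1.std()) & (d2 > k * d2.std())
            diff_bg = np.abs(curr.astype(float) - background)
            fg = diff_bg > k * diff_bg.std()
            # Keep pixels that are both moving and different from the background.
            return motion & fg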

  5. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping

    2014-10-01

    Migration through resolution cells (MTRC) arises in high-resolution inverse synthetic aperture radar (ISAR) imaging. An MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. Firstly, assuming a rigid-body target in stable flight, initial values of the target's rotation angle and rotation center are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution under the minimum entropy criterion. The procedure starts with the estimated initial rotation angle and center, and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches, one along the rotation angle and the other along the rotation center, each realized with the golden-section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied to compensate the MTRC using the optimized rotation angle and center. After MTRC compensation, the ISAR image is well focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
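
    Each 1-D scan can be realized with a standard golden-section search over the image-entropy cost. The sketch below shows the generic search routine; the objective f, for example lambda angle: entropy(pfa(echo, angle, center)) with pfa and entropy as hypothetical helpers standing in for the paper's processing chain, is supplied by the caller:

        import math

        def golden_section_search(f, a, b, tol=1e-4):
            """Minimize a unimodal 1-D function f on [a, b] by golden-section
            search, as used here for the rotation-angle and center scans."""
            invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            fc, fd = f(c), f(d)
            while abs(b - a) > tol:
                if fc < fd:          # minimum lies in [a, d]
                    b, d, fd = d, c, fc
                    c = b - invphi * (b - a)
                    fc = f(c)
                else:                # minimum lies in [c, b]
                    a, c, fc = c, d, fd
                    d = a + invphi * (b - a)
                    fd = f(d)
            return (a + b) / 2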

  6. An improved algorithm to remove cosmic spikes in Raman spectra for online monitoring.

    PubMed

    Li, Sheng; Dai, Liankui

    2011-11-01

    Raman spectral analysis integrated with multivariate calibration is a fast and effective solution to monitor chemical product properties. However, Raman instruments utilizing charge-coupled device (CCD) detectors suffer from occasional spikes caused by cosmic rays. Cosmic spikes can disturb or even destroy the meaningful chemical information expressed by normal Raman spectra. In online monitoring, some cosmic spikes have intensity and bandwidth similar to normal Raman peaks of chemical components when a low resolution and cost-effective Raman instrument is used. Moreover, the online Raman spectra always contain variations of strong Raman peaks and fluorescence. Current spike-removal methods seem to have difficulty detecting and recovering cosmic spikes in these online Raman spectra. Therefore, an improved algorithm is proposed. In this algorithm, a new scheme composed of intensity identification and local moving window correlation analysis is introduced for cosmic spike detection; intensity identification based on derivative spectra and local linear fitting approximation are used for the recovery of cosmic spikes. The algorithm is proved to be simple and effective and has been applied in an online Raman instrument installed at a continuous catalytic reforming unit in a refinery.
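
    As a simplified illustration of the detection-plus-recovery structure described above, the sketch below flags spikes from the derivative spectrum using a robust noise estimate and recovers them by local linear interpolation; the paper's moving-window correlation test across repeated spectra is omitted:

        import numpy as np

        def remove_spikes(spectrum, window=5, k=6.0):
            """Flag cosmic spikes via the first-derivative spectrum, then
            replace them by linear interpolation over neighboring points."""
            d = np.diff(spectrum)
            # Robust noise scale from the median absolute deviation.
            sigma = 1.4826 * np.median(np.abs(d - np.median(d)))
            spike = np.zeros(spectrum.size, dtype=bool)
            spike[1:] = np.abs(d) > k * sigma
            # Grow the mask so both flanks of each spike are replaced.
            for i in np.flatnonzero(spike):
                spike[max(i - window, 0):i + window + 1] = True
            good = ~spike
            x = np.arange(spectrum.size)
            cleaned = spectrum.astype(float).copy()
            cleaned[spike] = np.interp(x[spike], x[good], spectrum[good])
            return cleaned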

  7. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm

    PubMed Central

    Jürgens, Tim

    2016-01-01

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users’ speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception. PMID:27604785

  8. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    PubMed Central

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-01-01

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of breast surface, duct system and terminal ductal lobular units, Cooper’s ligaments, lymphatic and blood vessel systems, pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, judging the quality of mammograms simulated with the new algorithm against mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of the fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Parameters calculated on simulated images, such as the β exponent deduced from the power-law spectral analysis and the fractal dimension, are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The

  9. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    SciTech Connect

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-11-15

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of breast surface, duct system and terminal ductal lobular units, Cooper's ligaments, lymphatic and blood vessel systems, pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, judging the quality of mammograms simulated with the new algorithm against mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of the fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Parameters calculated on simulated images, such as the β exponent deduced from the power-law spectral analysis and the fractal dimension, are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The

  10. 3D resistivity inversion using an improved Genetic Algorithm based on control method of mutation direction

    NASA Astrophysics Data System (ADS)

    Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.

    2012-12-01

    The traditional inversion method is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it linearizes the problem and solves it by iterations. However, its accuracy often depends on the initial model, which can trap the inversion in local optima and even produce a bad result. Non-linear methods are a feasible way to eliminate the dependence on the initial model. However, for large problems such as 3D resistivity inversion, where the inversion parameters exceed a thousand, the main challenges of non-linear methods are premature convergence and quite low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, a smoothness constraint and an inequality constraint are both applied to the objective function, which decreases the degree of non-uniqueness and ill-conditioning. Established measures are adopted to maintain the diversity and stability of the GA, e.g., real coding and adaptive adjustment of the crossover and mutation probabilities. A method for generating an approximately uniform initial population is then proposed, which produces a uniformly distributed initial generation and eliminates the dependence on the initial model. Further, a mutation direction control method is presented based on a joint algorithm in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment, maintaining a better search direction than the traditional GA with uncontrolled mutation; in this way the mutation direction is optimized and the search efficiency is greatly improved. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate the dependence on the initial model.

  11. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) remains the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, an improved wavelet denoising combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, reconstruction quality was compared between the improved wavelet denoising and other approaches (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms with two evaluation measures, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
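
    A minimal sketch of the combination described above, assuming a parallel-beam sinogram with one column per view angle; it soft-thresholds each projection with a db2 decomposition at level 2 (the configuration the experiments favored) and reconstructs with filtered back-projection via scikit-image. The Hann window here stands in for the Hanning filter of the paper:

        import numpy as np
        import pywt
        from skimage.transform import iradon

        def wavelet_fbp(sinogram, theta, wavelet="db2", level=2):
            """Denoise each projection in the wavelet domain with universal
            soft thresholding, then apply filtered back-projection."""
            n = sinogram.shape[0]
            denoised = np.empty_like(sinogram, dtype=float)
            for j in range(sinogram.shape[1]):           # one column per angle
                coeffs = pywt.wavedec(sinogram[:, j], wavelet, level=level)
                # Noise estimate from the finest detail coefficients.
                sigma = np.median(np.abs(coeffs[-1])) / 0.6745
                thr = sigma * np.sqrt(2 * np.log(n))
                coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                        for c in coeffs[1:]]
                denoised[:, j] = pywt.waverec(coeffs, wavelet)[:n]
            return iradon(denoised, theta=theta, filter_name="hann")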

  12. A new algorithm to improve assessment of cortical bone geometry in pQCT.

    PubMed

    Cervinka, Tomas; Sievänen, Harri; Lala, Deena; Cheung, Angela M; Giangregorio, Lora; Hyttinen, Jari

    2015-12-01

    High-resolution peripheral quantitative computed tomography (HR-pQCT) is now considered the leading imaging modality in bone research. However, access to HR-pQCT is limited and image acquisition is mainly constrained to the distal third of appendicular bones. Hence, conventional pQCT is still commonly used despite inaccurate threshold-based segmentation of cortical bone that can compromise the assessment of whole bone strength. Therefore, this study addressed whether an advanced image processing algorithm, called OBS, can enhance cortical bone analysis in pQCT images and provide information similar to HR-pQCT when the same volumes of interest are analyzed. Using pQCT images of the European Forearm Phantom (EFP), and pQCT and HR-pQCT images of the distal tibia from 15 cadavers, we compared the results of the OBS algorithm with those obtained from common pQCT analyses, HR-pQCT manual analysis (considered the gold standard), and the common HR-pQCT dual-threshold technique. We found that the OBS segmentation method applied to pQCT images of the EFP data did not result in any improvement, but it reached performance in cortical bone delineation similar to that of the HR-pQCT image analyses. The assessments of cortical cross-sectional bone area and thickness by the OBS algorithm were overestimated by less than 4%, while area moments of inertia were overestimated by ~5–10%, depending on the reference HR-pQCT analysis method. In conclusion, this study showed that the OBS algorithm performed reasonably well and offers a promising practical tool to enhance the assessment of cortical bone geometry in pQCT.

  13. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    PubMed Central

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process, based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its necessity by comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user positioning.

  14. A small change in the design of a slit bioaerosol impactor significantly improves its collection characteristics.

    PubMed

    Grinshpun, Sergey A; Adhikari, Atin; Cho, Seung-Hyun; Kim, Ki-Youn; Lee, Taekhee; Reponen, Tiina

    2007-08-01

    While several methods are available for bioaerosol monitoring, impaction remains the most common one, particularly for collecting fungal spores. Earlier studies have shown that the collection efficiency of many conventional single-stage bioaerosol impactors falls below 50% for spores with an aerodynamic diameter between 1.7 and 2.5 μm because their cut-off size is 2.5 μm or greater. The cut-off size reduction is primarily done by substantially increasing the sampling flow rate or decreasing the impaction jet size, W, to a fraction of a millimetre, with both measures often impractical to implement. Some success has recently been reported on the utilization of an ultra-low jet-to-plate distance, S (S/W < 0.1), in circular impactors. This paper describes a laboratory evaluation and some field testing of two single-stage, single-nozzle, slit bioaerosol impactors, Allergenco-D and Air-O-Cell, which feature the same jet dimensions and flow rate but have some design configuration differences that were initially thought to be of low significance. The collection efficiency and the spore deposit characteristics were determined in the laboratory using real-time aerosol spectrometry and different microscopic enumeration methods as the test impactors were challenged with the non-biological polydisperse NaCl aerosol and the aerosolized fungal spores of Cladosporium cladosporioides, Aspergillus versicolor, and Penicillium melinii. The tests showed that a relatively small reduction in the jet-to-plate distance of a single-stage, single-nozzle impactor with a tapered inlet nozzle, combined with adding a straight section of sufficient length, can significantly decrease the cut-off size to the level that is sufficient to efficiently collect spores of all fungal species. Furthermore, it appears that the slit jet design may improve the application of partial spore counting methodologies with respect to those applied to circular deposits. Data from a demonstration field study

  15. Globular protein-coated Paclitaxel nanosuspensions: interaction mechanism, direct cytosolic delivery, and significant improvement in pharmacokinetics.

    PubMed

    Li, Yongji; Wu, Zhannan; He, Wei; Qin, Chao; Yao, Jing; Zhou, Jianping; Yin, Lifang

    2015-05-04

    About 40% of marketed drugs and 70-90% of new drug candidates are insoluble in water and therefore poorly bioavailable, which significantly compromises their therapeutic effects. A nanosuspension formulation, achieved by reducing the pure drug particle size down to the sub-micron range, is one of the most promising approaches to overcoming insolubility. However, nanosuspension formulations are subject to instability because of nucleation and particle growth; therefore, a stabilizer needs to be incorporated into the formulation during preparation to suppress the aggregation of drug particles. β-LG, a globular protein, unfolds upon heat-induced denaturation, exposing its hydrophobic area, which allows it to associate with organic particles. PTX, an insoluble drug, is widely used for the clinical treatment of human cancer, but its clinical application is greatly limited by intrinsic defects including poor solubility, adverse side effects, and poor tumor penetration. In this study, we prepared β-LG-stabilized PTX nanosuspensions (PTX-NS) by coating the protein onto nanoscale drug particles, investigated the stabilizing effect of β-LG on PTX-NS, and evaluated its in vitro and in vivo performance. PTX-NS with a diameter of approximately 200 nm was easily prepared. β-LG significantly stabilized PTX-NS via the interaction between the hydrophobic area of the protein and the hydrophobic surface of the drug particles, which resulted in a conformational change of the protein, the loss of both secondary and tertiary structures, and the transition of Trp residues to a less hydrophobic environment. Importantly, unlike other conventional nanoparticles, PTX-NS could directly translocate across the membrane into the cytosol in an energy-independent manner, without entrapment within the endosomal-lysosomal system. Moreover, compared with Taxol, PTX-NS increased AUC and Cmax by 26- and 16-fold, respectively.

  16. Using gas modifiers to significantly improve sensitivity and selectivity in a cylindrical FAIMS device.

    PubMed

    Purves, Randy W; Ozog, Allison R; Ambrose, Stephen J; Prasad, Satendra; Belford, Michael; Dunyach, Jean-Jacques

    2014-07-01

    Recent reports describing enhanced performance when using gas additives in a DMS device (planar electrodes) have indicated that comparable benefits are not attainable using FAIMS (cylindrical electrodes), owing to the non-homogeneous electric fields within the analyzer region. In this study, a FAIMS system (having cylindrical electrodes) was modified to allow for controlled delivery of gas additives. An experiment was carried out that illustrates the important distinction between gas modifiers present as unregulated contaminants and modifiers added in a controlled manner. The effect of contamination was simulated by adjusting the ESI needle position to promote incomplete desolvation, thereby permitting ESI solvent vapor into the FAIMS analyzer region, causing signal instability and irreproducible CV values. However, by actively controlling the delivery of the gas modifier, reproducible CV spectra were obtained. The effects of adding different gas modifiers were examined using 15 positive ions having mass-to-charge (m/z) values between 90 and 734. Significant improvements in peak capacity and increases in ion transmission were readily attained by adding acetonitrile vapor, even at trace levels (≤0.1%). Increases in signal intensity were greatest for the low m/z ions; for the six lowest molecular weight species, signal intensities increased by ∼10- to over 100-fold compared with using nitrogen without gas additives, resulting in equivalent or better signal intensities compared with ESI without FAIMS. These results confirm that analytical benefits derived from the addition of gas modifiers reported with a uniform electric field (DMS) also are observed using a non-homogeneous electric field (FAIMS) in the analyzer region.

  17. Significant improvement of Serratia marcescens lipase fermentation, by optimizing medium, induction, and oxygen supply.

    PubMed

    Long, Zhang-De; Xu, Jian-He; Pan, Jiang

    2007-08-01

    Production of an extracellular lipase from Serratia marcescens ECU1010, an industrially important biocatalyst for the stereospecific synthesis of a Diltiazem precursor, was carefully optimized in both shake flasks and a fermenter, using Tween-80 as the enzyme inducer. Dextrin and beef extract combined with ammonium sulfate were found to be the best carbon and nitrogen sources, respectively. With the increase of Tween-80 from 0 to 10 g l-1, lipase production was greatly enhanced from merely 250 U l-1 to a maximum of 3,340 U l-1, giving the highest lipase yield of ca. 640 U g-1 dry cell weight (DCW), although the maximum biomass (6.0 g DCW l-1) was achieved at 15 g l-1 of Tween-80. When the medium loading in shake flasks was reduced from 20 to 10% (v/v), lipase production was significantly enhanced. An increase in shaking speed also improved lipase production, although cell growth was slightly repressed, suggesting that the increase of dissolved oxygen (DO) concentration contributed to the enhancement of lipase yield. When the lipase fermentation was carried out in a 5-l fermenter, lipase production reached a new maximum of 11,060 U l-1 by simply raising the aeration rate from 0.5 to 1.0 vvm, while keeping the dissolved oxygen above 20% saturation via intermittent adjustment of the agitation speed (≥400 rpm), in the presence of a relatively low concentration (2 g l-1) of Tween-80 to prevent a potential foaming problem, which easily occurs in an intensively aerated fermenter.

  18. Significant contribution of realistic vegetation representation to improved simulation and prediction of climate anomalies over land

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Doblas-Reyes, Francisco; van den Hurk, Bart; Miller, Paul

    2015-04-01

    The EC-Earth earth system model has recently been extended to include vegetation dynamics through coupling with the LPJ-Guess model. In its original formulation, the coupling between atmosphere and vegetation variability is operated simply through the vegetation Leaf Area Index (LAI), which affects climate only by changing the vegetation physiological resistance to evapotranspiration. This coupling, with no implied change of the vegetation fractional coverage, has been reported to have a weak effect on the surface climate modeled by EC-Earth (e.g., Weiss et al. 2012). The effective sub-grid vegetation fractional coverage can vary seasonally and at interannual time-scales as a function of leaf-canopy growth, phenology, and senescence, and therefore affects biophysical parameters such as surface roughness, albedo, and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation density on the LAI, based on a Lambert-Beer formulation. By comparing historical 20th century simulations and retrospective forecasts performed with the new effective fractional-coverage parameterization against reference simulations using the original constant vegetation fraction, we show an increased effect of vegetation on the EC-Earth surface climate. The analysis shows considerable sensitivity of the EC-Earth surface climate at seasonal to interannual time-scales to the variability of the vegetation effective fractional coverage. Particularly large effects appear over boreal winter middle-to-high latitudes, where the cooling effect of the new parameterization corrects the warm biases of the control simulations over land. For boreal winter, the realistic representation of vegetation variability leads to a significant improvement of the skill in predicting surface climate over land at seasonal time-scales. A potential predictability experiment extended to longer time-scales also indicates the
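
    The abstract does not give the exact formula, but a Lambert-Beer dependence of effective cover on LAI is conventionally of the saturating-exponential form sketched below; the extinction coefficient k ≈ 0.5 and the scaling by a maximum vegetation fraction are illustrative assumptions, not the EC-Earth implementation:

        import numpy as np

        def effective_veg_fraction(lai, cmax, k=0.5):
            """Effective sub-grid vegetation cover from LAI via a
            Lambert-Beer relation: cover saturates exponentially as the
            canopy fills in. cmax is the grid cell's maximum vegetation
            fraction; k is an assumed extinction coefficient."""
            return cmax * (1.0 - np.exp(-k * np.asarray(lai)))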

  19. Using Gas Modifiers to Significantly Improve Sensitivity and Selectivity in a Cylindrical FAIMS Device

    NASA Astrophysics Data System (ADS)

    Purves, Randy W.; Ozog, Allison R.; Ambrose, Stephen J.; Prasad, Satendra; Belford, Michael; Dunyach, Jean-Jacques

    2014-07-01

    Recent reports describing enhanced performance when using gas additives in a DMS device (planar electrodes) have indicated that comparable benefits are not attainable using FAIMS (cylindrical electrodes), owing to the non-homogeneous electric fields within the analyzer region. In this study, a FAIMS system (having cylindrical electrodes) was modified to allow for controlled delivery of gas additives. An experiment was carried out that illustrates the important distinction between gas modifiers present as unregulated contaminants and modifiers added in a controlled manner. The effect of contamination was simulated by adjusting the ESI needle position to promote incomplete desolvation, thereby permitting ESI solvent vapor into the FAIMS analyzer region, causing signal instability and irreproducible CV values. However, by actively controlling the delivery of the gas modifier, reproducible CV spectra were obtained. The effects of adding different gas modifiers were examined using 15 positive ions having mass-to-charge (m/z) values between 90 and 734. Significant improvements in peak capacity and increases in ion transmission were readily attained by adding acetonitrile vapor, even at trace levels (≤0.1%). Increases in signal intensity were greatest for the low m/z ions; for the six lowest molecular weight species, signal intensities increased by ˜10- to over 100-fold compared with using nitrogen without gas additives, resulting in equivalent or better signal intensities compared with ESI without FAIMS. These results confirm that analytical benefits derived from the addition of gas modifiers reported with a uniform electric field (DMS) also are observed using a non-homogeneous electric field (FAIMS) in the analyzer region.

  20. Incorporating the human gene annotations in different databases significantly improved transcriptomic and genetic analyses.

    PubMed

    Chen, Geng; Wang, Charles; Shi, Leming; Qu, Xiongfei; Chen, Jiwei; Yang, Jianmin; Shi, Caiping; Chen, Long; Zhou, Peiying; Ning, Baitang; Tong, Weida; Shi, Tieliu

    2013-04-01

    Human gene annotation is crucial for conducting transcriptomic and genetic studies; however, the impacts of human gene annotations in diverse databases on related studies have been less evaluated. To enable full use of various human annotation resources and better understand the human transcriptome, here we systematically compare the human annotations present in RefSeq, Ensembl (GENCODE), and AceView on diverse transcriptomic and genetic analyses. We found that the human gene annotations in the three databases are far from complete. Although Ensembl and AceView annotated more genes than RefSeq, more than 15,800 genes from Ensembl (or AceView) are within the intergenic and intronic regions of AceView (or Ensembl) annotation. The human transcriptome annotations in RefSeq, Ensembl, and AceView had distinct effects on short-read mapping, gene and isoform expression profiling, and differential expression calling. Furthermore, our findings indicate that the integrated annotation of these databases can obtain a more complete gene set and significantly enhance those transcriptomic analyses. We also observed that many more known SNPs were located within genes annotated in Ensembl and AceView than in RefSeq. In particular, 1033 of 3041 trait/disease-associated SNPs involved in about 200 human traits/diseases that were previously reported to be in RefSeq intergenic regions could be relocated within Ensembl and AceView genes. Our findings illustrate that a more complete transcriptome generated by incorporating human gene annotations in diverse databases can strikingly improve the overall results of transcriptomic and genetic studies.

  1. Registration of the Cone Beam CT and Blue-Ray Scanned Dental Model Based on the Improved ICP Algorithm

    PubMed Central

    Li, Zhenhua; Xu, Songsong; Guo, Xiaoyan

    2014-01-01

    Multimodality image registration and fusion have complementary significance for guiding dental implant surgery. To meet the needs of registering images of different resolutions, we develop an improved Iterative Closest Point (ICP) algorithm that focuses on the registration of Cone Beam Computed Tomography (CT) images and high-resolution blue-light scanner images. The proposed algorithm includes two major phases, coarse and precise registration. Firstly, to reduce the matching interference of human subjective factors, we extract feature points based on curvature characteristics and use an improved three-point translational transformation method to realize coarse registration. Then, the feature point set and the reference point set, obtained by the initial registration transformation, are processed in the precise registration step. Even with unsatisfactory initial values, this two-step registration method can guarantee global convergence and convergence precision. Experimental results demonstrate that the method successfully registers the Cone Beam CT dental model and the blue-light scanner model with high accuracy, so it can provide a research foundation for relevant software development concerning the registration of multimodality medical data. PMID:24511309
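
    The core of each precise-registration iteration is the least-squares rigid transform between matched point sets; a standard SVD-based solution is sketched below, with the correspondences (rows of P matched to rows of Q) assumed to come from the curvature-based feature extraction and closest-point search described above:

        import numpy as np

        def best_fit_transform(P, Q):
            """Least-squares rigid transform (R, t) mapping point set P onto
            its matched correspondences Q; the core ICP iteration step."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)        # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = cQ - R @ cP
            return R, t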

  2. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimal function expression that describes the behavior of the time series. In order to deal with the weaknesses associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the optimization performance; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and that the forecasting precision in the presence of moderate noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.

  3. An improved teaching-learning based robust edge detection algorithm for noisy images.

    PubMed

    Thirumavalavan, Sasirooba; Jayaraman, Sasikala

    2016-11-01

    This paper presents an improved Teaching Learning Based Optimization (TLO) and a methodology for obtaining the edge maps of noisy real-life digital images. TLO is a population-based algorithm that simulates the teaching-learning mechanism in classrooms, comprising two phases of teaching and learning. The 'Teaching Phase' represents learning from the teacher and the 'Learning Phase' indicates learning through interaction between learners. This paper introduces a third phase, denoted the "Avoiding Phase", that keeps the learners away from the worst students with a view to exploring the problem space more effectively and escaping from sub-optimal solutions. The improved TLO (ITLO) explores the solution space and provides the global best solution. The edge detection problem is formulated as an optimization problem and solved using the ITLO. The results on real-life and medical images illustrate the performance of the developed method.
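
    A minimal sketch of one ITLO generation under stated assumptions: the teaching and learning moves follow standard TLO, and the added avoiding move pushes candidates away from the worst learner. The abstract does not give the exact form of the Avoiding Phase update, so this is one plausible realization, not the authors' formula:

        import numpy as np

        def itlo_step(pop, fitness, rng):
            """One generation of the improved TLO: teaching, learning, and
            the added 'avoiding' move away from the worst learner.
            pop: (n, d) array of learners; fitness: function to minimize."""
            scores = np.array([fitness(x) for x in pop])
            teacher, worst = pop[scores.argmin()], pop[scores.argmax()]
            mean = pop.mean(axis=0)
            new = pop.copy()
            for i, x in enumerate(pop):
                tf = rng.integers(1, 3)                  # teaching factor 1 or 2
                cand = x + rng.random(x.size) * (teacher - tf * mean)
                j = rng.integers(len(pop))               # random peer to learn from
                if fitness(pop[j]) < fitness(x):
                    cand = cand + rng.random(x.size) * (pop[j] - x)
                else:
                    cand = cand + rng.random(x.size) * (x - pop[j])
                cand = cand + rng.random(x.size) * (cand - worst)  # avoiding move
                if fitness(cand) < scores[i]:            # greedy acceptance
                    new[i] = cand
            return new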

  4. A hybrid method based on Band Pass Filter and Correlation Algorithm to improve debris sensor capacity

    NASA Astrophysics Data System (ADS)

    Hong, Wei; Wang, Shaoping; Liu, Haokuo; Tomovic, Mileta M.; Chao, Zhang

    2017-01-01

    Inductive debris detection is an effective method for monitoring mechanical wear and can be used to prevent serious accidents. However, debris detection during the early phase of mechanical wear, when small debris (<100 μm) is generated, requires a sensor with high sensitivity relative to the background noise. In order to detect smaller debris with existing sensors, this paper presents a hybrid method that combines a Band Pass Filter and a Correlation Algorithm to improve the sensor signal-to-noise ratio (SNR). The simulation results indicate that the SNR is improved by a factor of at least 2.67 after signal processing; in other words, the method ensures debris identification when the sensor's SNR is greater than -3 dB, so smaller debris can be detected at the same SNR. Finally, the effectiveness of the proposed method is experimentally validated.
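
    A sketch of the hybrid signal chain, assuming a sampled sensor signal, a known debris-pulse template, and an illustrative pass band (the real band and pulse shape would come from the sensor's characteristics); peaks in the correlation output mark debris events:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def detect_debris(signal, template, fs, band=(1000.0, 5000.0)):
            """Band-pass filtering followed by correlation with the expected
            debris pulse shape; returns the filtered trace and correlation."""
            nyq = fs / 2.0
            b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
            filtered = filtfilt(b, a, signal)   # zero-phase band-pass filter
            # Cross-correlate with the debris pulse template; matched
            # filtering concentrates pulse energy above the residual noise.
            corr = np.correlate(filtered, template, mode="same")
            return filtered, corr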

  5. Three-Dimensional Path Planning and Guidance of Leg Vascular Based on Improved Ant Colony Algorithm in Augmented Reality.

    PubMed

    Gao, Ming-ke; Chen, Yi-min; Liu, Quan; Huang, Chen; Li, Ze-yu; Zhang, Dian-hua

    2015-11-01

    Preoperative path planning plays a critical role in vascular access surgery, which is highly demanding and requires long training periods as well as precise operation. Because surgeons differ in skill level, large blood vessels are usually chosen for the procedure and other potentially optimal paths are not considered. Moreover, patients and surgeons suffer from X-ray radiation during the surgical procedure. This study proposes an improved ant colony algorithm to plan an optimal three-dimensional vascular path with overall consideration of factors such as catheter diameter, vascular length and diameter, and curvature and torsion. To protect the doctor and patient from long-term X-ray exposure, augmented reality technology is adopted to register the reconstructed vascular model with the physical model, the catheter is located by an electromagnetic tracking system, and a Head Mounted Display shows the planned path in real time and monitors the catheter push procedure. The experiments demonstrate the reasonableness of the preoperative path planning and prove the reliability of the algorithm. The augmented reality experiment displays the vascular phantom model, the planned path, and the catheter trajectory accurately and in real time, proving the feasibility of this method. The paper presents a useful and feasible surgical scheme, based on the improved ant colony algorithm, for planning three-dimensional vascular paths in augmented reality. The study has practical guiding significance in preoperative path planning, intraoperative catheter guidance, and surgical training, providing a theoretical method of path planning for vascular access surgery. It is a safe and reliable path planning approach with practical reference value.

  6. Improved Temperature Sounding and Quality Control Methodology Using AIRS/AMSU Data: The AIRS Science Team Version 5 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky

    2009-01-01

    This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R̂(sub i) for all channels. This new approach allows for the generation of more accurate values of R̂(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of the AIRS Version 4, Version 5, and Version 5 AO for the single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.

  7. Acute Myocardial Infarction Complicated by Cardiogenic Shock: An Algorithm-Based Extracorporeal Membrane Oxygenation Program Can Improve Clinical Outcomes.

    PubMed

    Unai, Shinya; Tanaka, Daizo; Ruggiero, Nicholas; Hirose, Hitoshi; Cavarocchi, Nicholas C

    2016-03-01

    Extracorporeal membrane oxygenation (ECMO) in our institution resulted in near total mortality prior to the establishment of an algorithm-based program in July 2010. We hypothesized that an algorithm-based ECMO program improves the outcome of patients with acute myocardial infarction complicated with cardiogenic shock. Between March 2003 and July 2013, 29 patients underwent emergent catheterization for acute myocardial infarction due to left main or proximal left anterior descending artery occlusion complicated with cardiogenic shock (defined as systolic blood pressure <90 mm Hg despite multiple inotropes, with or without intra-aortic balloon pump, lactic acidosis). Of 29 patients, 15 patients were treated before July 2010 (Group 1, old program), and 14 patients were treated after July 2010 (Group 2, new program). There were no significant differences in the baseline characteristics, including age, sex, coronary risk factors, and left ventricular ejection fraction between the two groups. Cardiopulmonary resuscitation prior to ECMO was performed in two cases (13%) in Group 1 and four cases (29%) in Group 2. ECMO support was performed in one case (6.7%) in Group 1 and six cases (43%) in Group 2. The 30-day survival of Group 1 versus Group 2 was 40 versus 79% (P = 0.03), and 1-year survival rate was 20 versus 56% (P = 0.01). The survival rate for patients who underwent ECMO was 0% in Group 1 versus 83% in Group 2 (P = 0.09). In Group 2, the mean duration on ECMO was 9.8 ± 5.9 days. Of the six patients who required ECMO in Group 2, 100% were successfully weaned off ECMO or were bridged to ventricular assist device implantation. Initiation of an algorithm-based ECMO program improved the outcomes in patients with acute myocardial infarction complicated by cardiogenic shock.

  8. Pro: benchmarking is the absolute prerequisite for timely and significant business process improvement.

    PubMed

    Hill, Bradford T; Workman, Ronald

    2006-11-28

    Benchmarking in industry has been around for nearly a century, helping companies in nearly every sector imaginable improve their overall performance. Benchmarking's importance in health care, and specifically the clinical laboratory, can be summed up in one simple phrase--"If you cannot measure it, you cannot improve it." Here is why.

  9. Optimal algorithm to improve the calculation accuracy of energy deposition for betavoltaic MEMS batteries design

    NASA Astrophysics Data System (ADS)

    Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun

    2009-11-01

    Aiming to improve the accuracy of calculating the energy deposition of electrons traveling in solids, a method we call the optimal subdivision number searching algorithm is proposed. When treating the energy deposition of electrons traveling in solids, large calculation errors are found, which result from the dividing and summing used to evaluate the integral. Based on the results of former research, we propose a further subdividing and summing method. For β particles with energies spanning the entire spectrum, the energy data are restricted to integer multiples of 1 keV, and the subdivision number is varied from 1 to 30; collections of energy deposition calculation errors are thereby obtained. Searching for the minimum error in these collections yields the corresponding energy and subdivision number pairs, and thus the optimal subdivision number. The method is applied to four solid materials, Al, Si, Ni, and Au, to calculate energy deposition. The results show that the calculation error is reduced by one order of magnitude with the improved algorithm.

  10. An improved generalized differential evolution algorithm for multi-objective reactive power dispatch

    NASA Astrophysics Data System (ADS)

    Ramesh, S.; Kannan, S.; Baskar, S.

    2012-04-01

    An improved multi-objective generalized differential evolution (I-GDE3) approach to solve optimal reactive power dispatch (ORPD) with multiple competing objectives is proposed in this article. The objective functions are minimization of real power loss and improvement of the bus voltage profile. To maintain good diversity, the concepts of simulated binary crossover (SBX) based recombination and dynamic crowding distance (DCD) are implemented in the GDE3 algorithm. I-GDE3 obtains a Pareto-optimal solution set for ORPD that is impervious to load drifts and perturbations. The performance of the proposed approach is tested on the standard IEEE 118-bus and IEEE 300-bus test systems, and the results demonstrate the capability of the I-GDE3 algorithm to generate diverse and well-distributed Pareto-optimal solutions that are less sensitive to various loading conditions and load perturbations. The performance of I-GDE3 is compared using multi-objective performance measures, namely span, hyper-volume, and C-measure. The results show the effectiveness of I-GDE3 and confirm its potential to solve the multi-objective RPD problem.

  11. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, more accurate values of the Lyapunov exponents can be obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results can overflow, become unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result will appear as an error message of NaN or Inf; (2) if the error message of NaN or Inf does not appear, then with increasing iterations all computed Lyapunov exponents drift toward the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are obviously also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms based on QR orthogonal decomposition and SVD orthogonal decomposition approaches so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
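
    The standard QR-based remedy re-orthonormalizes the propagated tangent vectors at every step, so the exponents are accumulated from the diagonal of R instead of from an overflowing product of Jacobians. A self-contained sketch for the Hénon map, whose spectrum is known to be roughly (+0.42, -1.62):

        import numpy as np

        def lyapunov_qr(f, jac, x0, n_iter=10000, n_skip=100):
            """Lyapunov spectrum of a discrete-time map by QR
            re-orthonormalization, avoiding overflow and the collapse of
            all exponents onto the largest one noted above."""
            x = np.asarray(x0, dtype=float)
            Q = np.eye(x.size)
            lya = np.zeros(x.size)
            for i in range(n_iter + n_skip):
                Q, R = np.linalg.qr(jac(x) @ Q)   # re-orthonormalize tangents
                if i >= n_skip:                   # skip the transient
                    lya += np.log(np.abs(np.diag(R)))
                x = f(x)
            return lya / n_iter

        # Example: Henon map, expected exponents ~ (+0.42, -1.62).
        a, b = 1.4, 0.3
        f = lambda v: np.array([1 - a * v[0] ** 2 + v[1], b * v[0]])
        jac = lambda v: np.array([[-2 * a * v[0], 1.0], [b, 0.0]])
        print(lyapunov_qr(f, jac, [0.1, 0.1]))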

  12. Dual-wavelength retinal images denoising algorithm for improving the accuracy of oxygen saturation calculation

    NASA Astrophysics Data System (ADS)

    Xian, Yong-Li; Dai, Yun; Gao, Chun-Ming; Du, Rui

    2017-01-01

    Noninvasive measurement of hemoglobin oxygen saturation (SO2) in retinal vessels is based on spectrophotometry and the spectral absorption characteristics of tissue. Retinal images at 570 and 600 nm are simultaneously captured by a dual-wavelength retinal oximeter based on a fundus camera, and SO2 is measured after vessel segmentation, image registration, and calculation of the optical density ratio of the two images. However, image noise can dramatically affect subsequent image processing and the accuracy of the SO2 calculation, a problem that remains to be addressed. The purpose of this study was to improve image quality and SO2 calculation accuracy through noise analysis and a denoising algorithm for the dual-wavelength images. First, noise parameters were estimated with a mixed Poisson-Gaussian (MPG) noise model. Second, an MPG denoising algorithm which we call variance stabilizing transform (VST) + dual-domain image denoising (DDID) was proposed, based on the VST and an improved dual-domain filter. The results show that VST + DDID effectively removes MPG noise while preserving image edge details, and that it outperforms VST + block-matching and three-dimensional filtering, especially in preserving low-contrast details. Subsequent simulation and analysis indicate that MPG noise in the retinal images can lead to erroneously low SO2 measurements, and that the denoised images provide more accurate grayscale values for retinal oximetry.
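
    The VST step for mixed Poisson-Gaussian noise is conventionally a generalized Anscombe transform, which maps MPG noise with estimated gain and Gaussian standard deviation to approximately unit-variance Gaussian noise so that a Gaussian denoiser such as the dual-domain filter can be applied. A sketch of that forward transform (the DDID stage and the exact inverse transform are omitted, and the parameters are assumed to come from the MPG estimation step):

        import numpy as np

        def generalized_anscombe(img, gain, sigma, mu=0.0):
            """Generalized Anscombe VST: stabilizes mixed Poisson-Gaussian
            noise (Poisson gain `gain`, Gaussian std `sigma`, offset `mu`)
            to approximately unit variance."""
            arg = gain * img + (3.0 / 8.0) * gain ** 2 + sigma ** 2 - gain * mu
            return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))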

  13. Waste Minimization Improvements Achieved Through Six Sigma Analysis Result In Significant Cost Savings

    SciTech Connect

    Mousseau, Jeffrey, D.; Jansen, John, R.; Janke, David, H.; Plowman, Catherine, M.

    2003-02-26

    Improved waste minimization practices at the Department of Energy's (DOE) Idaho National Engineering and Environmental Laboratory (INEEL) are leading to a 15% reduction in the generation of hazardous and radioactive waste. Bechtel BWXT Idaho, LLC (BBWI), the prime management and operations contractor at the INEEL, applied the Six Sigma improvement process to the INEEL Waste Minimization Program to review existing processes and define opportunities for improvement. Our Six Sigma analysis team, composed of an executive champion, a process owner, a black belt and a yellow belt, and technical and business team members, used this statistics-based process approach to analyze work processes and produced ten recommendations for improvement. Recommendations ranged from waste generator financial accountability for newly generated waste to enhanced employee recognition programs for waste minimization efforts. These improvements have now been implemented to reduce waste generation rates and are producing positive results.

  14. An improved optimization algorithm and Bayes factor termination criterion for sequential projection pursuit

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Jarman, Kristin H.; Harvey, Scott D.; Posse, Christian; Wright, Bob W.

    2005-05-28

    A fundamental problem in the analysis of highly multivariate spectral or chromatographic data is reduction of dimensionality. Principal components analysis (PCA), concerned with explaining the variance-covariance structure of the data, is a commonly used approach to dimension reduction. Recently an attractive alternative to PCA, sequential projection pursuit (SPP), has been introduced. Designed to elicit clustering tendencies in the data, SPP may be more appropriate when performing clustering or classification analysis. However, the existing genetic algorithm (GA) implementation of SPP has two shortcomings: computation time and inability to determine the number of factors necessary to explain the majority of the structure in the data. We address both shortcomings. First, we introduce a new SPP algorithm, a random scan sampling algorithm (RSSA), that significantly reduces computation time. We compare the computational burden of the RSSA and GA implementations for SPP on a dataset containing Raman spectra of twelve organic compounds. Second, we propose a Bayes factor criterion (BFC) as an effective measure for selecting the number of factors needed to explain the majority of the structure in the data. We compare SPP to PCA on two datasets varying in type, size, and difficulty; in both cases SPP achieves higher accuracy with a smaller number of latent variables.

  15. All-digital demodulation system of interferometric fiber optic sensors using an improved PGC algorithm based on fundamental frequency mixing

    NASA Astrophysics Data System (ADS)

    Zhang, Ai-ling; Wang, Kai-han; Zhang, Shuai; Wang, Yan

    2015-05-01

    We present an all-digital demodulation system for interferometric fiber optic sensors based on an improved arctangent-differential-self-multiplying (arctan-DSM) algorithm. The total harmonic distortion (THD) and the light intensity disturbance (LID) are suppressed as effectively as in the traditional arctan-DSM algorithm. Moreover, the minimum sampling frequency is reduced by introducing an anti-aliasing filter, so the system's memory footprint is reduced. Simulations show that the improved algorithm can correctly demodulate cosine and chirp signals at a lower sampling frequency.

  16. A Method for Streamlining and Assessing Sound Velocity Profiles Based on Improved D-P Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, D.; WU, Z. Y.; Zhou, J.

    2015-12-01

    A multi-beam system transmits sound waves and receives the round-trip time of their reflection or scattering, making it possible to determine the depth and coordinates of the detected targets from the sound velocity profile (SVP) based on Snell's Law. The SVP is measured by a dedicated profiling device. Because modern devices sample at a high rate, the operational time of ray tracing and beam footprint reduction increases, lowering overall efficiency. To improve the timeliness of multi-beam surveys and data processing, redundant points in the original SVP must be screened out while the errors introduced by streamlining the SVP are evaluated and controlled. We present a new streamlining and evaluation method based on the Maximum Offset of sound Velocity (MOV) algorithm. Given measured SVP data, the method selects sound velocity data points by calculating the maximum offset in the sound-velocity dimension using an improved Douglas-Peucker algorithm to streamline the SVP (Fig. 1). The method comprises two modules (Fig. 2): a streamlining module, whose core is the MOV algorithm, and an evaluation module that analyzes the accuracy of the multi-beam sounding data processed with the streamlined SVP. To assess that accuracy, we use ray tracing and percentage error analysis to evaluate the sounding data both before and after streamlining the SVP (Fig. 3). By automatically optimizing the threshold, the reduction rate of sound velocity profile data can reach over 90% and the standard deviation percentage error of sounding data can be controlled to within 0.1% (Fig. 4). The optimized sound velocity profile data improved the operational efficiency of the multi-beam survey and data post
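
    The improved Douglas-Peucker idea at the heart of the MOV criterion can be sketched as follows. This Python fragment is a hedged illustration (function and parameter names are ours, not the authors'): it recursively keeps the profile point with the maximum sound-velocity offset from the straight line between the segment endpoints until all offsets fall below a tolerance.

        import numpy as np

        def streamline_svp(depth, velocity, tol):
            # Douglas-Peucker-style thinning, measuring offsets in the
            # sound-velocity dimension only (the MOV criterion).
            # Assumes depth is strictly increasing.
            depth = np.asarray(depth, float)
            velocity = np.asarray(velocity, float)

            def recurse(i, j, keep):
                if j <= i + 1:
                    return
                v_line = np.interp(depth[i + 1:j], [depth[i], depth[j]],
                                   [velocity[i], velocity[j]])
                offsets = np.abs(velocity[i + 1:j] - v_line)
                k = int(np.argmax(offsets))
                if offsets[k] > tol:
                    m = i + 1 + k
                    keep.add(m)
                    recurse(i, m, keep)
                    recurse(m, j, keep)

            keep = {0, len(depth) - 1}
            recurse(0, len(depth) - 1, keep)
            idx = sorted(keep)
            return depth[idx], velocity[idx]

    In practice the tolerance would be tuned automatically, as the abstract describes, until the sounding-error statistics stay within the 0.1% target.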

  17. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    In order to improve the precision of an optical-electric tracking device, an improved design based on MEMS is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time series analysis of random sequences, an AR model of the gyro random error is established, and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead compensation and feed-forward links added to reduce the response lag to angle inputs: the feed-forward link lets the output follow the input closely, while the lead compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) are used to monitor the servo motor state in real time; the module gathers video signals and sends them to the host computer, which displays the motor's running state in the Visual Basic 6.0 window. The main error sources are also analyzed in detail: quantitative analysis of the errors contributed by bandwidth and by the gyro sensor makes the proportion of each error in the total error budget more intuitive and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
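
    A minimal version of the drift-filtering step reads as follows. This sketch assumes a first-order AR model of the gyro random error, x_k = phi * x_{k-1} + w_k, with scalar noise variances; the model order and parameter values used in the paper may differ.

        import numpy as np

        def kalman_ar1(z, phi, q, r):
            # z: raw gyro samples; phi: AR(1) coefficient fitted offline;
            # q, r: process and measurement noise variances (illustrative).
            x, p = 0.0, 1.0
            out = np.empty(len(z))
            for k, zk in enumerate(z):
                x, p = phi * x, phi * phi * p + q   # predict
                kg = p / (p + r)                    # Kalman gain
                x = x + kg * (zk - x)               # update with measurement
                p = (1.0 - kg) * p
                out[k] = x
            return out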

  18. Meta-analysis of randomized controlled trials reveals an improved clinical outcome of using genotype plus clinical algorithm for warfarin dosing.

    PubMed

    Liao, Zhenqi; Feng, Shaoguang; Ling, Peng; Zhang, Guoqing

    2015-02-01

    Previous studies have raised interest in using the genotyping of CYP2C9 and VKORC1 to guide warfarin dosing. However, there is a lack of solid evidence that a genotype plus clinical algorithm provides better clinical outcomes than a clinical algorithm alone. The results of recently reported clinical trials are contradictory and need to be systematically evaluated. In this study, we aimed to assess whether a genotype plus clinical algorithm for warfarin is superior to a clinical algorithm alone through a meta-analysis of randomized controlled trials (RCTs). All relevant studies from PubMed and reference lists from Jan 1, 1995 to Jan 13, 2014 were extracted and screened. Eligible studies were randomized trials that compared a clinical plus pharmacogenetic algorithm group with a clinical-algorithm-only group in adult (≥18 years) patients with conditions requiring warfarin use. We used fixed-effect models to calculate the mean difference or the risk ratios (RRs) and 95% CIs from the extracted data. Statistical heterogeneity was quantified using I². The percentage of time within the therapeutic INR range was considered the primary clinical outcome. The initial search strategy identified 50 citations, of which 7 trials were eligible. These seven trials enrolled 1,910 participants: 960 patients who received genotype plus clinical algorithm warfarin dosing and 950 patients who received the clinical algorithm only. We found that the percentage of time within the therapeutic INR range was improved in the genotype-guided group compared with the standard group in the RCTs when the initial standard dose was fixed (95% CI 0.09-0.40; I² = 47.8%). However, for the studies using non-fixed initial doses, the genotype-guided group failed to exhibit a statistically significant improvement over the standard group. No significant difference was observed in the incidence of adverse events (RR 0.94, 95% CI 0.84-1.04; I² = 0%, p
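
    For reference, the fixed-effect pooling behind such estimates is a short calculation. The sketch below shows standard inverse-variance weighting and the I² heterogeneity statistic quoted above; it is generic, not the authors' code, and ratio measures such as RRs would be pooled on the log scale.

        import numpy as np

        def fixed_effect_pool(effects, ses):
            # effects: per-trial effect estimates; ses: their standard errors.
            effects = np.asarray(effects, float)
            w = 1.0 / np.asarray(ses, float) ** 2      # inverse-variance weights
            pooled = np.sum(w * effects) / np.sum(w)
            se = 1.0 / np.sqrt(np.sum(w))
            ci = (pooled - 1.96 * se, pooled + 1.96 * se)
            q = np.sum(w * (effects - pooled) ** 2)    # Cochran's Q
            i2 = 100.0 * max(0.0, (q - (len(effects) - 1)) / q) if q > 0 else 0.0
            return pooled, ci, i2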

  19. Residual Elimination Algorithm Enhancements to Improve Foot Motion Tracking During Forward Dynamic Simulations of Gait.

    PubMed

    Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J

    2015-11-01

    Patient-specific gait optimizations capable of predicting post-treatment changes in joint motions and loads could improve treatment design for gait-related disorders. To maximize potential clinical utility, such optimizations should utilize full-body three-dimensional patient-specific musculoskeletal models, generate dynamically consistent gait motions that reproduce pretreatment marker measurements closely, and achieve accurate foot motion tracking to permit deformable foot-ground contact modeling. This study enhances an existing residual elimination algorithm (REA; Remy, C. D., and Thelen, D. G., 2009, “Optimal Estimation of Dynamically Consistent Kinematics and Kinetics for Forward Dynamic Simulation of Gait,” ASME J. Biomech. Eng., 131(3), p. 031005) to achieve all three requirements within a single gait optimization framework. We investigated four primary enhancements to the original REA: (1) manual modification of tracked marker weights, (2) automatic modification of tracked joint acceleration curves, (3) automatic modification of algorithm feedback gains, and (4) automatic calibration of model joint and inertial parameter values. We evaluated the enhanced REA using a full-body three-dimensional dynamic skeletal model and movement data collected from a subject who performed four distinct gait patterns: walking, marching, running, and bounding. When all four enhancements were implemented together, the enhanced REA achieved dynamic consistency with lower marker tracking errors for all segments, especially the feet (mean root-mean-square (RMS) errors of 3.1 versus 18.4 mm), compared to the original REA. When the enhancements were implemented separately and in combinations, the most important one was automatic modification of tracked joint acceleration curves, while the least important enhancement was automatic modification of algorithm feedback gains. The enhanced REA provides a framework for future gait optimization studies that seek to predict subject

  20. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms

    PubMed Central

    Helms, Lucas; Clune, Jeff

    2017-01-01

    Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002
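
    The Auto-Switch idea reduces to a stagnation test on the best-fitness trace. A hedged sketch follows; the window size and threshold are illustrative, not the paper's values.

        def should_switch(best_so_far, window=50, eps=1e-6):
            # best_so_far: best fitness recorded at each generation
            # (non-decreasing). Switch from indirect to direct encoding
            # once it has improved by less than eps over `window` generations.
            if len(best_so_far) < window:
                return False
            return best_so_far[-1] - best_so_far[-window] < eps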

  1. A novel super-resolution image fusion algorithm based on improved PCNN and wavelet transform

    NASA Astrophysics Data System (ADS)

    Liu, Na; Gao, Kun; Song, Yajun; Ni, Guoqiang

    2009-10-01

    Super-resolution reconstruction explores the new information contained across an under-sampled image series of the same scene and produces a high-resolution image through sub-pixel-level image fusion. Traditional super-resolution fusion methods for sub-sampled images require motion estimation and motion interpolation and construct a multi-resolution pyramid to obtain high resolution, yet they ignore the characteristics of human vision. In this paper, a novel super-resolution reconstruction for under-sampled images of a static scene is developed around a human vision model by introducing a PCNN (Pulse Coupled Neural Network), with a simplified and improved input model, internal behavior, and control parameter selection. The proposed PCNN-wavelet super-resolution image fusion algorithm targets down-sampled image series of a static scene. While preserving the original features, a Relief Filter (RF) is introduced into the control and judgment stage to effectively suppress random factors (such as noise) and to highlight the objects of interest through the fusion. Numerical simulations show that the new algorithm performs better at retaining detail and preserving high resolution.

  2. Improved Data Preprocessing Algorithm for Time-Domain Induced Polarization Method with Digital Notch Filter

    NASA Astrophysics Data System (ADS)

    Ge, Shuang-Chao; Deng, Ming; Chen, Kai; Li, Bin; Li, Yuan

    2016-12-01

    Time-domain induced polarization (TDIP) measurement is seriously affected by power line interference and other field noise. Moreover, existing TDIP instruments generally output only the apparent chargeability, without providing complete secondary field information. To increase the robustness of the TDIP method against interference and obtain more detailed secondary field information, an improved data-processing algorithm is proposed here. The method includes an efficient digital notch filter which can effectively eliminate all the main components of the power line interference. A hardware model of this filter was constructed, and VHSIC Hardware Description Language (VHDL) code for it was generated using Digital Signal Processor (DSP) Builder. In addition, a time-location method was proposed to extract secondary field information in case of unexpected data loss or failure of the synchronization technologies. Finally, the validity and accuracy of the method and the notch filter were verified using a Cole-Cole model implemented in SIMULINK. Indoor and field tests further confirmed the effectiveness of the algorithm in fieldwork.
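
    A software analogue of such a notch filter chain is easy to sketch. The fragment below cascades narrow IIR notches at the mains frequency and its harmonics; it is only an illustration of the filtering idea (the paper's filter is a hardware design generated as VHDL), and the quality factor and harmonic count are our assumptions.

        import numpy as np
        from scipy.signal import iirnotch, filtfilt

        def remove_powerline(x, fs, mains=50.0, n_harmonics=4, q=30.0):
            # Cascade notches at k * mains Hz for k = 1..n_harmonics.
            y = np.asarray(x, float)
            for k in range(1, n_harmonics + 1):
                f0 = k * mains
                if f0 >= fs / 2.0:
                    break
                b, a = iirnotch(f0, q, fs=fs)
                y = filtfilt(b, a, y)   # zero-phase filtering
            return y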

  3. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a new robust swarm intelligence method that is based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed into a synchronous evolutionary search over a discrete space, based on the idea of individual hybrid encoding. Subsequently, the concept of confidence interval (CI) is introduced; a new position update is designed and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, and the latter effectively prevents the ICS from becoming trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and optimize feasible ones. Experiments with a large number of KP instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions.
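
    The greedy transform is the most mechanical piece of the method and can be sketched directly. The fragment below (our naming, not the authors') repairs an infeasible 0-1 solution by dropping items with the worst value/weight ratio, then greedily re-adds any item that still fits.

        import numpy as np

        def greedy_repair(x, weights, values, capacity):
            # x: 0/1 numpy vector proposed by the search;
            # weights, values: per-item numpy arrays.
            ratio_order = np.argsort(values / weights)      # worst ratio first
            for i in ratio_order:
                if weights[x.astype(bool)].sum() <= capacity:
                    break
                x[i] = 0                                    # drop until feasible
            for i in ratio_order[::-1]:                     # best ratio first
                if not x[i] and weights[x.astype(bool)].sum() + weights[i] <= capacity:
                    x[i] = 1                                # re-add what still fits
            return x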

  4. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. Scheduling of nonpreemptive tasks in the cloud computing environment is irrevocable, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute on multiple VMs or on the same VM's multiple cores. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, making cloud computing more efficient and thus improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm, which considers the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparison with existing methods.
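
    The weighted round-robin baseline itself is compact. Below is a toy sketch (VM names and weights are illustrative) in which each VM appears in the dispatch cycle in proportion to its capacity; the paper's improvement layers task-length and dependency awareness on top of this.

        from itertools import cycle

        def weighted_round_robin(vms):
            # vms: list of (name, weight) pairs, weight ~ VM capacity.
            return cycle([vm for vm, w in vms for _ in range(w)])

        # Usage: three VMs with weights 3, 2, 1 -> vm1 receives half the tasks.
        dispatch = weighted_round_robin([("vm1", 3), ("vm2", 2), ("vm3", 1)])
        assignments = [next(dispatch) for _ in range(6)]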

  5. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. Scheduling of nonpreemptive tasks in the cloud computing environment is irrevocable, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, arriving jobs consist of multiple interdependent tasks, and their independent tasks may execute on multiple VMs or on the same VM's multiple cores. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, making cloud computing more efficient and thus improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm, which considers the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparison with existing methods. PMID:26955656

  6. An Improved Hybrid Encoding Cuckoo Search Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a new robust swarm intelligence method that is based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed into a synchronous evolutionary search over a discrete space, based on the idea of individual hybrid encoding. Subsequently, the concept of confidence interval (CI) is introduced; a new position update is designed and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, and the latter effectively prevents the ICS from becoming trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and optimize feasible ones. Experiments with a large number of KP instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions. PMID:24527026

  7. MO-FG-204-05: Evaluation of a Novel Algorithm for Improved 4DCT Resolution

    SciTech Connect

    Glide-Hurst, C; Briceno, J; Chetty, I. J.; Klahr, P

    2015-06-15

    Purpose: Accurate tumor motion characterization is critical for increasing the therapeutic ratio of radiation therapy. To accommodate the divergent fan-beam geometry of the scanner, the current 4D-CT algorithm utilizes a larger temporal window to ensure that pixel values are valid throughout the entire FOV. To minimize the impact on temporal resolution, a cos² weighting is employed. We propose a novel exponential weighting (“exponential”) 4DCT reconstruction algorithm that has a sharper slope and provides a more optimal temporal resolution. Methods: A respiratory motion platform translated a lung-mimicking Styrofoam slab with several high and low-contrast inserts 2 cm in the superior-inferior direction. Breathing rates (10–15 bpm) and couch pitch (0.06–0.1 A.U.) were varied to assess interplay between parameters. Multi-slice helical 4DCTs were acquired with 0.5 sec gantry rotation and data were reconstructed with cos² and exponential weighting. Mean and standard deviation were calculated via region of interest analysis. Intensity profiles evaluated object boundaries. Retrospective raw data reconstructions were performed for both 4DCT algorithms for 3 liver and lung cancer patients. Image quality (temporal blurring/sharpness) and subtraction images were compared between reconstructions. Results: In the phantom, profile analysis revealed that sharper boundaries were obtained with exponential reconstructions at transitioning breathing phases (i.e. mid-inhale or mid-exhale). Reductions in full-width half maximum were ∼1 mm in the superior-inferior direction and appreciable sharpness could be observed in difference maps. This reduction also yielded a slight reduction in target volume between reconstruction algorithms. For patient cases, coronal views showed less blurring at object boundaries and local intensity differences near the tumor and diaphragm with exponential weighted reconstruction. Conclusion: Exponential weighted 4DCT offers potential
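
    The contrast between the two weightings can be illustrated with simple window functions. These are schematic forms only; the actual gating windows and the shape parameter of the proposed algorithm are not given in the abstract.

        import numpy as np

        def cos2_window(n):
            # conventional cos^2 temporal weighting across the gating window
            t = np.linspace(-0.5, 0.5, n)
            return np.cos(np.pi * t) ** 2

        def exp_window(n, alpha=8.0):
            # sharper exponential weighting; alpha is an assumed shape parameter
            t = np.linspace(-0.5, 0.5, n)
            w = np.exp(-alpha * np.abs(t))
            return w / w.max()

    The steeper roll-off of an exponential window concentrates weight near the target phase, which is consistent with the sharper boundaries reported above.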

  8. An Improved Artificial Bee Colony Algorithm Based on Balance-Evolution Strategy for Unmanned Combat Aerial Vehicle Path Planning

    PubMed Central

    Li, Bai; Gong, Li-gang; Yang, Wen-lun

    2014-01-01

    Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555

  9. An improved artificial bee colony algorithm based on balance-evolution strategy for unmanned combat aerial vehicle path planning.

    PubMed

    Li, Bai; Gong, Li-gang; Yang, Wen-lun

    2014-01-01

    Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms.

  10. An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.

    PubMed

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2014-09-01

    In this paper, we propose an efficient method for segmenting cell nuclei in the skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides a superior performance in nuclei detection and segmentation.

  11. Genetic algorithm with an improved fitness function for (N)ARX modelling

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Worden, K.; Peng, P.; Leung, A. Y. T.

    2007-02-01

    In this article a new fitness function is introduced in an attempt to improve the quality of the auto-regressive with exogenous inputs (ARX) model using a genetic algorithm (GA). The GA is employed to identify the coefficients and the number of time lags of the models of dynamic systems with the new fitness function which is based on the prediction error and the correlation functions between the prediction error and the input and output signals. The new fitness function provides the GA with a better performance in the evolution process. Two examples of the ARX modelling of a linear and a non-linear (NARX) simulated dynamic system are examined using the proposed fitness function.
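
    In the spirit of the paper, such a fitness can be written as a function of the prediction-error power and the residual correlations with the input and output. The weighting below is our illustrative choice, not the authors' exact formula.

        import numpy as np

        def arx_fitness(residual, u, y, n_lags=20):
            # residual: one-step prediction errors; u, y: input/output records.
            def xcorr(a, b):
                a = (a - a.mean()) / (a.std() + 1e-12)
                b = (b - b.mean()) / (b.std() + 1e-12)
                return np.array([np.mean(a[k:] * b[:len(b) - k])
                                 for k in range(1, n_lags + 1)])
            mse = np.mean(residual ** 2)
            corr_pen = np.sum(xcorr(residual, u) ** 2) + np.sum(xcorr(residual, y) ** 2)
            return 1.0 / (mse + corr_pen + 1e-12)   # the GA maximises this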

  12. [An improved N-FINDR endmember extraction algorithm based on manifold learning and spatial information].

    PubMed

    Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo

    2013-09-01

    An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is adopted to seek the potential intrinsic low-dimensional structure of hyperspectral high-dimensional data and to project the original data into a low-dimensional space. Secondly, spatial preprocessing is applied by enhancing each pixel vector in spatially homogeneous areas, according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by addressing the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperforms the geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and spatial preprocessing N-FINDR (SPPNFINDR) methods.

  13. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, based on the characteristics of color crop images, we first transform the image color space from RGB to HSI, and then select proper initial clustering centers and the cluster number by applying a mean-variance approach and rough set theory, followed by the clustering calculation, so as to automatically and rapidly segment the color components and accurately extract target objects from the background. This provides a reliable basis for the identification, analysis, follow-up calculation, and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm reduces the computation amount and enhances the precision and accuracy of clustering.
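
    The clustering core is ordinary k-means on per-pixel feature vectors (here, HSI components). The sketch below uses random seeding for brevity; the paper's contribution is precisely to replace this seeding with mean-variance and rough-set-based selection of the initial centers and cluster number.

        import numpy as np

        def kmeans_segment(features, k, iters=20, seed=0):
            # features: (n_pixels, n_channels) array, e.g. HSI values per pixel.
            features = np.asarray(features, float)
            rng = np.random.default_rng(seed)
            centers = features[rng.choice(len(features), k, replace=False)]
            for _ in range(iters):
                # assign each pixel to its nearest center
                d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                # move each center to the mean of its assigned pixels
                for j in range(k):
                    pts = features[labels == j]
                    if len(pts):
                        centers[j] = pts.mean(axis=0)
            return labels, centers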

  14. Exponential H ∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm

    PubMed Central

    Hsiao, Feng-Hsiag

    2015-01-01

    This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H ∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to have better performance than that of a traditional GA, a model-based fuzzy controller is then synthesized to stabilize the MTDC systems. A fuzzy controller is synthesized to not only realize the exponential synchronization, but also achieve optimal H ∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is stated by using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach. PMID:26366432

  15. Develop algorithms to improve detectability of defects in Sonic IR imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2016-02-01

    Sonic Infrared (IR) technology is relatively new in the NDE family. It is a fast, wide-area imaging method that combines ultrasound excitation with infrared imaging: the former applies ultrasound energy to induce frictional heating at defects, while the latter captures the IR emission from the target. This technology can detect both surface and subsurface defects, such as cracks and disbonds/delaminations, in various materials, metals/metal alloys or composites. However, certain defects may produce only a very small IR signature that is buried in noise or heating patterns. In such cases, effectively extracting the defect signals becomes critical to identifying the defects. In this paper, we present algorithms developed to improve the detectability of defects in Sonic IR.

  16. Gene selection approach based on improved swarm intelligent optimisation algorithm for tumour classification.

    PubMed

    Jin, Cong; Jin, Shu-Wei

    2016-06-01

    A number of different gene selection approaches based on gene expression profiles (GEP) have been developed for tumour classification. A gene selection approach selects the most informative genes from the whole gene space, which is an important process for tumour classification using GEP. This study presents an improved swarm intelligent optimisation algorithm to select genes for maintaining the diversity of the population. The most essential characteristic of the proposed approach is that it can automatically determine the number of the selected genes. On the basis of the gene selection, the authors construct a variety of the tumour classifiers, including the ensemble classifiers. Four gene datasets are used to evaluate the performance of the proposed approach. The experimental results confirm that the proposed classifiers for tumour classification are indeed effective.

  17. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms.

    PubMed

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I-Chih; Rasmussen, John C; Sevick-Muraca, Eva M

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  18. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  19. Improvements to the OMI Near-uv Aerosol Algorithm Using A-train CALIOP and AIRS Observations

    NASA Technical Reports Server (NTRS)

    Torres, O.; Ahn, C.; Zhong, C.

    2014-01-01

    The height of desert dust and carbonaceous aerosols layers and, to a lesser extent, the difficulty in assessing the predominant size mode of these absorbing aerosol types, are sources of uncertainty in the retrieval of aerosol properties from near UV satellite observations. The availability of independent, near-simultaneous measurements of aerosol layer height, and aerosol-type related parameters derived from observations by other A-train sensors, makes possible the direct use of these parameters as input to the OMI (Ozone Monitoring Instrument) near UV retrieval algorithm. A monthly climatology of aerosol layer height derived from observations by the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) sensor, and real-time AIRS (Atmospheric Infrared Sounder) CO observations are used in an upgraded version of the OMI near UV aerosol algorithm. AIRS CO measurements are used as a reliable tracer of carbonaceous aerosols, which allows the identification of smoke layers in areas and times of the year where the dust-smoke differentiation is difficult in the near-UV. The use of CO measurements also enables the identification of elevated levels of boundary layer pollution undetectable by near UV observations alone. In this paper we discuss the combined use of OMI, CALIOP and AIRS observations for the characterization of aerosol properties, and show a significant improvement in OMI aerosol retrieval capabilities.

  20. Neonatal diagnosis of severe combined immunodeficiency leads to significantly improved survival outcome: the case for newborn screening.

    PubMed

    Brown, Lucinda; Xu-Bayford, Jinhua; Allwood, Zoe; Slatter, Mary; Cant, Andrew; Davies, E Graham; Veys, Paul; Gennery, Andrew R; Gaspar, H Bobby

    2011-03-17

    Severe combined immunodeficiency (SCID) carries a poor prognosis without definitive treatment by hematopoietic stem cell transplantation. The outcome for transplantation varies and is dependent on donor status and the condition of the child at the time of transplantation. Diagnosis at birth may allow for better protection of SCID babies from infection and improve transplantation outcome. In this comparative study conducted at the 2 designated SCID transplantation centers in the United Kingdom, we show that SCID babies diagnosed at birth because of a positive family history have a significantly improved outcome compared with the first presenting family member. The overall improved survival of more than 90% is related to a reduced rate of infection and significantly improved transplantation outcome irrespective of donor choice, conditioning regimen used, and underlying genetic diagnosis. Neonatal screening for SCID would significantly improve the outcome in this otherwise potentially devastating condition.

  1. Improvements of COMS Land Surface Temperature Retrieval Algorithm by considering diurnal variations of boundary layer temperature

    NASA Astrophysics Data System (ADS)

    Choi, Y. Y.; Suh, M. S.

    2015-12-01

    The National Meteorological Satellite Center of the Republic of Korea operationally retrieves land surface temperature (LST) by applying the split-window LST algorithm (CSW_v1.0) to Communication, Ocean, and Meteorological Satellite (COMS) data. In order to improve COMS LST accuracy, Cho et al. (2015) developed six types of LST retrieval equations (CSW_v2.0) that consider the temperature lapse rate and water vapor/aerosol effects. Similar to CSW_v1.0, the LST retrieved by CSW_v2.0 had a correlation coefficient of 0.99 with the prescribed LST, and the root mean square error (RMSE) improved from 1.41 K to 1.39 K. However, CSW_v2.0 showed relatively poor performance when the temperature lapse rate was particularly large (superadiabatic cases during daytime or strong inversion cases during early morning). In this study, we upgraded CSW_v2.0 by considering diurnal variations of boundary layer temperature to reduce the relatively large errors under large lapse rate conditions. To achieve this, the diurnal variations of air temperature along with the land surface temperature are included in the radiative transfer simulations used to generate the pseudo-match-up database. Preliminary analysis showed that the RMSE and bias are reduced from 1.39 K to 1.14 K and from -0.03 K to -0.01 K, respectively. In this presentation, we will show detailed results of LST retrieval using the new algorithms according to viewing geometry, temperature lapse rate conditions, and water vapour amount, along with intercomparison results against MODIS LST data.

  2. High-performance lossless and progressive image compression based on an improved integer lifting scheme and the Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression ratio improves by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%; the decoder is about 12.3 times faster than SPIHT's, raising its time efficiency by about 148%. Unlike approaches that require the maximum number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission coding and decoding.
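
    The baseline Rice coder being improved here is itself very small. Below is a minimal Rice(k) encoder for non-negative residuals, shown for illustration; real codecs additionally map signed wavelet residuals to non-negative integers and adapt k per block.

        def rice_encode(values, k):
            # Unary-coded quotient followed by a k-bit binary remainder.
            bits = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                bits.append("1" * q + "0" + format(r, f"0{k}b"))
            return "".join(bits)

        # Usage: rice_encode([3, 0, 9], k=2) -> "011" + "000" + "11001"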

  3. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    SciTech Connect

    Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-09-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as core damage probability. Applied to complex systems such as nuclear power plants, this approach requires a series of computationally expensive simulation runs over a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, which precludes fully analyzing the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated, and tools to extract knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how adaptive sampling can be performed using the RISMC toolkit and highlight its advantages over more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform these statistical analyses using both analytical cases and another RISMC code: RELAP-7.

  4. AERONET Version 3 Release: Providing Significant Improvements for Multi-Decadal Global Aerosol Database and Near Real-Time Validation

    NASA Technical Reports Server (NTRS)

    Holben, Brent; Slutsker, Ilya; Giles, David; Eck, Thomas; Smirnov, Alexander; Sinyuk, Aliaksandr; Schafer, Joel; Sorokin, Mikhail; Rodriguez, Jon; Kraft, Jason; Scully, Amy

    2016-01-01

    Aerosols are highly variable in space, time and properties. Global assessment from satellite platforms and model predictions rely on validation from AERONET, a highly accurate ground-based network. Ver. 3 represents a significant improvement in accuracy and quality.

  5. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-06-01

    In order to improve the tracking accuracy, model estimation accuracy, and responsiveness of multiple-model maneuvering target tracking, the interacting multiple models fifth-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) framework processes all the models through a Markov chain to enhance the model tracking accuracy. A fifth-degree cubature Kalman filter (5CKF) then evaluates the surface integral with a higher-order but deterministic odd-degree spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and that it performs better than the interacting multiple models cubature Kalman filter (IMMCKF), the interacting multiple models unscented Kalman filter (IMMUKF), the 5CKF, and the optimal mode transition matrix IMM (OMTM-IMM).
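
    At the model-probability level, the IMM machinery that blends the per-model filters reduces to one Markov prediction and one Bayes update per step. A generic sketch (standard IMM bookkeeping, not the paper's code):

        import numpy as np

        def imm_model_probs(mu, trans, likelihoods):
            # mu: current model probabilities; trans[i, j] = P(model j | model i);
            # likelihoods: measurement likelihood reported by each model's filter
            # (the 5CKFs in the paper).
            c = trans.T @ mu             # Markov-predicted (mixing) probabilities
            mu_new = likelihoods * c     # Bayes update with the filter likelihoods
            return mu_new / mu_new.sum()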

  6. A significant improvement of luminance vs current density efficiency of a BioLED

    NASA Astrophysics Data System (ADS)

    Grykien, Remigiusz; Luszczynska, Beata; Glowacki, Ireneusz; Ulanski, Jacek; Kajzar, Francois; Zgarian, Roxana; Rau, Ileana

    2014-04-01

    We report on the fabrication and characterization of an organic light emitting diode incorporating pure DNA as an electron blocking layer (EBL). The emission layer is a thin film of the phosphorescent Ir(ppy)3 luminophore embedded in a poly(N-vinylcarbazole) (PVK)/2-(4-tert-butylphenyl)-5-(4-biphenylyl)-1,3,4-oxadiazole (PBD) matrix. The BioLED shows good stability, and its luminance vs current density efficiency is improved by ca. 40% in comparison with the device without the EBL.

  7. The Use (and Misuse) of Statistical Significance Testing: Some Recommendations for Improved Editorial Policy and Practice.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    This paper evaluates the logic underlying various criticisms of statistical significance testing and makes specific recommendations for scientific and editorial practice that might better increase the knowledge base. Reliance on the traditional hypothesis testing model has led to a major bias against nonsignificant results and to misinterpretation…

  8. Significant Advancements in Technology to Improve Instruction for All Students: Including Those with Disabilities

    ERIC Educational Resources Information Center

    Meyen, Edward

    2015-01-01

    Sharing thoughts on what represents significant advancements involving the education of persons for whom typical instruction is not effective seems simple enough. You think about the work you are engaged in and reflect on how you came to do what you are doing. If you have a record of being persistent in your work, then that becomes the context for…

  9. Targeted agri-environment schemes significantly improve the population size of common farmland bumblebee species.

    PubMed

    Wood, Thomas J; Holland, John M; Hughes, William O H; Goulson, Dave

    2015-04-01

    Changes in agricultural practice across Europe and North America have been associated with range contractions and local extinction of bumblebees (Bombus spp.). A number of agri-environment schemes have been implemented to halt and reverse these declines, predominantly revolving around the provision of additional forage plants. Although it has been demonstrated that these schemes can attract substantial numbers of foraging bumblebees, it remains unclear to what extent they actually increase bumblebee populations. We used standardized transect walks and molecular techniques to compare the size of bumblebee populations between Higher Level Stewardship (HLS) farms implementing pollinator-friendly schemes and Entry Level Stewardship (ELS) control farms. Bumblebee abundance on the transect walks was significantly higher on HLS farms than ELS farms. Molecular analysis suggested maximum foraging ranges of 566 m for Bombus hortorum, 714 m for B. lapidarius, 363 m for B. pascuorum and 799 m for B. terrestris. Substantial differences in maximum foraging range were found within bumblebee species between farm types. Accounting for foraging range differences, B. hortorum (47 vs 13 nests/km²) and B. lapidarius (45 vs 22 nests/km²) were found to nest at significantly greater densities on HLS farms than ELS farms. There were no significant differences between farm type for B. terrestris (88 vs 38 nests/km²) and B. pascuorum (32 vs 39 nests/km²). Across all bumblebee species, HLS management had a significantly positive effect on bumblebee nest density. These results show that targeted agri-environment schemes that increase the availability of suitable forage can significantly increase the size of wild bumblebee populations.

  10. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  11. Nitrite addition to acidified sludge significantly improves digestibility, toxic metal removal, dewaterability and pathogen reduction

    NASA Astrophysics Data System (ADS)

    Du, Fangzhou; Keller, Jürg; Yuan, Zhiguo; Batstone, Damien J.; Freguia, Stefano; Pikaar, Ilje

    2016-12-01

    Sludge management is a major issue for water utilities globally. Poor digestibility and dewaterability are the main factors determining the cost for sludge management, whereas pathogen and toxic metal concentrations limit beneficial reuse. In this study, the effects of low level nitrite addition to acidified sludge to simultaneously enhance digestibility, toxic metal removal, dewaterability and pathogen reduction were investigated. Waste activated sludge (WAS) from a full-scale waste water treatment plant was treated at pH 2 with 10 mg NO2−-N/L for 5 h. Biochemical methane potential tests showed an increase in the methane production of 28%, corresponding to an improvement from 247 ± 8 L CH4/kg VS to 317 ± 1 L CH4/kg VS. The enhanced removal of toxic metals further increased the methane production by another 18% to 360 ± 6 L CH4/kg VS (a total increase of 46%). The solids content of dewatered sludge increased from 14.6 ± 1.4% in the control to 18.2 ± 0.8%. A 4-log reduction for both total coliforms and E. coli was achieved. Overall, this study highlights the potential of acidification with low level nitrite addition as an effective and simple method achieving multiple improvements in terms of sludge management.

  12. Nitrite addition to acidified sludge significantly improves digestibility, toxic metal removal, dewaterability and pathogen reduction

    PubMed Central

    Du, Fangzhou; Keller, Jürg; Yuan, Zhiguo; Batstone, Damien J.; Freguia, Stefano; Pikaar, Ilje

    2016-01-01

    Sludge management is a major issue for water utilities globally. Poor digestibility and dewaterability are the main factors determining the cost for sludge management, whereas pathogen and toxic metal concentrations limit beneficial reuse. In this study, the effects of low level nitrite addition to acidified sludge to simultaneously enhance digestibility, toxic metal removal, dewaterability and pathogen reduction were investigated. Waste activated sludge (WAS) from a full-scale waste water treatment plant was treated at pH 2 with 10 mg NO2−-N/L for 5 h. Biochemical methane potential tests showed an increase in the methane production of 28%, corresponding to an improvement from 247 ± 8 L CH4/kg VS to 317 ± 1 L CH4/kg VS. The enhanced removal of toxic metals further increased the methane production by another 18% to 360 ± 6 L CH4/kg VS (a total increase of 46%). The solids content of dewatered sludge increased from 14.6 ± 1.4% in the control to 18.2 ± 0.8%. A 4-log reduction for both total coliforms and E. coli was achieved. Overall, this study highlights the potential of acidification with low level nitrite addition as an effective and simple method achieving multiple improvements in terms of sludge management. PMID:28004811

  13. Significant improvement of mouse cloning technique by treatment with trichostatin A after somatic nuclear transfer

    SciTech Connect

    Kishigami, Satoshi . E-mail: kishigami@cdb.riken.jp; Mizutani, Eiji; Ohta, Hiroshi; Hikichi, Takafusa; Thuan, Nguyen Van; Wakayama, Sayaka; Bui, Hong-Thuy; Wakayama, Teruhiko

    2006-02-03

    The low success rate of animal cloning by somatic cell nuclear transfer (SCNT) is believed to be associated with epigenetic errors including abnormal DNA hypermethylation. Recently, we elucidated by using round spermatids that, after nuclear transfer, treatment of zygotes with trichostatin A (TSA), an inhibitor of histone deacetylase, can remarkably reduce abnormal DNA hypermethylation depending on the origins of the transferred nuclei and their genomic regions [S. Kishigami, N. Van Thuan, T. Hikichi, H. Ohta, S. Wakayama, E. Mizutani, T. Wakayama, Epigenetic abnormalities of the mouse paternal zygotic genome associated with microinsemination of round spermatids, Dev. Biol. (2005) in press]. Here, we found that 5-50 nM TSA treatment for 10 h following oocyte activation resulted in 2- to 5-fold more efficient in vitro development of somatic cloned embryos to the blastocyst stage, depending on the donor cells, which included tail tip cells, spleen cells, neural stem cells, and cumulus cells. This TSA treatment also led to a more than 5-fold increase in the success rate of mouse cloning from cumulus cells without obvious abnormality, but failed to improve ES cell cloning success. Further, we succeeded in establishing nuclear transfer-embryonic stem (NT-ES) cells from TSA-treated cloned blastocysts at a rate three times higher than from untreated cloned blastocysts. Thus, our data indicate that TSA treatment after SCNT in mice can dramatically improve the practical application of current cloning techniques.

  14. The community-based Health Extension Program significantly improved contraceptive utilization in West Gojjam Zone, Ethiopia

    PubMed Central

    Yitayal, Mezgebu; Berhane, Yemane; Worku, Alemayehu; Kebede, Yigzaw

    2014-01-01

    Background Ethiopia has implemented a nationwide primary health program at grassroots level (known as the Health Extension Program) since 2003 to increase public access to basic health services. This study was conducted to assess whether households that fully implemented the Health Extension Program have improved current contraceptive use. Methods A cross-sectional community-based survey was conducted to collect data from 1,320 mothers using a structured questionnaire. A multivariate logistic regression was used to identify the predictors of current contraceptive utilization. A propensity score analysis was used to determine the contribution of the Health Extension Program “model households” on current contraceptive utilization. Result Mothers from households which fully benefited from the Health Extension Program (“model households”) were 3.97 (adjusted odds ratio, 3.97; 95% confidence interval, 3.01–5.23) times more likely to use contraceptives compared with mothers from non-model households. Model household status contributed to 29.3% (t=7.08) of the increase in current contraceptive utilization. Conclusion The Health Extension Program when implemented fully could help to increase the utilization of contraceptives in the rural community and improve family planning. PMID:24868165

  15. Melatonin significantly improves the developmental competence of bovine somatic cell nuclear transfer embryos.

    PubMed

    Su, Jianmin; Wang, Yongsheng; Xing, Xupeng; Zhang, Lei; Sun, Hongzheng; Zhang, Yong

    2015-11-01

    Somatic cell nuclear transfer (SCNT) is a promising technology, but its application is hampered by its low efficiency. Hence, the majority of SCNT embryos fail to develop to term. In this study, the antioxidant melatonin reduced apoptosis and reactive oxygen species (ROS) in bovine SCNT embryos. It also increased cell number, inner cell mass (ICM) cell numbers, and the ratio of ICM to total cells while improving the development of bovine SCNT embryos in vitro and in vivo. Gene expression analysis showed that melatonin suppressed the expression of the pro-apoptotic genes p53 and Bax and stimulated the expression of the antioxidant genes SOD1 and Gpx4, the anti-apoptotic gene BCL2L1, and the pluripotency-related gene SOX2 in SCNT blastocysts. We also analyzed the epigenetic modifications in bovine in vitro fertilization, melatonin-treated, and untreated SCNT embryos. The global H3K9ac levels of melatonin-treated SCNT embryos at the four-cell stage were higher than those of the untreated SCNT embryos. We conclude that exogenous melatonin affects the expression of genes related to apoptosis, antioxidant function, and development. Moreover, melatonin reduced apoptosis and ROS in bovine SCNT embryos and enhanced blastocyst quality, thereby ultimately improving bovine cloning efficiency.

  16. Demonstration of on Sky Contrast Improvement Using the Modified Gerchberg-Saxton Algorithm at the Palomar Observatory

    NASA Technical Reports Server (NTRS)

    Burruss, Rick S.; Serabyn, Eugene; Mawet, Dimitri P.; Roberts, Jennifer E.; Hickey, Jeffrey P.; Rykoski, Kevin; Bikkannavar, Siddarayappa; Crepp, Justin R.

    2010-01-01

    We have successfully demonstrated significant improvements in the high contrast detection limit of the Well-Corrected Subaperture (WCS) using the Autonomous Phase Retrieval Calibration (APRC) software package developed at the Jet Propulsion Laboratory (JPL) for the Palomar adaptive optics instrument (PALAO). APRC utilizes the Modified Gerchberg-Saxton (MGS) wavefront sensing algorithm, also developed at JPL. The WCS delivers such excellent correction of the atmosphere that non-common path (NCP) wavefront errors, not sensed by PALAO but present at the coronagraphic image plane, begin to factor heavily as a limit to contrast. We have implemented the APRC program to reduce these NCP wavefront errors from 110 nm to 35 nm (rms) in the lab, and we have extended these exceptional results to targets on the sky for the first time, leading to a significant suppression of speckle noise. Consequently, we now report a contrast level of very nearly 1×10^-4 at separations of 2 lambda/D before the data are post-processed. We describe here the major components of our instrument, the work done to reduce the NCP wavefront errors, and the ensuing excellent on-sky results, including the detection of the three exoplanets orbiting the star HR8799.

  17. Improved Methodology for Surface and Atmospheric Soundings, Error Estimates, and Quality Control Procedures: the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2014-01-01

    The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.

  18. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach couples system simulator codes with stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model), and initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable given the limited computational resources currently available. In addition, the recent tendency to develop newer tools characterized by higher accuracy and larger computational demands (compared with the legacy codes developed decades ago) has made this issue even more compelling. In order to overcome these limitations, the strategy for exploring the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  19. An Improved Algorithm for Linear Inequalities in Pattern Recognition and Switching Theory.

    ERIC Educational Resources Information Center

    Geary, Leo C.

    This thesis presents a new iterative algorithm for finding an n by 1 solution vector w, if one exists, to a set of linear inequalities Aw > 0, which arises in pattern recognition and switching theory. The algorithm is an extension of the Ho-Kashyap algorithm, utilizing the gradient descent procedure to minimize a criterion function…
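    For context, here is a minimal sketch of the classic Ho-Kashyap iteration that the thesis extends, assuming the standard formulation (alternately minimizing ||Aw - b||^2 over w and a positive margin vector b); the learning rate and stopping rule are illustrative choices, not Geary's extension:

      import numpy as np

      def ho_kashyap(A, lr=0.5, tol=1e-6, max_iter=1000):
          """Seek w with A @ w > 0 by minimizing ||A @ w - b||^2 over w and b > 0."""
          n, d = A.shape
          b = np.ones(n)                    # positive margins, only ever increased
          A_pinv = np.linalg.pinv(A)
          w = A_pinv @ b
          for _ in range(max_iter):
              e = A @ w - b                 # error vector
              if np.all(e > -tol):          # all inequalities (nearly) satisfied
                  break
              b = b + lr * (e + np.abs(e))  # increase b only where e > 0
              w = A_pinv @ b                # least-squares update of w
          return w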

  20. Hypoxic-Preconditioned Bone Marrow Stem Cell Medium Significantly Improves Outcome After Retinal Ischemia in Rats

    PubMed Central

    Roth, Steven; Dreixler, John C.; Mathew, Biji; Balyasnikova, Irina; Mann, Jacob R.; Boddapati, Venkat; Xue, Lai; Lesniak, Maciej S.

    2016-01-01

    Purpose We have previously demonstrated the protective effect of bone marrow stem cell (BMSC)-conditioned medium in retinal ischemic injury. We hypothesized here that hypoxic preconditioning of stem cells significantly enhances the neuroprotective effect of the conditioned medium and thereby augments the protective effect in ischemic retina. Methods Rats were subjected to retinal ischemia by increasing intraocular pressure to 130 to 135 mm Hg for 55 minutes. Hypoxic-preconditioned, hypoxic unconditioned, or normoxic medium was injected into the vitreous 24 hours after ischemia ended. Recovery was assessed 7 days after injections by comparing electroretinography measurements, histologic examination, and apoptosis (TUNEL, terminal deoxynucleotidyl transferase–mediated dUTP nick end labeling assay). To compare proteins secreted into the medium in the groups and the effect of hypoxic exposure, we used rat cytokine arrays. Results Eyes injected with hypoxic BMSC–conditioned medium 24 hours after ischemia demonstrated significantly enhanced return of retinal function, decreased retinal ganglion cell layer loss, and attenuated apoptosis compared to those administered normoxic or hypoxic unconditioned medium. Hypoxic-preconditioned medium had 21 significantly increased protein levels compared to normoxic medium. Conclusions The medium from hypoxic-preconditioned BMSCs robustly restored retinal function and prevented cell loss after ischemia when injected 24 hours after ischemia. The protective effect was even more pronounced than in our previous studies of normoxic conditioned medium. Prosurvival signals triggered by the secretome may play a role in this neuroprotective effect. PMID:27367588

  1. Improved Algorithm of SCS-CN Model Parameters in Typical Inland River Basin in Central Asia

    NASA Astrophysics Data System (ADS)

    Wang, Jin J.; Ding, Jian L.; Zhang, Zhe; Chen, Wen Q.

    2017-02-01

    The rainfall-runoff relationship is the most important factor for hydrological structures and for social and economic development against the background of global warming, especially in arid regions. The aim of this paper is to find a suitable method to simulate runoff in arid areas. The Soil Conservation Service Curve Number (SCS-CN) model is the most popular and widely applied model for direct runoff estimation. In this paper, we focus on the Wen-quan Basin in the source region of the Boertala River, a typical inland river basin in Central Asia. For the first time, 16 m resolution imagery from the high-definition Earth observation satellite “Gaofen-1” was used to provide highly accurate land use classification data for determining the curve number. A surface temperature/vegetation index (TS/VI) 2D scatter plot, combined with the soil moisture absorption balance principle, was used to calculate the moisture-holding capacity of the soil. Runoff was then simulated with both the original SCS-CN model and the version with the improved parameter algorithm. The simulation results show that the improved model outperforms the original one: in the calibration and validation periods, the Nash-Sutcliffe efficiencies were 0.79 and 0.71 for the improved model versus 0.66 and 0.38 for the original, and the relative errors were 3% and 12% versus 17% and 27%. This shows that using remote sensing information technology to improve the basic geographic data for the hydrological model has the following advantages: 1) remote sensing data have planar characteristics and are comprehensive and representative; 2) it helps get around the bottleneck of scarce data, providing a reference for simulating runoff in basins with similar conditions and in data-lacking regions.
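    As a point of reference, the standard SCS-CN direct-runoff relation that the paper's parameter algorithm modifies can be sketched as follows (metric units; the conventional initial-abstraction ratio λ = 0.2 is assumed, and the paper's improved parameterization is not reproduced here):

      def scs_cn_runoff(P, CN, lam=0.2):
          """Direct runoff Q (mm) from storm rainfall P (mm) with curve number CN."""
          S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
          Ia = lam * S               # initial abstraction before runoff begins
          if P <= Ia:
              return 0.0             # all rainfall absorbed, no direct runoff
          return (P - Ia) ** 2 / (P - Ia + S)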

  2. Possible breakthrough: Significant improvement of signal to noise ratio by stochastic resonance

    NASA Astrophysics Data System (ADS)

    Kiss, L. B.

    1996-06-01

    The simplest stochastic resonator is used, a level crossing detector (LCD), to investigate key properties of stochastic resonance (SR). It is pointed out that successful signal processing and biological applications of SR require working in the large signal limit (nonlinear transfer limit), which requires a completely new approach: a wide band input signal and a new, generalised definition of output noise. The new approach is illustrated by a new arrangement. The arrangement employs a special LCD, white input noise and a special, large, subthreshold wide band signal. For the first time in the history of SR (for a wide band input noise), the signal to noise ratio becomes much higher at the output of a stochastic resonator than at its input. In that way, SR is proven to have the potential to improve signal transfer. Note that the new arrangement seems to resemble neurone models; therefore, it also has potential for biological applications.

  3. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, in both laboratory and downhole environments. However, current methods for data acquisition and inversion should be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence to collect transversal relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies across windows, depending on prior knowledge or customer requirements. We use an entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard the small singular values that cause inversion instability. A hybrid algorithm combining iterative TSVD with a simultaneous iterative reconstruction technique is implemented to reach global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and other related fields in the future.
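    A minimal sketch of the truncated-SVD step described above, assuming the standard multi-exponential T2 kernel; the entropy-based choice of the truncation level and the paper's hybrid iterative scheme are not reproduced:

      import numpy as np

      def t2_tsvd_inversion(t, echoes, T2_grid, k):
          """Invert echoes ~ K @ f for the T2 amplitude spectrum f via truncated SVD."""
          K = np.exp(-t[:, None] / T2_grid[None, :])   # multi-exponential kernel
          U, s, Vt = np.linalg.svd(K, full_matrices=False)
          # Discard small singular values that amplify noise and destabilize inversion
          f = Vt[:k].T @ ((U[:, :k].T @ echoes) / s[:k])
          return np.clip(f, 0.0, None)                 # physical amplitudes are >= 0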

  4. Delineating complex spatiotemporal distribution of earthquake aftershocks: an improved Source-Scanning Algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yen-Che; Kao, Honn; Rosenberger, Andreas; Hsu, Shu-Kun; Huang, Bor-Shouh

    2012-06-01

    Conventional earthquake location methods depend critically on the correct identification of seismic phases and their arrival times from seismograms. Accurate phase picking is particularly difficult for aftershocks that occur closely in time and space, mostly because of the ambiguity of correlating the same phase at different stations. In this study, we introduce an improved Source-Scanning Algorithm (ISSA) for the purpose of delineating the complex distribution of aftershocks without time-consuming and labour-intensive phase-picking procedures. The improvements include the application of a ground motion analyser to separate P and S waves, the automatic adjustment of time windows for 'brightness' calculation based on the scanning resolution and a modified brightness function to combine constraints from multiple phases. Synthetic experiments simulating a challenging scenario are conducted to demonstrate the robustness of the ISSA. The method is applied to a field data set selected from the ocean-bottom-seismograph records of an offshore aftershock sequence southwest of Taiwan. Although visual inspection of the seismograms is ambiguous, our ISSA analysis clearly delineates two events that can best explain the observed waveform pattern.

  5. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    PubMed

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  6. Feasibility of an automatic computer-assisted algorithm for the detection of significant coronary artery disease in patients presenting with acute chest pain.

    PubMed

    Kang, Ki-woon; Chang, Hyuk-jae; Shim, Hackjoon; Kim, Young-jin; Choi, Byoung-wook; Yang, Woo-in; Shim, Jee-young; Ha, Jongwon; Chung, Namsik

    2012-04-01

    Automatic computer-assisted detection (auto-CAD) of significant coronary artery disease (CAD) in coronary computed tomography angiography (cCTA) has been shown to have relatively high accuracy. However, to date, scarce data are available regarding the performance of auto-CAD in the setting of acute chest pain. This study sought to demonstrate the feasibility of an auto-CAD algorithm for cCTA in patients presenting with acute chest pain. We retrospectively investigated 398 consecutive patients (229 male, mean age 50±21 years) who had acute chest pain and underwent cCTA between Apr 2007 and Jan 2011 in the emergency department (ED). All cCTA data were analyzed using an auto-CAD algorithm for the detection of >50% CAD on cCTA. The accuracy of auto-CAD was compared with the formal radiology report. In 380 of 398 patients (18 were excluded due to failure of data processing), per-patient analysis of auto-CAD revealed the following: sensitivity 94%, specificity 63%, positive predictive value (PPV) 76%, and negative predictive value (NPV) 89%. After the exclusion of 37 cases that were interpreted as invalid by the auto-CAD algorithm, the NPV was further increased up to 97%, considering the false-negative cases in the formal radiology report, and was confirmed by subsequent invasive angiogram during the index visit. We successfully demonstrated the high accuracy of an auto-CAD algorithm, compared with the formal radiology report, for the detection of >50% CAD on cCTA in the setting of acute chest pain. The auto-CAD algorithm can be used to facilitate the decision-making process in the ED.
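    For readers unfamiliar with the reported measures, the per-patient statistics quoted above follow from a standard 2×2 confusion matrix; a small helper illustrates the definitions (the counts passed in would be the study's, which the abstract does not tabulate):

      def diagnostic_metrics(tp, fp, fn, tn):
          """Standard per-patient accuracy measures from a 2x2 confusion matrix."""
          return {
              "sensitivity": tp / (tp + fn),  # true positives among diseased
              "specificity": tn / (tn + fp),  # true negatives among non-diseased
              "ppv": tp / (tp + fp),          # precision of a positive call
              "npv": tn / (tn + fn),          # reliability of a negative call
          }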

  7. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio ( SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge frequency and phase offset estimation ranges and enhance accuracy of the system greatly, and the bit error rate ( BER) performance of the system is improved effectively compared with that of the system employing traditional NB-LDPC code-aided carrier synchronization algorithm.

  8. The Doylestown Algorithm: A Test to Improve the Performance of AFP in the Detection of Hepatocellular Carcinoma.

    PubMed

    Wang, Mengjun; Devarajan, Karthik; Singal, Amit G; Marrero, Jorge A; Dai, Jianliang; Feng, Ziding; Rinaudo, Jo Ann S; Srivastava, Sudhir; Evans, Alison; Hann, Hie-Won; Lai, Yinzhi; Yang, Hushan; Block, Timothy M; Mehta, Anand

    2016-02-01

    Biomarkers for the early diagnosis of hepatocellular carcinoma (HCC) are needed to decrease mortality from this cancer. However, as new biomarkers have been slow to be brought to clinical practice, we have developed a diagnostic algorithm that utilizes commonly used clinical measurements in those at risk of developing HCC. Briefly, as α-fetoprotein (AFP) is routinely used, an algorithm that incorporated AFP values along with four other clinical factors was developed. Discovery analysis was performed on electronic data from patients who had liver disease (cirrhosis) alone or HCC in the background of cirrhosis. The discovery set consisted of 360 patients from two independent locations. A logistic regression algorithm was developed that incorporated log-transformed AFP values with age, gender, alkaline phosphatase, and alanine aminotransferase levels. We define this as the Doylestown algorithm. In the discovery set, the Doylestown algorithm improved the overall performance of AFP by 10%. In subsequent external validation in over 2,700 patients from three independent sites, the Doylestown algorithm improved detection of HCC as compared with AFP alone by 4% to 20%. In addition, at a fixed specificity of 95%, the Doylestown algorithm improved the detection of HCC as compared with AFP alone by 2% to 20%. In conclusion, the Doylestown algorithm consolidates clinical laboratory values, with age and gender, which are each individually associated with HCC risk, into a single value that can be used for HCC risk assessment. As such, it should be applicable and useful to the medical community that manages those at risk for developing HCC.
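    A sketch of the kind of logistic model described, combining log-transformed AFP with age, gender, alkaline phosphatase (ALP), and alanine aminotransferase (ALT). The coefficients below are hypothetical placeholders, since the abstract does not publish the fitted values:

      import math

      def doylestown_score(afp, age, male, alp, alt, coef=None):
          """Logistic risk score; `coef` values are hypothetical, not the published fit."""
          if coef is None:
              coef = {"intercept": -5.0, "log_afp": 1.0, "age": 0.05,
                      "male": 0.5, "alp": 0.01, "alt": -0.01}  # placeholders only
          z = (coef["intercept"]
               + coef["log_afp"] * math.log(afp)    # AFP enters log-transformed
               + coef["age"] * age
               + coef["male"] * (1 if male else 0)
               + coef["alp"] * alp
               + coef["alt"] * alt)
          return 1.0 / (1.0 + math.exp(-z))         # probability-like score in (0, 1)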

  9. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Many data mining applications adopt Artificial Neural Networks (ANNs) to solve problems, but training an ANN involves many issues: the number of labelled samples, training time and performance, the number of hidden layers, and the choice of transfer function. When the compared results are not as expected, it is difficult to know which dimension caused the deviation. The main reason is that an ANN fits the target output by modifying weights; it does not improve the original feature extraction algorithm for the image, but tends to reach the correct value by re-weighting the result. To address these problems, this paper puts forward a method to assist ANN-based image data analysis. Normally, a parameter is set as the value used to extract feature vectors when processing an image, and we regard this value as a weight. The experiment uses the values extracted from the feature points of Speeded Up Robust Features (SURF) images as the basis for training; SURF extracts different feature points according to the extracted values. We first perform a semi-supervised clustering on these values and use Modified K-Nearest Neighbors (MFKNN) for training and classification. The matching of unknown images is not a one-to-one complete comparison but compares only group centroids, mainly to save effort and speed up retrieval, and the retrieved results are observed and analyzed. In short, the method clusters and classifies using the nature of image feature points, assigns new values to groups with high error rates to produce new feature points, and feeds them into the input layer of the ANN for training; finally, a comparative analysis is made with the Back-Propagation Neural Network (BPN) of the Genetic Algorithm-Artificial Neural Network

  10. Improved Approximation Algorithms for Item Pricing with Bounded Degree and Valuation

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya

    When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. If the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that a store has a set V of n items and there is a set C of m customers who wish to buy those items. The goal of the store is to decide the price of each item to maximize its profit. We refer to this maximization problem as an item pricing problem. We classify item pricing problems according to how many items the store can sell or how the customers valuate the items. If the store can sell every item i in unlimited (resp. limited) amounts, we refer to this as unlimited supply (resp. limited supply). We say that the item pricing problem is single-minded if each customer j∈C wishes to buy a set ej⊆V of items and assigns valuation w(ej)≥0. For the single-minded item pricing problems (in unlimited supply), Balcan and Blum regarded them as weighted k-hypergraphs and gave several approximation algorithms. In this paper, we focus on the (pseudo) degree of k-hypergraphs and the valuation ratio, i.e., the ratio between the smallest and the largest valuations. For the single-minded item pricing problems (in unlimited supply), we then show improved approximation algorithms (for k-hypergraphs, general graphs, bipartite graphs, etc.) with respect to the maximum (pseudo) degree and the valuation ratio.
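    To make the setting concrete, here is the classic uniform-price baseline for single-minded customers in unlimited supply (each customer buys their bundle iff its total price is at most their valuation). This is the simple benchmark such approximation algorithms are measured against, not the paper's algorithm itself:

      def best_uniform_price(customers):
          """customers: list of (bundle_size, valuation) pairs, single-minded buyers,
          unlimited supply. Returns the profit-maximizing uniform per-item price."""
          # Only prices of the form valuation/size can be optimal uniform prices
          candidates = {v / s for s, v in customers if s > 0}
          best_price, best_profit = 0.0, 0.0
          for p in candidates:
              # A customer buys iff the bundle is affordable: p * size <= valuation
              profit = sum(p * s for s, v in customers if p * s <= v)
              if profit > best_profit:
                  best_price, best_profit = p, profit
          return best_price, best_profit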

  11. Deep sulcal landmarks: algorithmic and conceptual improvements in the definition and extraction of sulcal pits.

    PubMed

    Auzias, G; Brun, L; Deruelle, C; Coulon, O

    2015-05-01

    Interest has been growing recently in points of maximum depth within folds, the sulcal pits, which can be used as reliable cortical landmarks. These remarkable points on the cortical surface are defined algorithmically as the outcome of an automatic extraction procedure. The influence of several crucial parameters of the reference technique (Im et al., 2010) has not been evaluated extensively, and no optimization procedure has been proposed so far. Designing an appropriate optimization framework for these parameters is mandatory to guarantee the reproducibility of results across studies and to ensure the feasibility of sulcal pit extraction and analysis on large cohorts. In this work, we propose a framework specifically dedicated to the optimization of the parameters of the method. This optimization framework relies on new measures for better quantifying the reproducibility of the number of sulcal pits per region across individuals, in line with the assumption of one-to-one correspondence of sulcal roots across individuals, which is an explicit aspect of the sulcal roots model (Régis et al., 2005). Our procedure benefits from a combination of improvements, including the use of a convenient sulcal depth estimation, and is methodologically sound. Our experiments on two different groups of individuals, with a total of 137 subjects, show an increased reliability across subjects in deeper sulcal pits, as compared to the previous approach, and cover the entire cortical surface, including shallower and more variable folds that were not considered before. The effectiveness of our method ensures the feasibility of a systematic study of sulcal pits on large cohorts. On top of these methodological advances, we quantify the relationship between the reproducibility of the number of sulcal pits per region across individuals and their respective depth and demonstrate the relatively high reproducibility of several pits corresponding to shallower folds. Finally, we report new

  12. Possible breakthrough: Significant improvement of signal to noise ratio by stochastic resonance

    SciTech Connect

    Kiss, L.B.

    1996-06-01

    The simplest stochastic resonator is used, a level crossing detector (LCD), to investigate key properties of stochastic resonance (SR). It is pointed out that successful signal processing and biological applications of SR require working in the large signal limit (nonlinear transfer limit), which requires a completely new approach: a wide band input signal and a new, generalised definition of output noise. The new approach is illustrated by a new arrangement. The arrangement employs a special LCD, white input noise and a special, large, subthreshold wide band signal. For the first time in the history of SR (for a wide band input noise), the signal to noise ratio becomes much higher at the output of a stochastic resonator than at its input. In that way, SR is proven to have the potential to improve signal transfer. Note that the new arrangement seems to resemble neurone models; therefore, it also has potential for biological applications. © 1996 American Institute of Physics.

  13. Improved algorithms for parsing ESLTAGs: a grammatical model suitable for RNA pseudoknots.

    PubMed

    Rajasekaran, Sanguthevar; Al Seesi, Sahar; Ammar, Reda A

    2010-01-01

    Formal grammars have been employed in biology to solve various important problems. In particular, grammars have been used to model and predict RNA structures. Two such grammars are Simple Linear Tree Adjoining Grammars (SLTAGs) and Extended SLTAGs (ESLTAGs). Performances of techniques that employ grammatical formalisms critically depend on the efficiency of the underlying parsing algorithms. In this paper, we present efficient algorithms for parsing SLTAGs and ESLTAGs. Our algorithm for SLTAGs parsing takes O(min{m,n⁴}) time and O(min{m,n⁴}) space, where m is the number of entries that will ever be made in the matrix M (that is normally used by TAG parsing algorithms). Our algorithm for ESLTAGs parsing takes O(min{m,n⁴}) time and O(min{m,n⁴}) space. We show that these algorithms perform better, in practice, than the algorithms of Uemura et al.

  14. Significant improvement in Mn2O3 transition metal oxide electrical conductivity via high pressure

    NASA Astrophysics Data System (ADS)

    Hong, Fang; Yue, Binbin; Hirao, Naohisa; Liu, Zhenxian; Chen, Bin

    2017-03-01

    Highly efficient energy storage is in high demand for next-generation clean energy applications. As a promising energy storage material, the application of Mn2O3 is limited due to its poor electrical conductivity. Here, high-pressure techniques enhanced the electrical conductivity of Mn2O3 significantly. In situ synchrotron micro X-Ray diffraction, Raman spectroscopy and resistivity measurement revealed that resistivity decreased with pressure and dramatically dropped near the phase transition. At the highest pressure, resistivity reduced by five orders of magnitude and the sample showed metal-like behavior. More importantly, resistivity remained much lower than its original value, even when the pressure was fully released. This work provides a new method to enhance the electronic properties of Mn2O3 using high-pressure treatment, benefiting its applications in energy-related fields.

  15. Significant improvement in Mn2O3 transition metal oxide electrical conductivity via high pressure

    PubMed Central

    Hong, Fang; Yue, Binbin; Hirao, Naohisa; Liu, Zhenxian; Chen, Bin

    2017-01-01

    Highly efficient energy storage is in high demand for next-generation clean energy applications. As a promising energy storage material, the application of Mn2O3 is limited due to its poor electrical conductivity. Here, high-pressure techniques enhanced the electrical conductivity of Mn2O3 significantly. In situ synchrotron micro X-Ray diffraction, Raman spectroscopy and resistivity measurement revealed that resistivity decreased with pressure and dramatically dropped near the phase transition. At the highest pressure, resistivity reduced by five orders of magnitude and the sample showed metal-like behavior. More importantly, resistivity remained much lower than its original value, even when the pressure was fully released. This work provides a new method to enhance the electronic properties of Mn2O3 using high-pressure treatment, benefiting its applications in energy-related fields. PMID:28276479

  16. Significant improvement in Mn2O3 transition metal oxide electrical conductivity via high pressure.

    PubMed

    Hong, Fang; Yue, Binbin; Hirao, Naohisa; Liu, Zhenxian; Chen, Bin

    2017-03-09

    Highly efficient energy storage is in high demand for next-generation clean energy applications. As a promising energy storage material, the application of Mn2O3 is limited due to its poor electrical conductivity. Here, high-pressure techniques enhanced the electrical conductivity of Mn2O3 significantly. In situ synchrotron micro X-Ray diffraction, Raman spectroscopy and resistivity measurement revealed that resistivity decreased with pressure and dramatically dropped near the phase transition. At the highest pressure, resistivity reduced by five orders of magnitude and the sample showed metal-like behavior. More importantly, resistivity remained much lower than its original value, even when the pressure was fully released. This work provides a new method to enhance the electronic properties of Mn2O3 using high-pressure treatment, benefiting its applications in energy-related fields.

  17. Significant improvement of pig cloning efficiency by treatment with LBH589 after somatic cell nuclear transfer.

    PubMed

    Jin, Jun-Xue; Li, Suo; Gao, Qing-Shan; Hong, Yu; Jin, Long; Zhu, Hai-Ying; Yan, Chang-Guo; Kang, Jin-Dan; Yin, Xi-Jun

    2013-10-01

    The low success rate of animal cloning by somatic cell nuclear transfer (SCNT) is associated with epigenetic aberrancy, including the abnormal acetylation of histones. Altering the epigenetic status with histone deacetylase inhibitors (HDACi) enhances the developmental potential of SCNT embryos. In the current study, we examined the effects of LBH589 (panobinostat), a novel broad-spectrum HDACi, on the nuclear reprogramming and development of pig SCNT embryos in vitro. In experiment 1, we compared the in vitro developmental competence of nuclear transfer embryos treated with different concentrations of LBH589. Embryos treated with 50 nM LBH589 for 24 hours showed a significant increase in the rate of blastocyst formation compared with the control or embryos treated with 5 or 500 nM LBH589 (32.4% vs. 11.8%, 12.1%, and 10.0%, respectively, P < 0.05). In experiment 2, we examined the in vitro developmental competence of nuclear transfer embryos treated with 50 nM LBH589 for various intervals after activation and 6-dimethylaminopurine treatment. Embryos treated for 24 hours had higher rates of blastocyst formation than the other groups. In experiment 3, when the acetylation of H4K12 was examined by immunohistochemistry in SCNT embryos treated for 6 hours with 50 nM LBH589, the staining intensities in LBH589-treated SCNT embryos were significantly higher than in the control. In experiment 4, LBH589-treated nuclear transfer and control embryos were transferred into surrogate mothers, resulting in three (100%) and two (66.7%) pregnancies, respectively. In conclusion, LBH589 enhances the nuclear reprogramming and developmental potential of SCNT embryos by altering the epigenetic status and expression and by increasing blastocyst quality.

  18. [Significantly improved anthocyanin biosynthesis in suspension cultures of Vitis vinifera by process intensification].

    PubMed

    Qu, Jun-Ge; Yu, Xing-Ju; Zhang, Wei; Jin, Mei-Fang

    2006-03-01

    Low production is a ubiquitous problem that has prevented the commercialization of secondary metabolite production in plant cell culture. To examine effective approaches to improving secondary metabolite production in plant cell culture, an investigation of anthocyanin accumulation in suspension cultures of Vitis vinifera, as a model system, was initiated in our laboratory. In the present research, various elicitors and the precursor phenylalanine were used in combination to enhance anthocyanin production in suspension cultures of Vitis vinifera, and an integrated process combining elicitation, precursor feeding and light irradiation is reported for rational bioprocess design. Among the combined treatments of phenylalanine feeding with several elicitors (methyl-beta-cyclodextrin, dextran T-40, methyl jasmonate, and extracts of Aspergillus niger and Fusarium orthoceras), the combination with methyl jasmonate gave the highest anthocyanin production in suspension cultures of Vitis vinifera. Compared with the controls, the anthocyanin content (CV/g, FCW) and production (CV/L) increased by 2.7-fold and 3.4-fold, respectively. The optimum time for the addition of phenylalanine and methyl jasmonate was 4 days after inoculation. Two cell lines with different anthocyanin-producing capacities responded differently to the optimum combination treatment of 30 micromol/L phenylalanine feeding, 218 micromol/L methyl jasmonate elicitation and 3000 to approximately 4000 lx light illumination. The high- and low-anthocyanin-producing cell lines VV05 and VV06 produced maxima of 2975 and 4090 CV/L of anthocyanins, 2.5- and 5.2-fold those of the controls, respectively.

  19. Bayesian Species Identification under the Multispecies Coalescent Provides Significant Improvements to DNA Barcoding Analyses.

    PubMed

    Yang, Ziheng; Rannala, Bruce

    2017-03-09

    DNA barcoding methods use a single locus (usually the mitochondrial COI gene) to assign unidentified specimens to known species in a library based on a genetic distance threshold that distinguishes between-species divergence from within-species diversity. Recently developed species delimitation methods based on the multispecies coalescent (MSC) model offer an alternative approach to individual assignment using either single-locus or multilocus sequence data. Here we use simulations to demonstrate three features of an MSC method implemented in the program bpp. First, we show that with one locus, MSC can accurately assign individuals to species without the need for arbitrarily determined distance thresholds (as required for barcoding methods). We provide an example in which no single threshold or barcoding gap exists that can be used to assign all specimens without incurring high error rates. Second, we show that bpp can identify cryptic species that may be mis-identified as a single species within the library, potentially improving the accuracy of barcoding libraries. Third, we show that taxon rarity does not present any particular problems for species assignments using bpp, and that accurate assignments can be achieved even when only one or a few loci are available. Thus, concerns that have been raised that MSC methods may have problems analyzing rare taxa (singletons) are unfounded. Currently, barcoding methods enjoy a huge computational advantage over MSC methods and may be the only approach feasible for massively large datasets, but MSC methods may offer a more stringent test for species that are tentatively assigned by barcoding.

  20. The Strasbourg Large Refractor and Dome: Significant Improvements and Failed Attempts

    NASA Astrophysics Data System (ADS)

    Heck, Andre

    2009-01-01

    Founded by the German Empire in the late 19th century, Strasbourg Astronomical Observatory featured several novelties from the start. According to Mueller (1978), the separation of the observing buildings from the study area and from the astronomers' residence was a revolution in observatory construction. The instruments were, as much as possible, isolated from the vibrations of the buildings themselves. "Gas flames" and water were used to reduce temperature effects. Thus the Large Dome (ca 11 m diameter), housing the Large Refractor (ca 49 cm, then the largest in Germany) and covered by zinc over wood, could be cooled down by water running from the top. Reports (including by the French, who took over the observatory after World War I) are, however, virtually nonexistent on the effective usage and actual efficiency of such a system (which must have generated a significant amount of humidity locally). The paper details these technical attempts as well as the specificities of the instruments installed in this new observatory, intended as a showcase of German astronomy.

  1. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and faster convergence. Because scanning conditions differ across radiation fractions, the density ranges of the treatment image and the planning image may differ; in such cases, the improved demons algorithm can achieve faster and more accurate registration for radiotherapy.
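    For orientation, the classic (Thirion-style) demons force that diffeomorphic variants build on can be sketched as below; the paper's added gradient-constancy and transformation-error terms and its L-BFGS optimization are not reproduced here:

      import numpy as np

      def demons_force(fixed, warped_moving, eps=1e-8):
          """One classic demons update field (per pixel) for 2-D images."""
          gy, gx = np.gradient(fixed)            # spatial gradient of the fixed image
          diff = warped_moving - fixed           # intensity mismatch driving the update
          denom = gx**2 + gy**2 + diff**2 + eps  # Thirion's stabilized denominator
          ux = diff * gx / denom
          uy = diff * gy / denom
          return ux, uy                          # displacement increments per pixel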

  2. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    PubMed

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
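    A compact sketch of the iterative Gaussian-weighted centroid described above, for a 4 × 4 array with individual channel readout; the Gaussian width and iteration count below are assumptions (the paper optimizes the width per scintillator configuration):

      import numpy as np

      def iterative_weighted_cog(signals, positions, sigma=1.5, n_iter=20):
          """signals: (16,) channel amplitudes; positions: (16, 2) SiPM centres (mm)."""
          est = signals @ positions / signals.sum()       # plain centre of gravity
          for _ in range(n_iter):
              d2 = ((positions - est) ** 2).sum(axis=1)   # squared distance to estimate
              w = signals * np.exp(-d2 / (2 * sigma**2))  # Gaussian re-weighting
              est = w @ positions / w.sum()               # updated event position
          return est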

  3. A New Algorithm Using the Non-dominated Tree to improve Non-dominated Sorting.

    PubMed

    Gustavsson, Patrik; Syberfeldt, Anna

    2017-01-19

    Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback applies also to other non-dominating sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS) which works well on a limited number of objectives but deteriorates when the number of objectives grows. This paper presents a new, more efficient, algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the paper, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
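    For contrast with the tree-based approach, the baseline dominance test and the quadratic non-dominated sort that FNS and ENS-NDT accelerate can be sketched as follows (minimization assumed on all objectives):

      def dominates(a, b):
          """a dominates b: no worse in every objective, strictly better in one."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def non_dominated_fronts(pop):
          """Naive O(M * N^2) sort into Pareto fronts (the baseline being improved)."""
          fronts, remaining = [], list(range(len(pop)))
          while remaining:
              front = [i for i in remaining
                       if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
              fronts.append(front)
              remaining = [i for i in remaining if i not in front]
          return fronts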

  4. Optimization of frequency lowering algorithms for getting the highest speech intelligibility improvement by hearing loss simulation.

    PubMed

    Arıöz, Umut; Günel, Banu

    2015-06-01

    High frequency hearing loss is a growing problem for both children and adults. To overcome this impairment, different frequency lowering methods (FLMs) have been tried since the 1930s, but none has been fully satisfactory. In this study, to obtain higher speech intelligibility, eight originally designed combinations of FLMs were tried with simulated sounds on normal-hearing subjects. Improvements were calculated as the difference from the standard hearing aid method, amplification. High frequency hearing loss was simulated with combined suprathreshold effects. An offline study was carried out for each subject to determine the significant methods used in the modified rhyme test (MRT) (a subjective measure of intelligibility). Significant methods were determined according to their speech intelligibility index (SII) (an objective measure of intelligibility). All cases were tried under four noisy environments and a noise-free environment. Twelve hearing-impaired subjects were simulated by hearing loss simulation (HLS), and the MRT was developed for the Turkish language for the first time. In total, 71 cases across the twelve subjects showed statistically significant improvement. FLMs achieved an 83% success rate against amplification, supporting them as an alternative to amplification in noisy environments. For four subjects, all significant methods gave higher improvements than amplification. In conclusion, specific method recommendations for different noisy environments were made for each subject to obtain higher speech intelligibility.

  5. Significant improvements to the GBT surface accuracy via high-resolution radio holography

    NASA Astrophysics Data System (ADS)

    Hunter, Todd R.; Schwab, Fred R.; White, Steve D.; Ford, John M.; Ghigo, Frank D.; Maddalena, Ron J.; Mason, Brian S.; Nelson, J. D.; Ray, Jason; Simon, Bob

    2010-01-01

    The 100-m diameter Green Bank Telescope (GBT) was built with an active surface of 2209 actuators in order to achieve and maintain an accurate paraboloidal shape. The actuator home positions were set originally via photogrammetry performed 10 years ago, which resulted in a surface accuracy of about 400 microns rms. In order to improve this performance, in late Fall 2008 we installed a Ku-band holography system on the telescope, composed of two external-reference low-noise block converters attached to linearly-polarized feeds, and followed by down conversion stages, anti-aliasing filters, and a digital correlator. The primary receiver illuminates the subreflector from the standard Gregorian focus while the reference receiver is coupled to an upward-looking 30cm diameter feed located at the tip of the vertical feed arm (above the subreflector). The system is tunable over the typical geostationary satellite downlink frequency band (11.7-12.2 GHz) and the correlated bandwidth is 10 kHz. We performed a spectral survey of a few dozen satellites visible from Green Bank, and identified a number of strong and stable continuous wave beacons near 11.700 GHz suitable for holographic mapping. The typical phase stability of the system is 2 degrees rms in 36 millisecond integrations, and is mostly limited by atmosphere. We began the holography campaign in January 2009. Maps are made with on-the-fly raster scanning over a 2 degree region with 1400 points in the scan direction and 201 points in the perpendicular direction, requiring approximately 3 hours. Surface features as small as 0.5m are visible, compared to the typical panel size of 2m by 2.5m. A number of large features coincident with specific actuators were identified, and traced to electrical problems either with the actuator motors, position sensors or cabling. These problems were repaired during the following months as the campaign continued through several iterations of holography mapping, surface adjustments, and

  6. An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters.

    PubMed

    Behrens, F; Mackeben, M; Schröder-Preikschat, W

    2010-08-01

    This analysis of time series of eye movements is a saccade-detection algorithm that is based on an earlier algorithm. It achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds are calculated automatically from the preceding acceleration data for detecting the beginning of a saccade, and thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration with respect to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades following the main-sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye movement data (obtained by EOG) recorded during driving a car. A second demonstration of the algorithm detects microsleep episodes in eye movement data.
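    A minimal sketch of an adaptive-threshold detector in the spirit described (thresholds derived from the preceding acceleration data); the window length, multiplier, and derivative estimation are assumptions, and the paper's monotonicity test, saccade-end detection, and artifact handling are omitted:

      import numpy as np

      def saccade_onsets(position, fs, k=3.0, win_s=0.2):
          """Flag samples whose |acceleration| exceeds k x RMS of the preceding window."""
          vel = np.gradient(position) * fs   # velocity (units/s)
          acc = np.gradient(vel) * fs        # acceleration (units/s^2)
          n = max(int(win_s * fs), 1)        # adaptive-threshold window (samples)
          onsets = []
          for i in range(n, len(acc)):
              rms = np.sqrt(np.mean(acc[i - n:i] ** 2))
              if rms > 0 and abs(acc[i]) > k * rms:
                  onsets.append(i)
          return onsets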

  7. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    PubMed

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on the extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.

  8. Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Chunhua; Wang, Jiang; Yi, Guosheng

    2017-03-01

    Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring the adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm easily falls into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic mapping to adjust the inertia weights according to the fitness value, effectively improving the global convergence ability of the algorithm. The accurate prediction of the firing trajectories of the rebuilt model using the estimated parameters proves that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared to the improved PSO to verify that the algorithm proposed in this paper can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
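    A sketch of one velocity-position update with an inertia weight that blends a concave decay and a logistic chaotic term, in the spirit of the description; the exact blend used by the authors is not specified in the abstract, so the mixing factor below is an assumption:

      import numpy as np

      def chaotic_inertia(it, max_it, z, w_min=0.4, w_max=0.9, mix=0.1, mu=4.0):
          """Concave decay of w plus a small logistic-map perturbation; returns (w, z)."""
          z = mu * z * (1.0 - z)  # logistic chaotic map, z in (0, 1)
          w_concave = w_min + (w_max - w_min) * (1.0 - (it / max_it) ** 2)
          return (1.0 - mix) * w_concave + mix * z, z

      def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
          """Standard PSO velocity/position update for arrays of particles."""
          r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          return x + v, v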

  9. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements to interpolate a three-dimensional spline modeling the patient's skull. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided through the use of a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm able to calculate the contour of the skull directly from an image set and implement the measured beamlet data in shot time calculations was developed. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDD of the beamlets were slight, with a maximum of 1.6% found with the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations of no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%. The skull scaling affected the shot time calculations the most significantly, with differences of up to 5

  10. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, Stephen M.; Slaughter, Charles D.

    1992-01-01

    Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.

  11. A neural-network-based exponential H∞ synchronisation for chaotic secure communication via improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hsiao, Feng-Hsiag

    2016-10-01

    In this study, a novel approach via improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. IGA, which exhibits better performance than traditional GA, is used to synthesise a fuzzy observer to not only realise the exponential synchronisation, but also achieve optimal H∞ performance by minimizing the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.

  12. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm toward a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected as the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.
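
    As context for the method, the sketch below implements one canonical QPSO position update, the base step that EB-QPSO builds on; the elitist-breeding/transposon operators themselves are omitted, and the parameter names are our own.

      import numpy as np

      rng = np.random.default_rng(1)

      def qpso_step(x, pbest, gbest, beta=0.75):
          # x, pbest: (n_particles, dim); gbest: (dim,). One standard QPSO update:
          # each particle collapses toward a stochastic local attractor p, with a
          # jump scale set by its distance to the mean best position.
          n, d = x.shape
          mbest = pbest.mean(axis=0)                  # mean of personal bests
          phi = rng.random((n, d))
          p = phi * pbest + (1 - phi) * gbest         # local attractor
          u = rng.random((n, d))
          sign = np.where(rng.random((n, d)) < 0.5, -1.0, 1.0)
          return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)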

  13. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r² = 0.975, m.s.e. = 0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559
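
    The final predictive step reduces to an ordinary log-log regression of mass on α-shape volume. A minimal sketch under assumed data (the volume/mass pairs below are invented for illustration, not the paper's dataset):

      import numpy as np

      # Hypothetical (alpha-shape volume m^3, body mass kg) pairs for extant mammals.
      vol = np.array([0.05, 0.12, 0.45, 1.10, 2.30, 4.80])
      mass = np.array([48.0, 115.0, 430.0, 1050.0, 2200.0, 4600.0])

      # Log-log least squares: log10(mass) = a * log10(volume) + b,
      # the usual form of a volumetric predictive equation.
      a, b = np.polyfit(np.log10(vol), np.log10(mass), 1)

      def predict_mass(v):
          return 10 ** (a * np.log10(v) + b)

      # e.g. an alpha-shape volume computed for a fossil skeleton:
      print(predict_mass(3.8))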

  14. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates.

    PubMed

    Brassey, Charlotte A; Gardiner, James D

    2015-08-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new 'alpha shapes' (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r² = 0.975, m.s.e. = 0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons.

  15. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph

    PubMed Central

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-01-01

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. The approach converts the fusion problem into one of connecting factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method. PMID:28335570
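
    As a toy illustration of the factorization idea, the sketch below builds a tiny linear 1-D factor graph in which motion factors link successive states and asynchronous absolute fixes attach to whichever state is nearest their timestamp, then solves it by weighted least squares. The structure, noise values, and measurements are invented for illustration; a real MUAV system would use non-linear factors and incremental solvers.

      import numpy as np

      # Four fused states x0..x3; each measurement becomes one weighted factor.
      n = 4
      rows, rhs, w = [], [], []

      def add_factor(coeffs, value, sigma):
          rows.append(coeffs); rhs.append(value); w.append(1.0 / sigma)

      # Motion factors: x[k+1] - x[k] = measured displacement (e.g. from an IMU).
      for k, (d, s) in enumerate([(1.0, 0.1), (0.9, 0.1), (1.1, 0.1)]):
          c = np.zeros(n); c[k + 1], c[k] = 1.0, -1.0
          add_factor(c, d, s)

      # Asynchronous absolute position fixes (e.g. GNSS) landing on x0 and x2.
      for k, (z, s) in [(0, (0.0, 0.5)), (2, (2.1, 0.3))]:
          c = np.zeros(n); c[k] = 1.0
          add_factor(c, z, s)

      # MAP estimate = weighted least squares over the product of all factors.
      A = np.array(rows) * np.array(w)[:, None]
      b = np.array(rhs) * np.array(w)
      x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(x_hat)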

  16. Improving chemical mapping algorithm and visualization in full-field hard x-ray spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong

    2013-12-01

    X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-Dimensional (2D) chemical mapping has been successfully applied to study many functional materials, determining the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with a weighted sum of standard spectra of individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute force enumeration method, and (ii) a constrained least square minimization algorithm proposed by us. Next, since 2D spectral fitting can be conducted pixel by pixel, both methods can in principle be implemented in parallel. In order to demonstrate the feasibility of parallel computing in the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the raw data.
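
    A minimal sketch of the per-pixel constrained fit, assuming non-negativity is enforced with SciPy's nnls and the sum-to-one constraint is approximated by a heavily weighted extra equation (a common soft-constraint trick; the authors' exact formulation may differ):

      import numpy as np
      from scipy.optimize import nnls

      def fit_pixel(spectrum, refs, rho=1e3):
          # Model the measured attenuation spectrum as a non-negative weighted
          # sum of standard spectra; the weights are composition fractions.
          # The extra row of rho's softly enforces sum(weights) = 1.
          A = np.vstack([refs.T, rho * np.ones(refs.shape[0])])
          b = np.append(spectrum, rho)
          w, _ = nnls(A, b)
          return w

      # refs: (n_components, n_energies) standard spectra; pix: one pixel's spectrum.
      refs = np.array([[0.9, 0.7, 0.5, 0.4],
                       [0.2, 0.4, 0.6, 0.7]])
      pix = 0.3 * refs[0] + 0.7 * refs[1]
      print(fit_pixel(pix, refs))   # ~[0.3, 0.7]

    Because each pixel is fitted independently, mapping this function over the image with a process pool parallelises exactly in the pixel-by-pixel fashion described above.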

  17. Exploiting a geometrically sampled grid in the steered response power algorithm for localization improvement.

    PubMed

    Salvati, D; Drioli, C; Foresti, G L

    2017-01-01

    The steered response power phase transform (SRP-PHAT) is a beamforming method that is very attractive in acoustic localization applications due to its robustness in reverberant environments. This paper presents a spatial grid design procedure, called the geometrically sampled grid (GSG), which aims at computing the spatial grid by taking into account the discrete sampling of time difference of arrival (TDOA) functions and the desired spatial resolution. An SRP-PHAT localization algorithm based on the GSG method is also introduced. The proposed method exploits the intersections of the discrete hyperboloids representing the TDOA information domain of the sensor array, and projects the whole TDOA information onto the spatial search grid. The GSG method thus allows one to design the sampled spatial grid that best serves as a search grid for a given sensor array, to perform a sensitivity analysis of the array, and to characterize its spatial localization accuracy; it may also assist the system designer in reconfiguring the array. Experimental results using both simulated data and real recordings show that the localization accuracy is substantially improved both for high and for low spatial resolution, and that it is closely related to the proposed power response sensitivity measure.
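
    SRP-PHAT accumulates, over the candidate grid points, the GCC-PHAT correlation of every microphone pair; the sketch below shows that per-pair building block (function and parameter names are ours, not from the paper):

      import numpy as np

      def gcc_phat(sig, ref, fs, max_tau=None):
          # Cross power spectrum with PHAT weighting: magnitude is discarded,
          # keeping phase only, which gives robustness in reverberation.
          n = len(sig) + len(ref)
          S = np.fft.rfft(sig, n=n)
          R = np.fft.rfft(ref, n=n)
          X = S * np.conj(R)
          X /= np.abs(X) + 1e-12
          cc = np.fft.irfft(X, n=n)
          # Restrict to physically plausible lags and pick the peak as the TDOA.
          max_shift = n // 2 if max_tau is None else int(fs * max_tau)
          cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
          tau = (np.argmax(np.abs(cc)) - max_shift) / fs
          return tau, cc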

  18. A procedure for the reliability improvement of the oblique ionograms automatic scaling algorithm

    NASA Astrophysics Data System (ADS)

    Ippolito, Alessandro; Scotto, Carlo; Sabbagh, Dario; Sgrigna, Vittorio; Maher, Phillip

    2016-05-01

    A procedure based on the combined use of the Oblique Ionogram Automatic Scaling Algorithm (OIASA) and the Autoscala program is presented. Using Martyn's equivalent path theorem, 384 oblique soundings from a high-quality data set have been converted into vertical ionograms and analyzed with the Autoscala program. The ionograms pertain to the radio link between Curtin W.A. (CUR) and Alice Springs N.T. (MTE), Australia, geographical coordinates (17.60°S; 123.82°E) and (23.52°S; 133.68°E), respectively. The critical frequency foF2 values extracted from the converted vertical ionograms by Autoscala were then compared with the foF2 values derived from the maximum usable frequencies (MUFs) provided by OIASA. A quality factor Q for the MUF values autoscaled by OIASA has been identified. Q represents the difference between the foF2 value scaled by Autoscala from the converted vertical ionogram and the foF2 value obtained by applying the secant law to the MUF provided by OIASA. Using the receiver operating characteristic curve, an appropriate threshold level Qt was chosen for Q to improve the performance of OIASA.
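
    In simplified form, the quality factor compares Autoscala's foF2 against the foF2 implied by the MUF through the classic secant law MUF = foF2 · sec(φ). The sketch below omits the obliquity corrections a real scaler would apply, and the numbers are illustrative only:

      import numpy as np

      def q_factor(fof2_autoscala, muf, incidence_deg):
          # foF2 implied by the MUF via the secant law: foF2 = MUF * cos(phi).
          fof2_from_muf = muf * np.cos(np.radians(incidence_deg))
          return abs(fof2_autoscala - fof2_from_muf)

      # e.g. foF2 = 7.2 MHz from the converted vertical ionogram,
      # MUF = 18.5 MHz over a path with ~67 deg angle of incidence:
      print(q_factor(7.2, 18.5, 67.0))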

  19. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph.

    PubMed

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-03-21

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. The approach converts the fusion problem into one of connecting factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method.

  20. An improved fixed phased demodulation method combined with phase generated carrier (PGC) and ellipse fitting algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Feng; Hou, Lu; Yang, Jun; Yuan, Yonggui; Li, Chuang; Yan, Dekai; Yuan, Libo; Zheng, Hui; Chang, Zheng; Ma, Kun; Yang, Jiyong

    2015-08-01

    In this paper, we present an improved fixed phased demodulation method combined with phase generated carrier (PGC) and e