Science.gov

Sample records for algorithm significantly improves

  1. A Matter of Timing: Identifying Significant Multi-Dose Radiotherapy Improvements by Numerical Simulation and Genetic Algorithm Search

    PubMed Central

    Angus, Simon D.; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent feature of the top-performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost
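
    As a rough illustration of the kind of search described above, the sketch below runs a toy genetic algorithm over inter-fraction gap times. The `simulate_tumour_count` surrogate is invented here for illustration; it is not the paper's EMT6/Ro simulation model.

    ```python
    import random

    def simulate_tumour_count(gaps):
        # Toy stand-in for the paper's EMT6/Ro simulation: rewards gap times
        # near a 17-18 h periodicity (lower surrogate count is better).
        return sum((g - 17.5) ** 2 for g in gaps)

    def ga_search(n_gaps=9, pop_size=40, generations=60,
                  lo=10.0, hi=23.0, mut_sd=1.0):
        pop = [[random.uniform(lo, hi) for _ in range(n_gaps)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=simulate_tumour_count)        # best protocols first
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_gaps)      # one-point crossover
                child = a[:cut] + b[cut:]
                i = random.randrange(n_gaps)           # Gaussian mutation
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, mut_sd)))
                children.append(child)
            pop = parents + children
        return min(pop, key=simulate_tumour_count)

    print(ga_search())   # converges towards ~17.5 h gaps under this surrogate
    ```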

  2. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  3. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and the spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentration numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentration. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are only marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11×11 to 19×19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
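
    The baseline the paper compares against, direct matrix inversion decomposition, can be sketched in a few lines. The 2x2 mixing-matrix values below are illustrative assumptions; HYPR-NLM adds HYPR-LR filtering with a non-local-mean kernel on top of this step.

    ```python
    import numpy as np

    # Per-pixel two-material decomposition by direct matrix inversion; the
    # 2x2 mixing matrix values are illustrative assumptions.
    A = np.array([[0.6, 1.4],    # low-kVp attenuation per unit of materials 1, 2
                  [0.9, 0.7]])   # high-kVp attenuation

    def direct_decomposition(img_low, img_high):
        m = np.linalg.inv(A) @ np.stack([img_low.ravel(), img_high.ravel()])
        return m[0].reshape(img_low.shape), m[1].reshape(img_low.shape)

    low, high = np.random.rand(64, 64), np.random.rand(64, 64)
    m1, m2 = direct_decomposition(low, high)   # noise-amplified baseline images
    ```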

  4. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi'an Science and Technology Plan, China (Grant No. CXY1350(4)).

  5. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume significant bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and such algorithms cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD project aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  6. Algorithm for Detecting Significant Locations from Raw GPS Data

    NASA Astrophysics Data System (ADS)

    Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi

    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N²) algorithms since distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information by constructing random histograms using locality sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
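
    A minimal sketch of the density idea, assuming a plain grid histogram in place of the paper's locality-sensitive-hashing random histograms; the cell size and count threshold are invented parameters.

    ```python
    import numpy as np

    def dense_points(lat, lon, cell=1e-3, min_count=50):
        # Bucket points into grid cells and keep those in high-density cells;
        # density replaces fixed stay-duration/diameter thresholds.
        ij = np.stack([np.floor(lat / cell), np.floor(lon / cell)], axis=1)
        _, inverse, counts = np.unique(ij, axis=0, return_inverse=True,
                                       return_counts=True)
        keep = counts[inverse] >= min_count
        return lat[keep], lon[keep]

    lat = np.r_[np.random.normal(45.0, 2e-4, 300), np.random.uniform(44, 46, 300)]
    lon = np.r_[np.random.normal(9.0, 2e-4, 300), np.random.uniform(8, 10, 300)]
    print(dense_points(lat, lon)[0].size)   # mostly the clustered points remain
    ```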

  7. Improved Heat-Stress Algorithm

    NASA Technical Reports Server (NTRS)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

    NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
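
    For reference, the standard outdoor WBGT combines the three thermometer readings with fixed weights, as the sketch below shows; the NASA algorithm's contribution is estimating those inputs from routine weather measurements rather than dedicated instruments.

    ```python
    def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
        # Standard outdoor WBGT weighting of the three thermometer readings.
        return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

    print(wbgt_outdoor(25.0, 45.0, 32.0))   # 29.7 (deg C)
    ```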

  8. Improved Global Ocean Color Using Polymer Algorithm

    NASA Astrophysics Data System (ADS)

    Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques

    2010-12-01

    A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved when compared to the standard product, therefore increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data and its validation against in-situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications such as fishing or the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.

  9. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm, where our modifications add some elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving quality of results in all cases and significantly improving convergence speed. PMID:25165733

  10. Improved bat algorithm applied to multilevel image thresholding.

    PubMed

    Alihodzic, Adis; Tuba, Milan

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm, where our modifications add some elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving quality of results in all cases and significantly improving convergence speed. PMID:25165733
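
    A bare-bones sketch of the approach: maximize Otsu's between-class variance over threshold vectors with a bat-style search (a global pull toward the best bat plus a local random walk). The DE and ABC operators the paper adds are omitted, and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def between_class_variance(hist, thresholds):
        # Otsu's objective for multilevel thresholding (to be maximised).
        p = hist / hist.sum()
        bins = np.arange(p.size)
        edges = [0, *sorted(int(t) for t in thresholds), p.size]
        mu_total = (p * bins).sum()
        var = 0.0
        for a, b in zip(edges[:-1], edges[1:]):
            w = p[a:b].sum()
            if w > 0:
                mu = (p[a:b] * bins[a:b]).sum() / w
                var += w * (mu - mu_total) ** 2
        return var

    def bat_thresholds(hist, k=3, n_bats=30, iters=200, loudness=0.5):
        # Bat-style search: global pull toward the best bat plus a local
        # random walk; the paper's DE and ABC operators are omitted here.
        bats = rng.uniform(1, 254, size=(n_bats, k))
        vel = np.zeros_like(bats)
        fitness = lambda b: between_class_variance(hist, b)
        best = max(bats, key=fitness).copy()
        for _ in range(iters):
            freq = rng.uniform(0, 2, size=(n_bats, 1))
            vel += (bats - best) * freq
            cand = np.clip(bats + vel, 1, 254)
            for i in range(n_bats):
                trial = cand[i]
                if rng.random() < 0.5:                 # local random walk
                    trial = np.clip(best + loudness * rng.normal(size=k), 1, 254)
                if fitness(trial) > fitness(bats[i]):  # greedy acceptance
                    bats[i] = trial
            best = max(bats, key=fitness).copy()
        return np.sort(best.astype(int))

    img = rng.integers(0, 256, size=(64, 64))
    hist = np.bincount(img.ravel(), minlength=256)
    print(bat_thresholds(hist))
    ```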

  11. Improved MFCC algorithm in speaker recognition system

    NASA Astrophysics Data System (ADS)

    Shi, Yibo; Wang, Li

    2011-10-01

    In speaker recognition systems, one of the key feature parameters is the MFCC. How exactly and efficiently the MFCC parameters are extracted from speech signals therefore determines the performance of the system. Theoretically, MFCC parameters describe the spectrum envelope of the vocal tract characteristics and ignore the impact of the fundamental frequency. In practice, however, MFCCs can be influenced by the fundamental frequency, which can cause a palpable performance reduction. Smoothing MFCC (SMFCC), which is based on smoothing the short-term spectral amplitude envelope, has therefore been proposed to improve the MFCC algorithm. Experimental results show that the improved MFCC parameters, SMFCC, can effectively reduce the adverse influence of the fundamental frequency and improve the performance of the speaker recognition system. Especially for female speakers, who have a higher fundamental frequency, the recognition rate improves more significantly.
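
    A small sketch of the envelope-smoothing idea, assuming a simple moving average across frequency bins; the record does not specify the actual SMFCC smoother, so the kernel here is a stand-in.

    ```python
    import numpy as np

    def smoothed_envelope(frame, width=9):
        # Magnitude spectrum of one windowed frame, then a moving average
        # across frequency bins to suppress pitch harmonics (assumed smoother).
        spectrum = np.abs(np.fft.rfft(frame * np.hamming(frame.size)))
        kernel = np.ones(width) / width
        return np.convolve(spectrum, kernel, mode="same")

    frame = np.random.randn(400)        # one 25 ms frame at 16 kHz
    env = smoothed_envelope(frame)      # would feed the mel filterbank + DCT
    ```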

  12. Improved piecewise orthogonal signal correction algorithm.

    PubMed

    Feudale, Robert N; Tan, Huwei; Brown, Steven D

    2003-10-01

    Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It is demonstrated how the mathematics of this updated OSC algorithm were derived from the previous version and why some OSC versions may not be as appropriate to use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746

  13. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were genuinely related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, owing to the increase in spam links. In this paper, we first propose three methods to find "link farms," that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, using the trust-score algorithm and the method of finding link farms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
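
    For orientation, the unmodified HITS update the variants build on looks like this; the paper's spam-resistant variants down-weight or remove link-farm links before iterating (a sketch, not the authors' code).

    ```python
    import numpy as np

    def hits(adjacency, iters=50):
        # Plain HITS: mutually reinforcing authority and hub updates.
        hubs = np.ones(adjacency.shape[0])
        for _ in range(iters):
            auth = adjacency.T @ hubs          # pointed-to by good hubs
            auth /= np.linalg.norm(auth) or 1.0
            hubs = adjacency @ auth            # pointing to good authorities
            hubs /= np.linalg.norm(hubs) or 1.0
        return auth, hubs

    A = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]], float)           # A[i, j] = 1 if page i links to j
    print(hits(A))
    ```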

  14. An improved Camshift algorithm for target recognition

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Mao, Yusu

    2015-12-01

    The Camshift algorithm and the three-frame difference algorithm are popular target recognition and tracking methods. The Camshift algorithm requires manual initialization of the search window, which introduces subjective error, and the color histogram is calculated only at initialization, so the color probability model cannot be updated continuously. The three-frame difference method, on the other hand, does not require manual initialization of the search window and makes full use of the target's motion information to determine its range of motion, but it cannot determine the object's contours or exploit its color information. Therefore, an improved Camshift algorithm is proposed to overcome the disadvantages of the original: the three-frame difference operation is combined with the object's motion and color information to identify the target object. The improved Camshift algorithm is realized and shows better performance in the recognition and tracking of the target.
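
    The three-frame difference half of the combination is easy to sketch; the OpenCV calls are real, but the threshold value and the way the mask seeds the Camshift window are assumptions.

    ```python
    import cv2
    import numpy as np

    def three_frame_diff(f1, f2, f3, thresh=25):
        # Motion mask from three consecutive grayscale frames; pixels must
        # change in both frame pairs to count as moving.
        d1 = cv2.absdiff(f2, f1)
        d2 = cv2.absdiff(f3, f2)
        motion = cv2.bitwise_and(d1, d2)
        _, mask = cv2.threshold(motion, thresh, 255, cv2.THRESH_BINARY)
        return mask

    frames = [np.random.randint(0, 256, (120, 160), np.uint8) for _ in range(3)]
    mask = three_frame_diff(*frames)   # would seed the Camshift search window
    ```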

  15. Discovering simple DNA sequences by the algorithmic significance method.

    PubMed

    Milosavljević, A; Jurka, J

    1993-08-01

    A new method, 'algorithmic significance', is proposed as a tool for discovery of patterns in DNA sequences. The main idea is that patterns can be discovered by finding ways to encode the observed data concisely. In this sense, the method can be viewed as a formal version of Occam's Razor. In this paper the method is applied to discover significantly simple DNA sequences. We define DNA sequences to be simple if they contain repeated occurrences of certain 'words' and thus can be encoded in a small number of bits. Such a definition includes minisatellites and microsatellites. A standard dynamic programming algorithm for data compression is applied to compute the minimal encoding lengths of sequences in linear time. An electronic mail server for identification of simple sequences based on the proposed method has been installed at the Internet address pythia@anl.gov. PMID:8402207
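
    The significance score is the number of bits saved relative to a null encoding. The sketch below uses zlib as a stand-in compressor for the paper's dynamic-programming word-repeat encoder, so the absolute scores differ, but the idea is the same: large savings mark a significantly simple sequence.

    ```python
    import zlib

    def algorithmic_significance(seq: str) -> float:
        # Bits saved versus a uniform 2-bit-per-base null encoding; zlib is
        # an assumed stand-in for the paper's repeat-based encoder.
        null_bits = 2 * len(seq)                      # log2(4) bits per base
        coded_bits = 8 * len(zlib.compress(seq.encode(), 9))
        return null_bits - coded_bits

    print(algorithmic_significance("ACGT" * 200))     # repetitive: large positive
    print(algorithmic_significance("ACGTTGCAAT"))     # short: negative score
    ```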

  16. Improved local linearization algorithm for solving the quaternion equations

    NASA Technical Reports Server (NTRS)

    Yen, K.; Cook, G.

    1980-01-01

    The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.
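
    A worked sketch of the underlying kinematics, assuming scalar-first quaternions and piecewise-constant angular velocity: the closed-form step is what local linearization methods approximate, and the Euler step is the crude baseline.

    ```python
    import numpy as np

    def omega_matrix(w):
        # 4x4 skew matrix so that qdot = 0.5 * Omega(w) @ q (scalar-first q).
        wx, wy, wz = w
        return np.array([[0.0, -wx, -wy, -wz],
                         [wx,  0.0,  wz, -wy],
                         [wy, -wz,  0.0,  wx],
                         [wz,  wy, -wx,  0.0]])

    def step_euler(q, w, h):
        return q + 0.5 * h * omega_matrix(w) @ q

    def step_closed_form(q, w, h):
        # Exact propagation for constant w over the step, using
        # Omega(w)^2 = -|w|^2 * I to evaluate the matrix exponential.
        n = np.linalg.norm(w)
        if n == 0.0:
            return q
        a = 0.5 * n * h
        return (np.cos(a) * np.eye(4) + np.sin(a) / n * omega_matrix(w)) @ q

    q = np.array([1.0, 0.0, 0.0, 0.0])
    w = np.array([0.1, -0.2, 0.3])      # rad/s, assumed constant over the step
    print(step_euler(q, w, 0.01), step_closed_form(q, w, 0.01))
    ```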

  17. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms for training feed forward neural networks. However, the convergence of this algorithm is slow, mainly because it is based on the gradient descent method. Previous research demonstrated that in the 'feed forward' algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function. The gain values change adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed. Multi-layer feed forward neural networks have been assessed. A physical interpretation of the relationship between the gain value and the learning rate and weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by means of simulation on four classification problems. In learning the patterns, the simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer dataset with an improvement ratio of nearly 2.8, 1.76 on the diabetes problem, 65% better on the thyroid data set, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
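
    A toy single-neuron sketch of the idea: the sigmoid takes a trainable gain c, and both the weights and the gain are updated by gradient descent. The architecture, learning rate, and data are invented for illustration; the paper's per-node update rule is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(net, gain):
        # Activation with trainable gain c: f(net) = 1 / (1 + exp(-c * net)).
        return 1.0 / (1.0 + np.exp(-gain * net))

    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    w, gain, lr = rng.normal(size=4) * 0.1, 1.0, 0.5
    for _ in range(300):
        net = X @ w
        out = sigmoid(net, gain)
        g = (out - y) * out * (1 - out)        # common error-slope term
        w -= lr * X.T @ (g * gain) / len(X)    # weight step: slope scaled by gain
        gain -= lr * np.mean(g * net)          # adaptive gain step
    print("accuracy:", np.mean((out > 0.5) == (y > 0.5)))
    ```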

  18. Improving the algorithm of temporal relation propagation

    NASA Astrophysics Data System (ADS)

    Shen, Jifeng; Xu, Dan; Liu, Tongming

    2005-03-01

    In a military Multi-Agent System, every agent needs to analyze the temporal relationships among tasks or combat behaviors, and it is very important to reflect the battlefield situation in time. The temporal relations among agents are usually very complex, and we model them with an interval algebra (IA) network. An efficient temporal reasoning algorithm is therefore vital in a battle MAS model. The core of temporal reasoning is the path consistency algorithm, so an efficient path consistency algorithm is necessary. In this paper we used the Interval Matrix Calculus (IMC) method to represent the temporal relation, and optimized the path consistency algorithm by improving the efficiency of temporal relation propagation, based on Allen's path consistency algorithm.

  19. Recent BRCAPRO upgrades significantly improve calibration

    PubMed Central

    Mazzola, Emanuele; Chipman, Jonathan; Cheng, Su-Chun; Parmigiani, Giovanni

    2014-01-01

    The recent release of version 2.0-8 of the BayesMendel package contains an updated BRCAPRO risk prediction model, which includes revised modeling of Contralateral Breast Cancer (CBC) penetrance, provisions for pedigrees of mixed ethnicity and an adjustment for mastectomies among family members. We estimated penetrance functions for contralateral breast cancer by a combination of parametric survival modeling of literature data and deconvolution of SEER9 data. We then validated the resulting updated model of CBC in BRCAPRO by comparing it with the previous release (BayesMendel 2.0-7), using pedigrees from the Cancer Genetics Network (CGN) Model Validation Study. Version 2.0-8 of BRCAPRO discriminates BRCA1/BRCA2 carriers from non-carriers with similar accuracy compared to the previous version (increase in AUC: 0.0043), is slightly more precise in terms of RMSE (decrease in RMSE: 0.0108), and it significantly improves calibration (ratio of observed to expected events of 0.9765 in version 2.0-8, compared to 0.8910 in version 2.0-7). We recommend that the new version be used in clinical counseling, particularly in settings where families with CBC are common. PMID:24891549
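
    The calibration metric quoted here is simply the ratio of observed to expected events; a one-liner makes the comparison concrete (the event counts below are illustrative, chosen only to reproduce the quoted ratios).

    ```python
    def oe_ratio(observed, expected):
        # Observed/expected event ratio; 1.0 means perfect calibration.
        return observed / expected

    print(oe_ratio(401, 410.6))   # ~0.977, cf. 0.9765 for BRCAPRO 2.0-8
    print(oe_ratio(401, 450.1))   # ~0.891, cf. 0.8910 for BRCAPRO 2.0-7
    ```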

  20. Improving Search Algorithms by Using Intelligent Coordinates

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent eta is self-interested; it sets its variable to maximize its own function g_eta. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  1. An on-line template improvement algorithm

    NASA Astrophysics Data System (ADS)

    Yin, Yilong; Zhao, Bo; Yang, Xiukun

    2005-03-01

    In an automatic fingerprint identification system, an incomplete or rigid template may lead to false rejection and false matching. How to improve the quality of the template, which is called template improvement, is therefore important to an automatic fingerprint identification system. In this paper, we propose a template improvement algorithm. Based on the case-based method of machine learning and probability theory, we improve the template by deleting pseudo minutiae, restoring lost genuine minutiae and updating minutia information such as positions and directions. A special fingerprint image database was built for this work. Experimental results on this database indicate that our method is effective and that the quality of the fingerprint template is evidently improved. Accordingly, the performance of fingerprint matching also improves steadily as usage time increases.

  2. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks. We
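
    The diffusion step can be sketched with a graph Laplacian; this mirrors the HotNet-style influence computation the abstract describes, with gamma as an assumed diffusion/insulation parameter rather than a value from the paper.

    ```python
    import numpy as np

    def influence_matrix(adj, gamma=0.1):
        # (L + gamma*I)^-1 spreads each gene's mutation signal to its
        # network neighbourhood; gamma (assumed) limits how far heat flows.
        laplacian = np.diag(adj.sum(axis=1)) - adj
        return np.linalg.inv(laplacian + gamma * np.eye(len(adj)))

    adj = np.array([[0, 1, 0],
                    [1, 0, 1],
                    [0, 1, 0]], float)     # tiny protein-interaction graph
    mutated = np.array([1.0, 0.0, 0.0])    # gene 0 mutated in one patient
    print(influence_matrix(adj) @ mutated) # influence decays with distance
    ```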

  3. Unsteady transonic algorithm improvements for realistic aircraft applications

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1987-01-01

    Improvements to a time-accurate approximate factorization (AF) algorithm were implemented for steady and unsteady transonic analysis of realistic aircraft configurations. These algorithm improvements were made to the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code developed at the Langley Research Center. The code permits the aeroelastic analysis of complete aircraft in the flutter critical transonic speed range. The AF algorithm of the CAP-TSD code solves the unsteady transonic small-disturbance equation. The algorithm improvements include: an Engquist-Osher (E-O) type-dependent switch to more accurately and efficiently treat regions of supersonic flow; extension of the E-O switch for second-order spatial accuracy in these regions; nonreflecting far field boundary conditions for more accurate unsteady applications; and several modifications which accelerate convergence to steady-state. Calculations are presented for several configurations including the General Dynamics one-ninth scale F-16C aircraft model to evaluate the algorithm modifications. The modifications have significantly improved the stability of the AF algorithm and hence the reliability of the CAP-TSD code in general.

  4. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection that optimizes the eight-direction Sobel edge detection operator, making the positioning of edge points more accurate and effectively reducing false edge information, so as to facilitate follow-up processing. In calculating crack geometry characteristics, we extract the skeleton to measure single-crack length. To calculate the crack area, we construct an area template by applying a logical bitwise AND operation to the crack image. Experimental results show that the errors between the crack detection method and actual manual measurement are within an acceptable range, meeting the needs of engineering applications. This algorithm is high-speed and effective for automated crack measurement and can provide more valid data for proper planning and appropriate performance of the maintenance and rehabilitation processes of bridges.
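
    A sketch of eight-direction Sobel edge detection, assuming compass kernels at 45-degree steps and a max-over-directions response; the paper's specific optimization of the operator is not detailed in this record.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    K0 = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)      # 0 degrees
    K45 = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float)     # 45 degrees
    KERNELS = [np.rot90(k, r) for k in (K0, K45) for r in range(4)]  # 8 directions

    def edge_map(img, thresh=100.0):
        # Max response over the eight compass directions marks edge candidates.
        responses = np.stack([convolve(img, k) for k in KERNELS])
        return responses.max(axis=0) > thresh

    img = np.random.rand(64, 64) * 255
    print(edge_map(img).mean())     # fraction of pixels flagged as edges
    ```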

  5. An improved algorithm for wildfire detection

    NASA Astrophysics Data System (ADS)

    Nakau, K.

    2010-12-01

    Satellite information on wildfire locations is in strong demand from society. Understanding these demands is therefore important when considering how to improve the wildfire detection algorithm. Interviews and considerations imply that the most important improvements are the geographical resolution of the wildfire product and the classification of fire as smoldering or flaming. Discussions were held with fire service agencies in Alaska and fire service volunteer groups in Indonesia. The Alaska Fire Service (AFS) makes a 3D map overlaid with fire locations every morning. This 3D map is then examined by leaders of fire service teams to decide their firefighting strategy. In particular, firefighters of both agencies seek the best walking path to approach the fire. Because of the mountainous landscape, geospatial resolution is very important to them; walking through bush for 1 km, the size of one pixel of the fire product, is very tough for firefighters. Also, in the case of remote wildfires, fire service agencies use satellite information to decide when to fly an observation to confirm the status: expanding, flaming, smoldering or out. Therefore, it is also quite important to provide the classification of fire as flaming or smoldering. Beyond disaster management, wildfires emit a huge amount of carbon into the atmosphere, as much as one quarter to one half of the CO2 from fuel combustion (IPCC AR4). Reducing CO2 emissions from human-caused wildfire is important. To estimate carbon emission from wildfire, spatial resolution is quite important. To improve the sensitivity of wildfire detection, the author adopts radiance-based wildfire detection. Unlike the existing brightness-temperature approach, this makes it easy to account for the reflectance of the background land cover. For GCOM-C1/SGLI in particular, the band used to detect fire at 250 m resolution is at a 1.6 μm wavelength, where there is much more reflected sunlight. Therefore, we need to

  6. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert E.

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Sounding." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  7. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    McHugh, Martin J.; Gordley, Larry L.; Russell, James M., III; Hervig, Mark E.

    1999-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Soundings." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first-year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multi-channel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  8. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert Earl; McHugh, Martin J.; Gordley, Larry L.; Hervig, Mark E.; Russell, James M., III; Douglass, Anne (Technical Monitor)

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth Upper Atmospheric Research Satellite (UARS) Science Investigator Program entitled 'HALOE Algorithm Improvements for Upper Tropospheric Sounding.' The goal of this effort is to develop and implement major inversion and processing improvements that will extend Halogen Occultation Experiment (HALOE) measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  9. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    algorithms by selecting ranges of the argument omega in which the performance is the fastest. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of the Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible since frequent calls to a subroutine providing this function are made (e.g., numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied in the algorithm previously published [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By suitable selection of the number of abscissas in Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20. Simultaneously, the accuracy of results has not been affected. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function, H(x,omega), were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature. It has been found, however, that the use of Gauss-Legendre quadrature considerably improved the performance of both algorithms. Two conditions have to be satisfied. (i) The number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with accuracy as high as possible. The abscissas and weights are available for N=16, 20, 24, 32, 40, 48, 64, 80, and 96 with accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing procedure ROMBERG. Due to the fact that the
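
    A compact sketch of the computation under discussion: the isotropic-scattering Chandrasekhar H function obtained by fixed-point iteration of its defining integral equation, with the integral evaluated by Gauss-Legendre quadrature as in the revised algorithm. The node count and the iteration scheme are illustrative (a standard textbook approach), not the published code.

    ```python
    import numpy as np

    def chandrasekhar_h(x, omega, n=64, iters=200):
        # Solve H(mu) = 1 + mu*H(mu) * Int_0^1 (omega/2)*H(t)/(mu + t) dt
        # on Gauss-Legendre nodes by fixed-point iteration (omega < 1).
        t, w = np.polynomial.legendre.leggauss(n)
        t, w = 0.5 * (t + 1.0), 0.5 * w          # map [-1, 1] -> [0, 1]
        h = np.ones(n)
        for _ in range(iters):
            integral = ((omega / 2) * w * h / (t[:, None] + t)).sum(axis=1)
            h = 1.0 / (1.0 - t * integral)       # implicit form of the update
        return 1.0 / (1.0 - x * ((omega / 2) * w * h / (x + t)).sum())

    print(chandrasekhar_h(0.5, 0.9))   # H(0.5, 0.9)
    ```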

  10. Improved Algorithms Speed It Up for Codes

    SciTech Connect

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  11. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before the optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm for the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  12. CSA: An efficient algorithm to improve circular DNA multiple alignment

    PubMed Central

    Fernandes, Francisco; Pereira, Luísa; Freitas, Ana T

    2009-01-01

    Background: The comparison of homologous sequences from different species is an essential approach to reconstruct the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot directly handle circular DNA. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process, before they can use multiple sequence alignment tools. Results: In this paper we propose an efficient algorithm that identifies the most interesting region to cut circular genomes in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms. This algorithm identifies the largest chain of non-repeated longest subsequences common to a set of circular mitochondrial DNA sequences. All the sequences are then rotated and made linear for multiple alignment purposes. To evaluate the effectiveness of this new tool, three different sets of mitochondrial DNA sequences were considered. Other tests considering randomly rotated sequences were also performed. The software package Arlequin was used to evaluate the standard genetic measures of the alignments obtained with and without the use of the CSA algorithm with two well known multiple alignment algorithms, the CLUSTALW and the MAVID tools, and also the visualization tool SinicView. Conclusion: The results show that a circularization and rotation pre-processing step significantly improves the efficiency of publicly available multiple sequence alignment algorithms when used in the

  13. High-speed scanning: an improved algorithm

    NASA Astrophysics Data System (ADS)

    Nachimuthu, A.; Hoang, Khoi

    1995-10-01

    In using machine vision for assessing an object's surface quality, many images must be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process, in the inspection of garments/canvas on the production line, and in the nesting of irregular shapes into a given surface. The most common method, subtracting the sum of defective areas from the total area, does not give an acceptable indication of how much of the 'good' area can be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of useable areas within an inspected surface in terms of the user's definition, not the supplier's claims; that is, how much area the user can actually use, not the total good area as estimated by the supplier. An important application of the developed technique is in the leather industry, where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument over disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading and re-sorting. The developed basic algorithm for area scanning of a digital image will be presented. The implementation of an improved scanning algorithm will be discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and the accurate estimation of useable areas.

  14. Improvements to the stand and hit algorithm

    SciTech Connect

    Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.

    1994-12-31

    The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving towards the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point. As well, we present the undetected-first rule for determining the hit constraints.
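
    A minimal sketch of the stand-and-hit idea for a system A x <= b: the constraint first hit along a random ray from the standing point is necessary. The example system, direction count, and seed are invented for illustration.

    ```python
    import numpy as np

    def stand_and_hit(A, b, x0, n_dirs=1000, seed=0):
        # From an interior standing point x0, shoot random rays and record
        # which constraint of {x : A x <= b} each ray hits first.
        rng = np.random.default_rng(seed)
        slack = b - A @ x0                    # positive for an interior point
        hit = set()
        for _ in range(n_dirs):
            d = rng.normal(size=A.shape[1])
            rates = A @ d
            with np.errstate(divide="ignore", invalid="ignore"):
                t = np.where(rates > 0, slack / rates, np.inf)
            hit.add(int(np.argmin(t)))        # first-hit constraint is necessary
        return hit

    A = np.array([[1.0, 0], [0, 1], [-1, 0], [0, -1], [1, 1]])
    b = np.array([1.0, 1, 0, 0, 5])           # x + y <= 5 is redundant here
    print(stand_and_hit(A, b, np.array([0.5, 0.5])))   # {0, 1, 2, 3}
    ```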

  15. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  16. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction methods, and design algorithms for the evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
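
    The Voigt profile named above (Gaussian core, Lorentzian wings) is commonly evaluated via the Faddeeva function; the sketch below takes that standard route, which is an assumption rather than necessarily the report's implementation. The per-order sigma and gamma values would come from the detector masks it describes.

    ```python
    import numpy as np
    from scipy.special import wofz

    def voigt(x, sigma, gamma):
        # Voigt profile: Gaussian (sigma) convolved with Lorentzian (gamma),
        # evaluated with the Faddeeva function w(z).
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

    x = np.linspace(-5, 5, 101)
    profile = voigt(x, sigma=1.0, gamma=0.5)   # cross-dispersion line profile
    ```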

  17. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm, giving full play to the advantages of each. The algorithm is verified on the current universal standard sequences: Fibonacci sequences and real protein sequences. Experiments show that the proposed new method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which proves it to be an effective way to predict the structure of proteins. PMID:25069136

  18. A multistrategy optimization improved artificial bee colony algorithm.

    PubMed

    Liu, Wen

    2014-01-01

    To address the artificial bee colony algorithm's tendency toward premature convergence and its slow convergence rate, an improved algorithm is proposed. Chaotic reverse-learning strategies were used to initialize the swarm in order to improve the algorithm's global search ability and preserve its diversity; the similarity degree of individuals was used to characterize population diversity; the population diversity measure was used as an indicator to dynamically and adaptively adjust nectar positions, effectively avoiding premature and local convergence; and a dual-population search mechanism was introduced in the search stage, whose parallel search considerably improved the convergence rate. Through simulation experiments on 10 standard test functions and comparison with other algorithms, the results showed that the improved algorithm had a faster convergence rate and a greater capacity for escaping local optima. PMID:24982924
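
    The chaotic reverse-learning initialization can be sketched as a logistic map plus opposition points; the map choice, warm-up length, and the final selection step (keep the fitter of each pair by objective value) are assumptions about the details, which this record does not specify.

    ```python
    import numpy as np

    def chaotic_reverse_init(n, dim, lo, hi, warmup=100):
        # Logistic map generates chaotic samples in (0, 1); each position
        # is paired with its "reverse" (opposition) point lo + hi - x.
        x = np.random.rand(n, dim)
        for _ in range(warmup):
            x = 4.0 * x * (1.0 - x)
        pos = lo + (hi - lo) * x
        reverse = lo + hi - pos
        return pos, reverse            # caller keeps the fitter of each pair

    pos, rev = chaotic_reverse_init(20, 5, lo=-5.0, hi=5.0)
    print(pos.shape, rev.shape)
    ```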

  19. RSA cipher algorithm improvements and VC programming realization

    NASA Astrophysics Data System (ADS)

    Wei, Xianmin

    2011-10-01

    This paper discusses the basic mathematical principle of the RSA algorithm and, on that basis, proposes a faster improved design. Implementation in Visual C showed that the improved RSA algorithm runs considerably faster than the unimproved version, while its security and resistance to cracking are not adversely affected.
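
    The record does not spell out the improvement, so the sketch below shows the classic CRT speed-up for RSA private-key operations as one representative optimization; the toy key sizes and this particular technique are assumptions, not the paper's design.

    ```python
    # Classic CRT optimisation: work modulo p and q separately, then
    # recombine with Garner's formula (~3-4x faster than plain pow(c, d, n)).
    p, q = 61, 53                   # toy primes; real keys use ~1024-bit primes
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))

    def decrypt_plain(c):
        return pow(c, d, n)

    def decrypt_crt(c):
        dp, dq = d % (p - 1), d % (q - 1)
        q_inv = pow(q, -1, p)
        mp, mq = pow(c % p, dp, p), pow(c % q, dq, q)
        return mq + q * ((q_inv * (mp - mq)) % p)

    c = pow(42, e, n)
    assert decrypt_plain(c) == decrypt_crt(c) == 42
    ```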

  20. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, differential evolution (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to the segmentation problem is a good choice due to its fast computational ability. In this paper, we first propose a new differential evolution algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with EA-based thresholding methods, the proposed DE algorithm yields more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
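
    As a sketch of the combination, SciPy's stock differential evolution (standing in for the paper's balance-strategy DE) can minimize the negative Otsu between-class variance over threshold vectors:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def neg_otsu(thresholds, hist):
        # Negative between-class variance (DE minimises).
        p = hist / hist.sum()
        bins = np.arange(p.size)
        edges = [0, *sorted(int(t) for t in thresholds), p.size]
        mu_total = (p * bins).sum()
        var = 0.0
        for a, b in zip(edges[:-1], edges[1:]):
            w = p[a:b].sum()
            if w > 0:
                mu = (p[a:b] * bins[a:b]).sum() / w
                var += w * (mu - mu_total) ** 2
        return -var

    hist, _ = np.histogram(np.random.randint(0, 256, 10000), bins=256,
                           range=(0, 256))
    result = differential_evolution(neg_otsu, bounds=[(1, 254)] * 3,
                                    args=(hist,), seed=0, maxiter=100)
    print(np.sort(result.x.astype(int)))   # three Otsu thresholds
    ```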

  1. Improved Algorithm For Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1989-01-01

    Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.

  2. Recent ATR and fusion algorithm improvements for multiband sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2009-05-01

    An improved automatic target recognition processing string has been developed. The overall processing string consists of pre-processing, subimage adaptive clutter filtering, normalization, detection, data regularization, feature extraction, optimal subset feature selection, feature orthogonalization and classification processing blocks. The objects that are classified by the 3 distinct ATR strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution three-frequency band sonar imagery. The ATR processing strings were individually tuned to the corresponding three-frequency band data, making use of the new processing improvement, data regularization; this improvement entails computing the input data mean, clipping the data to a multiple of its mean and scaling it, prior to feature extraction and resulted in a 3:1 reduction in false alarms. Two significant fusion algorithm improvements were made. First, a nonlinear exponential Box-Cox expansion (consisting of raising data to a to-be-determined power) feature LLRT fusion algorithm was developed. Second, a repeated application of a subset Box-Cox feature selection / feature orthogonalization / LLRT fusion block was utilized. It was shown that cascaded Box-Cox feature LLRT fusion of the ATR processing strings outperforms baseline "summing" and single-stage Box-Cox feature LLRT algorithms, yielding significant improvements over the best single ATR processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate.
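
    The fusion rule can be sketched as a Box-Cox feature expansion followed by a Gaussian log-likelihood-ratio test; the class models, lambda value, and threshold below are invented stand-ins for the fitted quantities the paper describes.

    ```python
    import numpy as np

    def box_cox(x, lam):
        # Box-Cox power expansion of positive confidence features.
        return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

    def llrt_fuse(confidences, lam, mu_t, sd_t, mu_c, sd_c, threshold=0.0):
        z = box_cox(np.asarray(confidences), lam)
        def loglik(z, mu, sd):
            return -0.5 * np.sum(((z - mu) / sd) ** 2 + np.log(2 * np.pi * sd**2))
        llr = loglik(z, mu_t, sd_t) - loglik(z, mu_c, sd_c)
        return llr > threshold          # declare "target" if the LLR is high

    # Three per-band classifier confidences fused into one decision:
    print(llrt_fuse([0.9, 0.7, 0.8], lam=0.5,
                    mu_t=0.6, sd_t=0.2, mu_c=0.1, sd_c=0.2))
    ```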

  3. An improved SIFT algorithm based on KFDA in image registration

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Yang, Lijuan; Huo, Jinfeng

    2016-03-01

    As a kind of stable feature matching algorithm, SIFT has been widely used in many fields. In order to further improve the robustness of the SIFT algorithm, an improved SIFT algorithm with Kernel Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm uses KFDA to SIFT descriptors for feature extraction matrix, and uses the new descriptors to conduct the feature matching, finally chooses RANSAC to deal with the matches for further purification. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and tiny pose with higher matching accuracy.

  4. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm, achieved by adopting fast k-nearest neighbor (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308
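
    The adaptive step can be illustrated on the CPU as below: brute-force kNN stands in for the paper's grid-based GPU search, and the density-to-power mapping is an illustrative assumption rather than the published formula.

```python
# CPU-side sketch of adaptive IDW (the paper's version runs on a GPU).
import numpy as np

def aidw(points, values, query, k=8, p_min=1.0, p_max=5.0):
    d = np.linalg.norm(points - query, axis=1)   # distances to all samples
    idx = np.argsort(d)[:k]                      # brute-force kNN
    dk = d[idx]
    # Adapt the power parameter to local density: sparse neighborhoods
    # (large mean kNN distance) get a larger power. This mapping is an
    # illustrative choice, not the published formula.
    density = dk.mean()
    alpha = np.clip(density / (density + 1.0), 0.0, 1.0)
    p = p_min + alpha * (p_max - p_min)
    w = 1.0 / np.maximum(dk, 1e-12) ** p         # inverse-distance weights
    return (w * values[idx]).sum() / w.sum()
```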

  5. Improved branch-cut method algorithm applied in phase unwrapping

    NASA Astrophysics Data System (ADS)

    Hu, Jiayuan; Zhang, Yu; Wu, Jianle; Li, Jinlong; Wang, Haiqing

    2015-12-01

    Phase unwrapping is a common problem in many phase-measuring techniques. Goldstein's branch-cut algorithm is a classic approach to phase unwrapping, but it needs refinement. The paper first introduces the characteristics of residual points and describes Goldstein's branch-cut algorithm in detail. It then discusses improvements to the algorithm obtained by changing the branch-cut placement and adding a pretreatment step. Finally, the paper summarizes the new algorithm and demonstrates better results through computer simulation and validation tests.
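
    The residue test that branch-cut methods, including Goldstein's, start from can be sketched directly: sum the wrapped phase differences around each elementary 2x2 loop; a sum of ±2π marks a residue that the branch cuts must connect. The helper names below are ours.

```python
# Residue (residual point) detection for wrapped phase maps.
import numpy as np

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def residues(phase):
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # right along top edge
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # down along right edge
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # left along bottom edge
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # up along left edge
    loop = d1 + d2 + d3 + d4                      # ±2π at residues, else 0
    return np.rint(loop / (2 * np.pi)).astype(int)  # +1 / -1 residue charges
```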

  6. Two Improved Algorithms for Envelope and Wavefront Reduction

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1997-01-01

    Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
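
    The Sloan enhancements themselves are not in common libraries, but the Reverse Cuthill-McKee baseline the entry compares against is available in SciPy; a sketch of measuring the envelope before and after reordering follows, with envelope_size as our own helper.

```python
# Compare envelope size before and after Reverse Cuthill-McKee reordering.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def envelope_size(A):
    A = sp.csr_matrix(A)
    rows, cols = A.nonzero()
    # For each row i, distance from the first nonzero column to the diagonal.
    first = np.full(A.shape[0], A.shape[0], dtype=int)
    np.minimum.at(first, rows, cols)
    return int(np.sum(np.maximum(np.arange(A.shape[0]) - first, 0)))

A = sp.random(200, 200, density=0.02, random_state=0, format="csr")
A = A + A.T + sp.eye(200)                 # symmetric pattern with full diagonal
perm = reverse_cuthill_mckee(sp.csr_matrix(A), symmetric_mode=True)
B = sp.csr_matrix(A)[perm][:, perm]       # apply the symmetric permutation
print(envelope_size(A), "->", envelope_size(B))
```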

  7. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling to any evaluation code. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations, for example computational fluid dynamics runs, often consume the most CPU time. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.

  8. Community detection based on modularity and an improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shang, Ronghua; Bai, Jing; Jiao, Licheng; Jin, Chao

    2013-03-01

    Complex networks are widely applied in every aspect of human society, and community detection is a research hotspot in complex networks. Many algorithms use modularity as the objective function, which simplifies the algorithm. In this paper, a community detection method based on modularity and an improved genetic algorithm (MIGA) is put forward. MIGA takes the modularity Q as the objective function and uses prior information (the number of community structures), which makes the algorithm more targeted and improves the stability and accuracy of community detection. Meanwhile, MIGA uses simulated annealing as its local search method, which improves the local search ability through parameter adjustment. Compared with state-of-the-art algorithms, simulation results on computer-generated and four real-world networks reflect the effectiveness of MIGA.
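
    The objective MIGA maximizes is the standard Newman-Girvan modularity; a dense-matrix sketch of Q for a given community labelling (our own helper, not the paper's code) is:

```python
# Modularity Q from an adjacency matrix and a community-label vector.
import numpy as np

def modularity(A, labels):
    k = A.sum(axis=1)                  # node degrees
    two_m = k.sum()                    # 2m = total degree
    same = labels[:, None] == labels[None, :]   # same-community indicator
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```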

  9. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels, or with low SNR, the traditional restoration algorithm loses validity, inducing noise amplification, ringing artifacts and poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome the shortcomings of the traditional algorithm. The improved algorithm introduces a new cost function that adds a space-adaptive regularization term and a non-unity gain for the adaptive filter. In determining the support region, a pre-segmentation is used to form it close to the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm achieves better convergence and noise resistance and provides a better estimate of the original image.

  10. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS.

    PubMed

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-01-01

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048

  11. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS

    PubMed Central

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-01-01

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS. PMID:26445048

  12. Optimization and Improvement of FOA Corner Cube Algorithm

    SciTech Connect

    McClay, W A; Awwal, A S; Burkhart, S C; Candy, J V

    2004-10-01

    Alignment of laser beams based on video images is a crucial task necessary to automate operation of the 192 beams at the National Ignition Facility (NIF). The final optics assembly (FOA) is the optical element that aligns the beam into the target chamber. This work presents an algorithm for determining the position of a corner cube alignment image in the final optics assembly. The improved algorithm was compared to the existing FOA algorithm on 900 noise-simulated images. While the existing FOA algorithm based on correlation with a synthetic template has a radial standard deviation of 1 pixel, the new algorithm based on classical matched filtering (CMF) and polynomial fit to the correlation peak improves the radial standard deviation performance to less than 0.3 pixels. In the new algorithm the templates are designed from real data stored during a year of actual operation.
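
    The two ingredients named, classical matched filtering and a polynomial fit to the correlation peak, can be sketched as below; the template design from stored operational data is not reproduced, and the helper names are ours.

```python
# Matched filter plus quadratic sub-pixel fit around the correlation peak.
import numpy as np
from scipy.signal import fftconvolve

def subpixel_peak(img, template):
    corr = fftconvolve(img, template[::-1, ::-1], mode="same")  # correlation
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    # Quadratic (parabolic) fit along each axis from the peak and neighbors;
    # assumes the peak is not on the image border.
    def refine(c_m, c_0, c_p):
        denom = c_m - 2 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom
    dy = refine(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = refine(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return iy + dy, ix + dx
```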

  13. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-01-01

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented which is based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position. PMID:25046019

  14. An improved harmony search algorithm for emergency inspection scheduling

    NASA Astrophysics Data System (ADS)

    Kallioras, Nikos A.; Lagaros, Nikos D.; Karlaftis, Matthew G.

    2014-11-01

    The ability of nature-inspired search algorithms to efficiently handle combinatorial problems, and their successful implementation in many fields of engineering and applied sciences, have led to the development of new, improved algorithms. In this work, an improved harmony search (IHS) algorithm is presented, while a holistic approach for solving the problem of post-disaster infrastructure management is also proposed. The efficiency of IHS is compared with that of the algorithms of particle swarm optimization, differential evolution, basic harmony search and the pure random search procedure, when solving the districting problem that is the first part of post-disaster infrastructure management. The ant colony optimization algorithm is employed for solving the associated routing problem that constitutes the second part. The comparison is based on the quality of the results obtained, the computational demands and the sensitivity on the algorithmic parameters.

  15. An improved sink particle algorithm for SPH simulations

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Walch, S.; Whitworth, A. P.

    2013-04-01

    Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.

  16. An improved Richardson-Lucy algorithm based on local prior

    NASA Astrophysics Data System (ADS)

    Yongpan, Wang; Huajun, Feng; Zhihai, Xu; Qi, Li; Chaoyue, Dai

    2010-07-01

    Ringing is one of the most common disturbing artifacts in image deconvolution. With a fully known kernel, the standard Richardson-Lucy (RL) algorithm succeeds in many motion deblurring tasks, but the resulting images still contain visible ringing. When the estimated kernel differs from the real one, the result of the standard RL iteration is worse still. To suppress the ringing artifacts caused by failures in blur kernel estimation, this paper improves the RL algorithm with a local prior. Firstly, the standard deviation of pixels in a local window is computed to find the smooth region, and the image gradient in that region is constrained so that its distribution is consistent with the deblurred image gradient. Secondly, to suppress ringing near the edges of rigid bodies in the image, a new mask is obtained by computing the sharp edges of the image produced in the first step. If the kernel is large-scale, with a rigid foreground and a smooth background, this step produces a significant inhibitory effect on ringing artifacts. Thirdly, the boundary constraint is strengthened if the boundary is relatively smooth. As a result of these steps, high-quality deblurred images can be obtained even when the estimated kernels are not perfectly accurate. Experiments on blurred images and the related kernel information captured by additional hardware prove the approach effective.
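
    For reference, the baseline Richardson-Lucy iteration that the local-prior terms modify is compact; the sketch below omits the paper's smooth-region gradient constraint and edge mask.

```python
# Baseline Richardson-Lucy deconvolution: x <- x * K^T(b / (K x)).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iters=30, eps=1e-12):
    x = np.full_like(blurred, blurred.mean())        # flat initial estimate
    psf_T = psf[::-1, ::-1]                          # adjoint of the convolution
    for _ in range(iters):
        ratio = blurred / (fftconvolve(x, psf, mode="same") + eps)
        x *= fftconvolve(ratio, psf_T, mode="same")  # multiplicative update
    return x
```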

  17. Improving CMD Areal Density Analysis: Algorithms and Strategies

    NASA Astrophysics Data System (ADS)

    Wilson, R. E.

    2014-06-01

    Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low areal densities (A), and large variation in A are reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.

  18. An Improved DINEOF Algorithm for Filling Missing Values in Spatio-Temporal Sea Surface Temperature Data

    PubMed Central

    Ping, Bo; Su, Fenzhen; Meng, Yunshan

    2016-01-01

    In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determining missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence based on every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved DINEOF algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on the optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) dataset is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm significantly enhances the accuracy of reconstruction and shortens the computational time. PMID:27195692
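
    The common core of the DINEOF family is an SVD-truncate-and-reimpute loop; a minimal sketch (fixed mode count, no cross-validation, zero initial guess for the gaps, our own helper name) is:

```python
# EOF gap filling in the DINEOF spirit: truncate an SVD, re-impute, repeat.
import numpy as np

def eof_fill(X, mask, n_modes=5, iters=50, tol=1e-6):
    """X: space-time matrix; mask: True where data are missing."""
    Xf = np.where(mask, 0.0, X)                 # zero guess assumes anomaly data
    prev = Xf[mask]
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        Xf = np.where(mask, recon, X)           # keep observed values fixed
        if np.linalg.norm(Xf[mask] - prev) < tol * (np.linalg.norm(prev) + 1e-12):
            break
        prev = Xf[mask]
    return Xf
```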

  19. Training Feedforward Neural Networks: An Algorithm Giving Improved Generalization.

    PubMed

    Lee, Charles W.

    1997-01-01

    An algorithm is derived for supervised training in multilayer feedforward neural networks. Relative to the gradient descent backpropagation algorithm it appears to give both faster convergence and improved generalization, whilst preserving the system of backpropagating errors through the network. Copyright 1996 Elsevier Science Ltd. PMID:12662887

  20. An Improved Neutron Transport Algorithm for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.

    2010-01-01

    Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.

  1. Improved 3-D turbomachinery CFD algorithm

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1988-01-01

    The building blocks of a computer algorithm developed for the time-accurate flow analysis of rotating machines are described. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The block LU implicit numerical scheme possesses apparent unconditional stability. Multi-block composite gridding is used to partition the field into an orderly arrangement. Block interfaces, including dynamic interfaces, are treated so as to mimic interior block communication. Special attention is given to the reduction of in-core memory requirements by placing the burden on secondary storage media. Broad applicability is implied, although the results presented are restricted to an even blade count configuration. Several other configurations are presently under investigation, the results of which will appear in subsequent publications.

  2. Multi-expert tracking algorithm based on improved compressive tracker

    NASA Astrophysics Data System (ADS)

    Feng, Yachun; Zhang, Hong; Yuan, Ding

    2015-12-01

    Object tracking is a challenging task in computer vision. Most state-of-the-art methods maintain an object model and update it with new examples obtained from incoming frames in order to deal with appearance variation. Updating the object model frame-by-frame without any censorship mechanism, however, inevitably introduces model drift. In this paper, we adopt a multi-expert tracking framework that is able to correct the effect of bad updates after they happen, such as those caused by severe occlusion; this is exactly the ability a robust tracking method should possess. The expert ensemble is constructed from a base tracker and its former snapshots, and the tracking result is produced by the current tracker, which is selected by means of a simple loss function. We adopt an improved compressive tracker as the base tracker in our work and modify it to fit the multi-expert framework. The proposed multi-expert tracking algorithm significantly improves the robustness of the base tracker, especially in scenes with frequent occlusions and illumination variations. Experiments on challenging video sequences, with comparisons to several state-of-the-art trackers, demonstrate the effectiveness of our method, and our tracking algorithm runs in real time.

  3. Improved genetic algorithm for fast path planning of USV

    NASA Astrophysics Data System (ADS)

    Cao, Lu

    2015-12-01

    Due to the complex constraints, numerous uncertain factors and critical real-time demands of path planning for USVs (Unmanned Surface Vehicles), an approach to fast path planning based on the Voronoi diagram and an improved Genetic Algorithm is proposed, making use of the principle of hierarchical path planning. First, the Voronoi diagram is utilized to generate the initial paths; the optimal path is then searched by the improved Genetic Algorithm, which applies multiprocessor parallel computing techniques to speed up the traditional genetic algorithm. Simulation results verify that the planning time is greatly reduced and that path planning based on the Voronoi diagram and the improved Genetic Algorithm is more favorable for real-time operation.

  4. An improved edge detection algorithm for depth map inpainting

    NASA Astrophysics Data System (ADS)

    Chen, Weihai; Yue, Haosong; Wang, Jianhua; Wu, Xingming

    2014-04-01

    Three-dimensional (3D) measurement technology has been widely used in many scientific and engineering areas. The emergence of the Kinect sensor makes 3D measurement much easier. However, the depth map captured by the Kinect sensor has invalid regions, especially at object boundaries, and these missing regions must be filled first. This paper proposes a depth-assisted edge detection algorithm and improves an existing depth map inpainting algorithm using the extracted edges. In the proposed algorithm, both the color image and the raw depth data are used to extract initial edges. The edges are then optimized and utilized to assist depth map inpainting. Comparative experiments demonstrate that the proposed edge detection algorithm can extract object boundaries while inhibiting non-boundary edges caused by textures on object surfaces. The proposed depth inpainting algorithm predicts missing depth values successfully and performs better than the existing algorithm around object boundaries.

  5. An improvement on OCOG algorithm in satellite radar altimeter

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Jiu, Dehang

    The Offset Center of Gravity (OCOG) algorithm is a tracking algorithm based on estimates of the pulse amplitude, the pulse width and the true center of area of the pulse. The algorithm is sufficiently robust to permit the altimeter to keep tracking over many kinds of surfaces. Analysis of its performance shows that it performs satisfactorily in high-SNR environments but fails in low-SNR environments. The cause of this degradation is studied, and it is pointed out that, for both the Brown return model and the sea-ice return model, the performance of the OCOG algorithm in low-SNR environments can be improved by using a noise gate.
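
    The OCOG estimates themselves are simple moments of the squared waveform; the noise gate suggested above would zero samples below a noise threshold before these sums are taken. A sketch:

```python
# Standard OCOG estimators for an altimeter waveform P[k] (power per gate).
import numpy as np

def ocog(P):
    P2 = P.astype(float) ** 2
    amp = np.sqrt((P2 ** 2).sum() / P2.sum())        # OCOG amplitude
    width = P2.sum() ** 2 / (P2 ** 2).sum()          # OCOG width (in gates)
    cog = (np.arange(len(P)) * P2).sum() / P2.sum()  # centre of gravity
    return amp, width, cog
```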

  6. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands them. According to the expanded edges, the algorithm then sets a threshold to divide the crude estimated transmission map into different areas and applies different guided filtering to each area to compute the optimized transmission map. The experimental results demonstrate that the dehazing performance of the proposed algorithm is substantially the same as that of the method based on the dark channel prior and guided filter, while its average computation time is around 40% of that method's, and the detection ability on UAV images in fog and haze weather is improved effectively.
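
    The starting point the new algorithm refines is the dark-channel transmission estimate; a sketch of that baseline follows. The edge-guided area split is not reproduced, and the patch size and omega below are conventional values, not the paper's.

```python
# Dark-channel-prior transmission estimate: t = 1 - omega * dark(I / A).
import numpy as np
from scipy.ndimage import minimum_filter

def transmission(img, A, patch=15, omega=0.95):
    """img: HxWx3 float image in [0,1]; A: length-3 atmospheric light."""
    norm = img / np.asarray(A)[None, None, :]        # normalize by airlight
    dark = minimum_filter(norm.min(axis=2), size=patch)  # patch-min dark channel
    return 1.0 - omega * dark
```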

  7. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features while removing noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
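
    A hedged sketch of such a pipeline with OpenCV is shown below; it uses the built-in bilateral filter and Canny rather than the paper's compensated filter and four-direction templates, with Otsu supplying the dual thresholds.

```python
# Bilateral smoothing + Otsu-derived dual thresholds + Canny.
import cv2

def improved_canny(gray):
    """gray: 8-bit single-channel image."""
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=50, sigmaSpace=50)
    otsu, _ = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(smoothed, 0.5 * otsu, otsu)   # low threshold = half of high
```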

  8. Improved ZigBee Network Routing Algorithm Based on LEACH

    NASA Astrophysics Data System (ADS)

    Zhao, Yawei; Zhang, Guohua; Xia, Zhongwu; Li, Xinhua

    Energy-efficient routing protocol design is one of the key technologies in wireless sensor networks. The paper introduces ZigBee technology, summarizes current routing models in wireless sensor networks, and finds that the traditional LEACH protocol can overload some cluster head nodes. The paper improves the existing LEACH protocol, and simulation comparisons show that the new algorithm outperforms the traditional LEACH routing algorithm. The improved routing algorithm can prolong the network's lifetime and effectively save scarce energy.
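
    For context, the classic LEACH cluster-head election that the improvement modifies uses a rotating threshold T(n); a sketch (with p the desired head fraction and G the set of nodes not yet elected in the current epoch) is:

```python
# LEACH cluster-head election: node becomes a head if a uniform draw < T(n).
import random

def is_cluster_head(p, r, in_G):
    """p: desired head fraction; r: round index; in_G: node not yet a head."""
    if not in_G:
        return False
    T = p / (1 - p * (r % int(1 / p)))   # LEACH threshold T(n)
    return random.random() < T
```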

  9. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GA) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GA's have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves representing the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.

  10. An improved HMM/SVM dynamic hand gesture recognition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Yao, Yuanyuan; Luo, Yuan

    2015-10-01

    In order to improve the recognition rate and stability of dynamic hand gesture recognition, and to address the low accuracy of the classical HMM algorithm in training the B parameter, this paper proposes an improved HMM/SVM dynamic gesture recognition algorithm. In calculating the B parameter of the HMM model, the paper introduces the SVM algorithm, which has strong classification ability. The state output of the SVM is converted into a probability through a sigmoid function, and this probability is treated as the observation (emission) probability of the HMM model. This optimizes the B parameter of the HMM model and improves the recognition rate of the system, while also enhancing the accuracy and real-time performance of the human-computer interaction. Experiments show that the algorithm is strongly robust in complex background and varying illumination environments. The average recognition rate increased from 86.4% to 97.55%.
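
    The SVM-to-probability step is essentially Platt scaling; a sketch using scikit-learn follows. The sigmoid constants A and B_ would normally be fitted to held-out data, and the helper name is our assumption.

```python
# Map an SVM decision value f to a probability with a sigmoid (Platt scaling),
# which can then stand in for an HMM emission probability.
import numpy as np
from sklearn.svm import SVC

def svm_emission_probability(clf: SVC, x, A=-1.0, B_=0.0):
    f = clf.decision_function(x.reshape(1, -1))[0]   # signed SVM score
    return 1.0 / (1.0 + np.exp(A * f + B_))          # sigmoid-mapped probability
```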

  11. Improved motion information-based infrared dim target tracking algorithms

    NASA Astrophysics Data System (ADS)

    Lei, Liu; Zhijian, Huang

    2014-11-01

    Accurate and fast tracking of infrared (IR) dim targets has very important meaning for infrared precise guidance, early warning, video surveillance, etc. However, under complex backgrounds such as clutter, varying illumination, and occlusion, traditional tracking methods often converge to a local maximum and lose the real infrared target. To cope with these problems, three improved tracking algorithms based on motion information are proposed in this paper, namely an improved mean shift algorithm, an improved optical flow method and an improved particle filter method. The basic principles and implementation procedures of these modified algorithms for target tracking are described. Using these algorithms, experiments on real-life IR and color images are performed. The whole implementation process and the results are analyzed, and the tracking algorithms are evaluated both subjectively and objectively. The results prove that the proposed methods have satisfying tracking effectiveness and robustness, along with high tracking efficiency suitable for real-time tracking.

  12. An improved harmony search algorithm with dynamically varying bandwidth

    NASA Astrophysics Data System (ADS)

    Kalivarapu, J.; Jain, S.; Bag, S.

    2016-07-01

    The present work demonstrates a new variant of the harmony search (HS) algorithm in which the bandwidth (BW) is one of the deciding factors for the time complexity and the performance of the algorithm. The BW needs to have both explorative and exploitative characteristics. The idea is to use a large BW to search the full domain and to adjust the BW dynamically closer to the optimal solution. After trying a series of approaches, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.

  13. Automatic coronary lumen segmentation with partial volume modeling improves lesions' hemodynamic significance assessment

    NASA Astrophysics Data System (ADS)

    Freiman, M.; Lamash, Y.; Gilboa, G.; Nickisch, H.; Prevrhal, S.; Schmitt, H.; Vembar, M.; Goshen, L.

    2016-03-01

    The determination of hemodynamic significance of coronary artery lesions from cardiac computed tomography angiography (CCTA) based on blood flow simulations has the potential to improve CCTA's specificity, thus resulting in improved clinical decision making. Accurate coronary lumen segmentation required for flow simulation is challenging due to several factors. Specifically, the partial-volume effect (PVE) in small-diameter lumina may result in overestimation of the lumen diameter that can lead to an erroneous hemodynamic significance assessment. In this work, we present a coronary artery segmentation algorithm tailored specifically for flow simulations by accounting for the PVE. Our algorithm detects lumen regions that may be subject to the PVE by analyzing the intensity values along the coronary centerline and integrates this information into a machine-learning based graph min-cut segmentation framework to obtain accurate coronary lumen segmentations. We demonstrate the improvement in hemodynamic significance assessment achieved by accounting for the PVE in the automatic segmentation of 91 coronary artery lesions from 85 patients. We compare hemodynamic significance assessments by means of fractional flow reserve (FFR) resulting from simulations on 3D models generated by our segmentation algorithm with and without accounting for the PVE. By accounting for the PVE we improved the area under the ROC curve for detecting hemodynamically significant CAD by 29% (N=91, 0.85 vs. 0.66, p<0.05, Delong's test) with invasive FFR threshold of 0.8 as the reference standard. Our algorithm has the potential to facilitate non-invasive hemodynamic significance assessment of coronary lesions.

  14. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Existing ant colony optimization (ACO) for software testing case generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software testing case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391

  15. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Existing ant colony optimization (ACO) for software testing case generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software testing case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391

  16. An improved algorithm for pedestrian detection

    NASA Astrophysics Data System (ADS)

    Yousef, Amr; Duraisamy, Prakash; Karim, Mohammad

    2015-03-01

    In this paper we present a technique to detect pedestrians. Histograms of gradients (HOG) and Haar wavelets, with the aid of support vector machine (SVM) and AdaBoost classifiers, show good identification performance on different object classification tasks, including pedestrians. We propose a new shape descriptor derived from the intra-relationship between gradient orientations, in a way similar to the HOG. The proposed descriptor consists of two 2-D grids of orientation similarities measured at different offsets. The gradient magnitudes and phases derived from a sliding window with different scales and sizes are used to construct the two 2-D symmetric grids. The first grid measures the co-occurrence of the phases, while the other measures the corresponding percentage of gradient magnitudes for the measured orientation similarity. Since the resultant matrices are symmetric, the feature vector is formed by concatenating the upper-diagonal grid coefficients collected in raster order. Classification is done using an SVM classifier with a radial basis kernel. Experimental results show improved performance compared to current state-of-the-art techniques.

  17. Improved Inversion Algorithms for Near Surface Characterization

    NASA Astrophysics Data System (ADS)

    Astaneh, Ali Vaziri; Guddati, Murthy N.

    2016-05-01

    Near-surface geophysical imaging is often performed by generating surface waves, and estimating the subsurface properties through inversion, i.e. iteratively matching experimentally observed dispersion curves with predicted curves from a layered half-space model of the subsurface. Key to the effectiveness of inversion is the efficiency and accuracy of computing the dispersion curves and their derivatives. This paper presents improved methodologies for both dispersion curve and derivative computation. First, it is shown that the dispersion curves can be computed more efficiently by combining an unconventional complex-length finite element method (CFEM) to model the finite depth layers, with perfectly matched discrete layers (PMDL) to model the unbounded half-space. Second, based on analytical derivatives for theoretical dispersion curves, an approximate derivative is derived for so-called effective dispersion curve for realistic geophysical surface response data. The new derivative computation has a smoothing effect on the computation of derivatives, in comparison with traditional finite difference (FD) approach, and results in faster convergence. In addition, while the computational cost of FD differentiation is proportional to the number of model parameters, the new differentiation formula has a computational cost that is almost independent of the number of model parameters. At the end, as confirmed by synthetic and real-life imaging examples, the combination of CFEM+PMDL for dispersion calculation and the new differentiation formula results in more accurate estimates of the subsurface characteristics than the traditional methods, at a small fraction of computational effort.

  18. An improved robust ADMM algorithm for quantum state tomography

    NASA Astrophysics Data System (ADS)

    Li, Kezhi; Zhang, Hui; Kuang, Sen; Meng, Fangfang; Cong, Shuang

    2016-06-01

    In this paper, an improved adaptive-weights alternating direction method of multipliers (ADMM) algorithm is developed to implement the optimization scheme for recovering quantum states that are nearly pure. The proposed approach is superior to many existing methods because it exploits the low-rank property of density matrices and can also deal with unexpected sparse outliers. Numerical experiments verify our statements by comparing the results to three different optimization algorithms, using both adaptive and fixed weights in the algorithm, in cases with and without external noise. The results indicate that the improved algorithm performs better in both estimation accuracy and robustness to external noise. Further simulation results show that the successful recovery rate increases as more qubits are estimated, which satisfies compressive sensing theory and makes the proposed approach more promising.

  19. Visualizing and improving the robustness of phase retrieval algorithms

    SciTech Connect

    Tripathi, Ashish; Leyffer, Sven; Munson, Todd; Wild, Stefan M.

    2015-06-01

    Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
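
    For orientation, a minimal HIO iteration for a real, nonnegative object with known support looks as follows; the reduced-dimensionality visualization and the paper's generalizations of HIO are not reproduced.

```python
# Fienup's hybrid input-output (HIO) iteration, minimal real-valued form.
import numpy as np

def hio(magnitudes, support, iters=200, beta=0.9, rng=None):
    """magnitudes: measured Fourier moduli; support: boolean object mask."""
    rng = rng or np.random.default_rng(0)
    x = rng.random(magnitudes.shape) * support
    for _ in range(iters):
        F = np.fft.fft2(x)
        F = magnitudes * np.exp(1j * np.angle(F))   # Fourier-magnitude constraint
        xp = np.real(np.fft.ifft2(F))
        bad = (~support) | (xp < 0)                 # violated object constraints
        x = np.where(bad, x - beta * xp, xp)        # HIO feedback update
    return x
```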

  20. Economic load dispatch using improved gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Wang, Jia-rong; Guo, Feng

    2016-03-01

    This paper presents an improved gravitational search algorithm (IGSA) to solve the economic load dispatch (ELD) problem. In order to avoid the local optimum phenomenon, mutation processing is applied to the GSA. The IGSA is applied to solve economic load dispatch problems with valve point effects, on a system with 13 generators and a load demand of 2520 MW. Calculation results show that the algorithm in this paper can deal with ELD problems with high stability.

  1. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of a fuzzy image, addressing limited spatial resolution, excessive noise, and low image quality. First, we introduce the image degradation model to show that the essence of the super-resolution reconstruction process is a mathematically ill-posed inverse problem. Second, we analyze the causes of blurring in the optical imaging process (light diffraction and small-angle scattering are the main ones) and propose a point-spread-function estimation method together with an improved projection onto convex sets (POCS) algorithm; by analyzing the changes between the time domain and the frequency domain during reconstruction, we show that the improved POCS algorithm, based on prior knowledge, restores and approaches the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images and then use these images to reconstruct three-dimensional slice images. Comparing the original method with the super-resolution algorithm makes it obvious that the improved POCS algorithm can restrain noise and enhance image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by an improved POCS algorithm proves it to be an effective method with important significance and broad application prospects, for example in CT medical image processing and SRCT analysis of microstructure evolution in ceramic sintering.

  2. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying the influence is very complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a framework based on global sensitivity analysis to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification because it is capable of handling small samples and is more computationally efficient than other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, convergence speed and success rate, are used to measure the performance of SCE-UA. Results show the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.

  3. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  4. Improved Ant Colony Clustering Algorithm and Its Performance Study

    PubMed Central

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  5. An Improved Physarum polycephalum Algorithm for the Shortest Path Problem

    PubMed Central

    Wang, Qing; Adamatzky, Andrew; Chan, Felix T. S.; Mahadevan, Sankaran

    2014-01-01

    Shortest path is among the classical problems of computer science. The problems are solved by hundreds of algorithms, silicon computing architectures and novel, unconventional computing devices and substrates. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms have been designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and the total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and the Dijkstra algorithm. PMID:24982960

  6. An improved Physarum polycephalum algorithm for the shortest path problem.

    PubMed

    Zhang, Xiaoge; Wang, Qing; Adamatzky, Andrew; Chan, Felix T S; Mahadevan, Sankaran; Deng, Yong

    2014-01-01

    Shortest path is among the classical problems of computer science. The problems are solved by hundreds of algorithms, silicon computing architectures and novel, unconventional computing devices and substrates. The acellular slime mould P. polycephalum is famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms have been designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches well with existing Physarum-inspired approaches yet outperforms them in the number of iterations executed and the total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and the Dijkstra algorithm. PMID:24982960

  7. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    PubMed Central

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-01

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed, combining a fast MUSIC-like algorithm, the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix, and thus the DOA estimation capacity degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, just like the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially-white-noise and spatially-colored-noise environments. PMID:23296331
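
    The Toeplitz approximation step can be sketched as diagonal averaging: each diagonal of the estimated matrix is replaced by its mean, restoring the structure the ideal (infinite-snapshot) matrix would have. The helper name is ours.

```python
# Toeplitz approximation by averaging along each diagonal.
import numpy as np

def toeplitzify(R):
    n = R.shape[0]
    T = np.empty_like(R)
    for d in range(-(n - 1), n):
        m = np.mean(np.diagonal(R, offset=d))   # average the d-th diagonal
        idx = np.arange(max(0, -d), min(n, n - d))
        T[idx, idx + d] = m
    return T
```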

  8. Significant improvements in long trace profiler measurement performance

    SciTech Connect

    Takacs, P.Z.; Bresloff, C.J.

    1996-07-01

    Modifications made to the Long Trace Profiler (LTP II) system at the Advanced Photon Source at Argonne National Laboratory have significantly improved the accuracy and repeatability of the instrument. The use of a Dove prism in the reference beam path corrects for phasing problems between mechanical errors and thermally-induced system errors. A single reference correction now completely removes both error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within ±0.1 °C over a 24 hour period and has significantly improved the stability and repeatability of the system. We illustrate the performance improvements with several sets of measurements. The improved environmental control has reduced thermal drift error to about 0.75 microradian RMS over a 7.5 hour time period. Measurements made in the forward scan direction and the reverse scan direction differ by only about 0.5 microradian RMS over a 500 mm trace length. We are now able to put a 1-sigma error bar of 0.3 microradian on an average of 10 slope profile measurements over a 500 mm trace length, and a 0.2 microradian error bar on an average of 10 measurements over a 200 mm trace length. The corresponding 1-sigma height error bar for this measurement is 1.1 nm.

  9. Significant improvements in Long Trace Profiler measurement performance

    SciTech Connect

    Takacs, P.Z.; Bresloff, C.J.

    1996-12-31

    Modifications made to the Long Trace Profiler (LTP II) system at the Advanced Photon Source at Argonne National Laboratory have significantly improved the accuracy and repeatability of the instrument. The use of a Dove prism in the reference beam path corrects for phasing problems between mechanical errors and thermally-induced system errors. A single reference correction now completely removes both error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within ±0.1 °C over a 24 hour period and has significantly improved the stability and repeatability of the system. The authors illustrate the performance improvements with several sets of measurements. The improved environmental control has reduced thermal drift error to about 0.75 microradian RMS over a 7.5 hour time period. Measurements made in the forward scan direction and the reverse scan direction differ by only about 0.5 microradian RMS over a 500 mm trace length. They are now able to put a 1-sigma error bar of 0.3 microradian on an average of 10 slope profile measurements over a 500 mm long trace length, and they are now able to put a 0.2 microradian error bar on an average of 10 measurements over a 200 mm trace length. The corresponding 1-sigma height error bar for this measurement is 1.1 nm.

  10. Tuning target selection algorithms to improve galaxy redshift estimates

    NASA Astrophysics Data System (ADS)

    Hoyle, Ben; Paech, Kerstin; Rau, Markus Michael; Seitz, Stella; Weller, Jochen

    2016-06-01

    We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare seven different ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the ML methods to estimate accurate redshifts for, using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30 per cent of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time than the SDSS or random target selection algorithms. The reduction in the required follow-up resources could allow for a change to the follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data.

  11. Recent Algorithmic and Computational Efficiency Improvements in the NIMROD Code

    NASA Astrophysics Data System (ADS)

    Plimpton, S. J.; Sovinec, C. R.; Gianakon, T. A.; Parker, S. E.

    1999-11-01

    Extreme anisotropy and temporal stiffness impose severe challenges to simulating low frequency, nonlinear behavior in magnetized fusion plasmas. To address these challenges in computations of realistic experiment configurations, NIMROD (Glasser et al., Plasma Phys. Control. Fusion 41 (1999) A747) uses a time-split, semi-implicit advance of the two-fluid equations for magnetized plasmas with a finite element/Fourier series spatial representation. The stiffness and anisotropy lead to ill-conditioned linear systems of equations, and they emphasize any truncation errors that may couple different modes of the continuous system. Recent work significantly improves NIMROD's performance in these areas. Implementing a parallel global preconditioning scheme in structured-grid regions permits scaling to large problems and large time steps, which are critical for achieving realistic S-values. In addition, coupling to the AZTEC parallel linear solver package now permits efficient computation with regions of unstructured grid. Changes in the time-splitting scheme improve numerical behavior in simulations with strong flow, and quadratic basis elements are being explored for accuracy. Different numerical forms of anisotropic thermal conduction, critical for slow island evolution, are compared. Algorithms for including gyrokinetic ions in the finite element computations are discussed.

  12. A landmark matching algorithm using the improved generalised Hough transform

    NASA Astrophysics Data System (ADS)

    Chen, Binbin; Deng, Xingpu

    2015-10-01

    The paper addresses the issue of landmark matching for images from Geosynchronous Earth Orbit (GEO) satellites. In general, satellite imagery is matched against a predefined base image. When the satellite imagery is rotated, the accuracy of many landmark matching algorithms deteriorates. To overcome this problem, the generalised Hough transform (GHT) is employed for landmark matching. First, an improved GHT algorithm is proposed to enhance rotational invariance. Secondly, a global coastline dataset is processed to generate both the test image, which serves as the satellite image, and the base image. Then the test image is matched against the base image using the proposed algorithm. The matching results show that the proposed algorithm is rotation invariant and works well in landmark matching.

  13. Improved zonal wavefront reconstruction algorithm for Hartmann type test with arbitrary grid patterns

    NASA Astrophysics Data System (ADS)

    Li, Mengyang; Li, Dahai; Zhang, Chen; E, Kewei; Hong, Zhihan; Li, Chengxu

    2015-08-01

    Zonal wavefront reconstruction using the well-known Southwell algorithm with rectangular grid patterns has been considered in the literature. However, when the grid patterns are nonrectangular, modal wavefront reconstruction has been used extensively instead. We propose an improved zonal wavefront reconstruction algorithm for Hartmann-type tests with arbitrary grid patterns. We develop the mathematical expressions to show that the wavefront over arbitrary grid patterns, such as misaligned, partly obscured, and non-square mesh grids, can be estimated well. Both the iterative solution and the least-squares solution for the proposed algorithm are described and compared. Numerical calculation shows that zonal wavefront reconstruction over a nonrectangular profile with the proposed algorithm results in a significant improvement in comparison with the Southwell algorithm.
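
    A sketch of the idea under simple assumptions: Southwell-type slope-difference equations are generated only between valid neighboring sample points, so misaligned or partly obscured apertures fall out naturally, and the resulting system is solved in the least-squares sense. The grid spacing `h` and the piston-removal convention are assumptions, and this is not the authors' exact formulation.

    ```python
    # Sketch of a Southwell-type zonal reconstructor restricted to valid sample
    # points, so misaligned, partly obscured or non-square apertures are handled.
    # Each equation relates the wavefront difference of two valid neighbors to
    # the average of their measured slopes; the system is solved in least squares.
    import numpy as np

    def zonal_reconstruct(sx, sy, valid, h=1.0):
        """sx, sy: measured x/y slope grids; valid: boolean mask of usable points."""
        m = valid.sum()
        idx = -np.ones(valid.shape, dtype=int)
        idx[valid] = np.arange(m)                     # number the valid points
        rows, rhs = [], []
        ny, nx = valid.shape
        for y in range(ny):
            for x in range(nx):
                if not valid[y, x]:
                    continue
                if x + 1 < nx and valid[y, x + 1]:    # x-direction equation
                    r = np.zeros(m)
                    r[idx[y, x + 1]], r[idx[y, x]] = 1.0, -1.0
                    rows.append(r)
                    rhs.append(h * 0.5 * (sx[y, x] + sx[y, x + 1]))
                if y + 1 < ny and valid[y + 1, x]:    # y-direction equation
                    r = np.zeros(m)
                    r[idx[y + 1, x]], r[idx[y, x]] = 1.0, -1.0
                    rows.append(r)
                    rhs.append(h * 0.5 * (sy[y, x] + sy[y + 1, x]))
        w, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        out = np.full(valid.shape, np.nan)
        out[valid] = w - w.mean()                     # remove the piston ambiguity
        return out
    ```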

  14. An improved back projection algorithm of ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Xiaozhen, Chen; Mingxu, Su; Xiaoshu, Cai

    2014-04-01

    The binary logic back projection algorithm is improved in this work for the development of a fast ultrasound tomography system with better image reconstruction. The new algorithm is characterized by an extra logical value '2' and dual-threshold processing of the collected raw data. To compare with the original algorithm, a numerical simulation was first conducted and verified against COMSOL simulations, and then an ultrasonic tomography system was established to perform experiments with one, two and three cylindrical objects. The object images are reconstructed through the inversion of the signal matrix acquired by the transducer array after preconditioning, and the corresponding spatial imaging errors indicate that the improved back projection method achieves a better inversion effect.

  15. An improved back projection algorithm of ultrasound tomography

    SciTech Connect

    Xiaozhen, Chen; Mingxu, Su; Xiaoshu, Cai

    2014-04-11

    The binary logic back projection algorithm is improved in this work for the development of a fast ultrasound tomography system with better image reconstruction. The new algorithm is characterized by an extra logical value '2' and dual-threshold processing of the collected raw data. To compare with the original algorithm, a numerical simulation was first conducted and verified against COMSOL simulations, and then an ultrasonic tomography system was established to perform experiments with one, two and three cylindrical objects. The object images are reconstructed through the inversion of the signal matrix acquired by the transducer array after preconditioning, and the corresponding spatial imaging errors indicate that the improved back projection method achieves a better inversion effect.
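
    A minimal sketch of the dual-threshold, three-valued classification and the back-projection accumulation described above, assuming the ray-to-pixel geometry of the transducer array is known. The threshold values and the averaging by hit count are illustrative assumptions, not the authors' exact procedure.

    ```python
    # Sketch of dual-threshold "binary logic" back projection: each ray
    # measurement is classified into a logical value {0, 1, 2} with two
    # thresholds, and the values are accumulated over the pixels each ray
    # crosses. Thresholds and the ray->pixel map are illustrative assumptions.
    import numpy as np

    def classify(raw, t_low, t_high):
        """Map raw attenuation readings to logical values 0, 1 or 2."""
        out = np.zeros_like(raw, dtype=int)
        out[raw >= t_low] = 1                 # weakly attenuated: possible object
        out[raw >= t_high] = 2                # strongly attenuated: object
        return out

    def back_project(levels, ray_pixels, shape):
        """levels[i] is the logical value of ray i; ray_pixels[i] lists the
        (y, x) pixels crossed by ray i (from the known transducer geometry)."""
        img = np.zeros(shape)
        hits = np.zeros(shape)
        for lv, pixels in zip(levels, ray_pixels):
            for y, x in pixels:
                img[y, x] += lv
                hits[y, x] += 1
        return np.divide(img, hits, out=np.zeros(shape), where=hits > 0)
    ```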

  16. Sonocrystallization yields monoclinic paracetamol with significantly improved compaction behavior.

    PubMed

    Bučar, Dejan-Krešimir; Elliott, James A; Eddleston, Mark D; Cockcroft, Jeremy K; Jones, William

    2015-01-01

    Ultrasound-assisted crystallization (sonocrystallization) was used to prepare a mixture of nano- and micrometer-sized crystals of the monoclinic form of paracetamol, a widely used analgesic known for its particularly problematic mechanical behavior under compression (i.e., poor tabletability). The nano- and micrometer-sized crystals yielded a powder which exhibits elastic moduli and bulk cohesions that are significantly higher than those observed in samples consisting of macroscopic crystals, thus leading to enhanced tabletability without the use of excipients, particle coating, salt, or cocrystal formation. Experimental compaction studies and finite element analysis were utilized to rationalize the significantly improved compaction behavior of the monoclinic form of paracetamol. PMID:25370777

  17. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2007-01-01

    Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and water vapor amount are nearly linear in their relationship. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths available from the Cloud and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.

  18. Improved algorithm for quantum separability and entanglement detection

    SciTech Connect

    Ioannou, L.M.; Ekert, A.K.; Travaglione, B.C.; Cheung, D.

    2004-12-01

    Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. It has recently been shown that this problem is NP-hard, suggesting that an efficient, general solution does not exist. There is a highly inefficient 'basic algorithm' for solving the quantum separability problem which follows from the definition of a separable state. By exploiting specific properties of the set of separable states, we introduce a classical algorithm that solves the problem significantly faster than the 'basic algorithm', allowing a feasible separability test where none previously existed, e.g., in 3x3-dimensional systems. Our algorithm also provides a unique tool in the experimental detection of entanglement.

  19. An improved algorithm of a priori based on geostatistics

    NASA Astrophysics Data System (ADS)

    Chen, Jiangping; Wang, Rong; Tang, Xuehua

    2008-12-01

    In data mining, one of the classical algorithms is Apriori, which was developed for association rule mining in large transaction databases; it cannot be directly used for spatial association rule mining. The main difference between data mining in relational databases and in spatial databases is that attributes of the neighbors of some object of interest may have an influence on the object and therefore have to be considered as well. The explicit location and extension of spatial objects define implicit relations of spatial neighborhood (such as topological, distance and direction relations) which are used by spatial data mining algorithms. Therefore, new techniques are required for effective and efficient spatial data mining. Geostatistics comprises statistical methods used to describe spatial relationships among sample data and to apply this analysis to the prediction of spatial and temporal phenomena. These methods are used to explain spatial patterns and to interpolate values at unsampled locations. This paper puts forward an improved Apriori algorithm for mining association rules with geostatistics. First, the spatial autocorrelation of the attributes with location was estimated with geostatistical methods such as kriging and the Spatial Autoregressive Model (SAR). Then a spatial autocorrelation model of the attributes was built. Next, an improved Apriori algorithm combined with the spatial autocorrelation model was proposed to mine the spatial association rules. Finally, an experiment with the new algorithm was carried out on hayfever incidence and climate factors in the UK. The result shows that the output rules are consistent with the references.

  20. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

    Medical image fusion is of great value for medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved, a new algorithm for medical image fusion is presented, and the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of the low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved fusion algorithms based on the wavelet transform to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that this algorithm can effectively retain the details of the original images and enhance their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform. PMID:19813594
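
    A sketch of the coefficient-selection rule described above, using PyWavelets and SciPy (assumed available): each high-frequency coefficient is taken from whichever source image has the larger regional edge intensity, measured here as local energy in a small window. The window size and the plain averaging of the low-frequency band are simplifying assumptions.

    ```python
    # Sketch of wavelet-domain fusion where each high-frequency coefficient is
    # taken from the source image with the larger regional edge intensity
    # (local energy in a small window); low-frequency bands are averaged.
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def fuse(img1, img2, wavelet="db2", level=2, win=3):
        c1 = pywt.wavedec2(img1, wavelet, level=level)
        c2 = pywt.wavedec2(img2, wavelet, level=level)
        fused = [0.5 * (c1[0] + c2[0])]              # low-frequency band: average
        for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
            bands = []
            for b1, b2 in zip((h1, v1, d1), (h2, v2, d2)):
                e1 = uniform_filter(b1 * b1, win)    # regional edge intensity
                e2 = uniform_filter(b2 * b2, win)
                bands.append(np.where(e1 >= e2, b1, b2))
            fused.append(tuple(bands))
        return pywt.waverec2(fused, wavelet)
    ```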

  1. An Improved Wind Speed Retrieval Algorithm For The CYGNSS Mission

    NASA Astrophysics Data System (ADS)

    Ruf, C. S.; Clarizia, M. P.

    2015-12-01

    The NASA spaceborne Cyclone Global Navigation Satellite System (CYGNSS) mission is a constellation of 8 microsatellites focused on tropical cyclone (TC) inner core process studies. CYGNSS will be launched in October 2016, and will use GPS-Reflectometry (GPS-R) to measure ocean surface wind speed in all precipitating conditions, and with sufficient frequency to resolve genesis and rapid intensification. Here we present a modified and improved version of the current baseline Level 2 (L2) wind speed retrieval algorithm designed for CYGNSS. An overview of the current approach is first presented, which makes use of two different observables computed from 1-second Level 1b (L1b) delay-Doppler Maps (DDMs) of radar cross section. The first observable, the Delay-Doppler Map Average (DDMA), is the averaged radar cross section over a delay-Doppler window around the DDM peak (i.e. the specular reflection point coordinate in delay and Doppler). The second, the Leading Edge Slope (LES), is the leading edge of the Integrated Delay Waveform (IDW), obtained by integrating the DDM along the Doppler dimension. The observables are calculated over a limited range of time delays and Doppler frequencies to comply with the baseline spatial resolution requirement for the retrieved winds, which in the case of CYGNSS is 25 km. In the current approach, the relationship between the observable value and the surface winds is described by an empirical Geophysical Model Function (GMF) that is characterized by a very high slope in the high wind regime, for both DDMA and LES observables, causing large errors in the retrieval at high winds. A simple mathematical modification of these observables is proposed, which linearizes the relationship between ocean surface roughness and the observables. This significantly reduces the non-linearity present in the GMF that relates the observables to the wind speed, and reduces the root-mean-square error between true and retrieved winds, particularly in the high wind

  2. Improvements in antenna coupling path algorithms for aircraft EMC analysis

    NASA Astrophysics Data System (ADS)

    Bogusz, Michael; Kibina, Stanley J.

    The algorithms to calculate and display the path of maximum electromagnetic interference coupling along the perfectly conducting surface of a frustum cone model of an aircraft nose are developed and revised for the Aircraft Inter-Antenna Propagation with Graphics (AAPG) electromagnetic compatibility analysis code. Analysis of the coupling problem geometry on the frustum cone model and representative numerical test cases reveals how the revised algorithms are more accurate than their predecessors. These improvements in accuracy and their impact on realistic aircraft electromagnetic compatibility problems are outlined.

  3. Improved MCA-TV algorithm for interference hyperspectral image decomposition

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Zhao, Junsuo; Cailing, Wang

    2015-12-01

    The technology of interference hyperspectral imaging, which can capture the spectral and spatial information of observed targets, is a very powerful technology in the field of remote sensing. Due to the special imaging principle, there are many position-fixed interference fringes in each frame of interference hyperspectral image (IHI) data. This characteristic affects the results of compressed sensing theory and traditional compression algorithms applied to IHI data. According to this characteristic of IHI data, morphological component analysis (MCA) is adopted to separate the interference fringe layers from the background layers of LSMIS (Large Spatially Modulated Interference Spectral Image) data, and an improved algorithm combining MCA and Total Variation (TV) is proposed in this paper. An updated scheme for the threshold in traditional MCA is proposed, and the traditional TV algorithm is also improved according to the unidirectional characteristic of the interference fringes in IHI data. The experimental results prove that the proposed improved MCA-TV (IMT) algorithm obtains better results than traditional MCA, and also meets the convergence conditions much faster than traditional MCA.

  4. Masseter segmentation using an improved watershed algorithm with unsupervised classification.

    PubMed

    Ng, H P; Ong, S H; Foong, K W C; Goh, P S; Nowinski, W L

    2008-02-01

    The watershed algorithm always produces a complete division of the image. However, it is susceptible to over-segmentation and sensitive to false edges. In medical images this leads to unfavorable representations of the anatomy. We address these drawbacks by introducing automated thresholding and post-segmentation merging. The automated thresholding step is based on the histogram of the gradient magnitude map, while post-segmentation merging is based on a criterion which measures the similarity in intensity values between two neighboring partitions. Our improved watershed algorithm is able to merge more than 90% of the initial partitions, which indicates that a large amount of over-segmentation has been reduced. To further improve the segmentation results, we make use of K-means clustering to provide an initial coarse segmentation of the highly textured image before the improved watershed algorithm is applied to it. When applied to the segmentation of the masseter from 60 magnetic resonance images of 10 subjects, the proposed algorithm achieved an overlap index (kappa) of 90.6%, and was able to merge 98% of the initial partitions on average. The segmentation results are comparable to those obtained using the gradient vector flow snake. PMID:17950265
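
    A sketch of the two improvements described above using scikit-image and SciPy (assumed available): a threshold taken from the gradient-magnitude histogram seeds the watershed, and neighboring regions with similar mean intensity are merged afterwards. The percentile and the merge tolerance are illustrative assumptions, and the K-means pre-segmentation step is omitted.

    ```python
    # Sketch: automated gradient-histogram threshold to seed the watershed,
    # followed by a merge of neighboring regions with similar mean intensity.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def improved_watershed(image, grad_percentile=30, merge_tol=0.05):
        grad = sobel(image)
        thresh = np.percentile(grad, grad_percentile)    # histogram-based threshold
        markers, _ = ndi.label(grad < thresh)            # seeds in low-gradient areas
        labels = watershed(grad, markers)
        means = ndi.mean(image, labels, index=np.arange(1, labels.max() + 1))
        parent = np.arange(labels.max() + 1)             # union-find over region ids
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        # Horizontally and vertically adjacent label pairs define region adjacency.
        h = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
        v = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
        for a, b in np.unique(np.vstack([h, v]), axis=0):
            if a != b and abs(means[a - 1] - means[b - 1]) < merge_tol:
                parent[find(a)] = find(b)                # merge similar neighbors
        return np.vectorize(find)(labels)
    ```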

  5. Image enhancement algorithm based on improved lateral inhibition network

    NASA Astrophysics Data System (ADS)

    Yun, Haijiao; Wu, Zhiyong; Wang, Guanjun; Tong, Gang; Yang, Hua

    2016-05-01

    There is often substantial noise and blurred details in the images captured by cameras. To solve this problem, we propose a novel image enhancement algorithm combined with an improved lateral inhibition network. Firstly, we built a mathematical model of a lateral inhibition network in conjunction with biological visual perception; this model helped to realize enhanced contrast and improved edge definition in images. Secondly, we proposed that the adaptive lateral inhibition coefficient adhere to an exponential distribution thus making the model more flexible and more universal. Finally, we added median filtering and a compensation measure factor to build the framework with high pass filtering functionality thus eliminating image noise and improving edge contrast, addressing problems with blurred image edges. Our experimental results show that our algorithm is able to eliminate noise and the blurring phenomena, and enhance the details of visible and infrared images.
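
    A minimal sketch of an enhancement step built from a lateral-inhibition (center-surround) kernel whose inhibition decays exponentially with distance, preceded by the median filtering mentioned above. The kernel size and the `alpha` and `sigma` constants are illustrative assumptions, not the authors' calibrated model.

    ```python
    # Sketch of lateral-inhibition contrast enhancement: median-filter the image,
    # then convolve with an excitatory-center / inhibitory-surround kernel whose
    # surround weights decay exponentially with distance.
    import numpy as np
    from scipy.ndimage import median_filter
    from scipy.signal import convolve2d

    def enhance(image, alpha=0.8, sigma=1.5):
        image = median_filter(image.astype(float), size=3)    # suppress noise first
        yy, xx = np.mgrid[-2:3, -2:3]
        surround = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))  # inhibition decays
        surround[2, 2] = 0.0                                  # with distance
        surround /= surround.sum()
        kernel = -alpha * surround                            # inhibitory surround
        kernel[2, 2] = 1.0 + alpha                            # excitatory center
        out = convolve2d(image, kernel, mode="same", boundary="symm")
        return np.clip(out, 0, 255)
    ```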

  6. Improving the trust algorithm of information in semantic web

    NASA Astrophysics Data System (ADS)

    Wan, Zong-bao; Min, Jiang

    2012-01-01

    With the rapid development of computer networks, and especially with the introduction of the Semantic Web, the problem of trust computation in networks has become an important part of current computer systems theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, we define semantic trust as consisting of two parts: the content trust of information and the trust between nodes. By calculating the content trust of information and the trust between nodes, we obtain the final credibility value of information in the Semantic Web. We also improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can evaluate the trust of information more accurately.

  7. Improving the trust algorithm of information in semantic web

    NASA Astrophysics Data System (ADS)

    Wan, Zong-Bao; Min, Jiang

    2011-12-01

    With the rapid development of computer networks, and especially with the introduction of the Semantic Web, the problem of trust computation in networks has become an important part of current computer systems theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, we define semantic trust as consisting of two parts: the content trust of information and the trust between nodes. By calculating the content trust of information and the trust between nodes, we obtain the final credibility value of information in the Semantic Web. We also improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can evaluate the trust of information more accurately.

  8. An improved optical identity authentication system with significant output images

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Liu, Ming-tang; Yao, Shu-xia; Xin, Yan-hui

    2012-06-01

    An improved method for an optical identity authentication system with significant output images is proposed. In this method, a predefined image is digitally encoded into two phase-masks related to a fixed phase-mask, and this fixed phase-mask acts as a lock to the system. When the two phase-masks, serving as the key, are presented to the system, the predefined image is generated at the output. In addition to simple verification, our method is capable of identifying the type of input phase-mask, and the duties of identity verification and recognition are separated and assigned, respectively, to the amplitude and phase of the output image. Numerical simulation results show that our proposed method is feasible and that an output image with better image quality can be obtained.

  9. Warfarin improves neuropathy in monoclonal gammopathy of undetermined significance.

    PubMed

    Henry Gomez, Teny; Holkova, Beata; Noreika, Danielle; Del Fabbro, Egidio

    2016-01-01

    We report a case of a 60-year-old man who was referred to a palliative care clinic with monoclonal gammopathy of undetermined significance (MGUS)-associated neuropathy, responding to a therapeutic trial of warfarin. Electromyography showed distal symmetric sensory axonal neuropathy. The patient reported having had improvement of his neuropathic symptoms while taking warfarin postoperatively for thromboprophylaxis 1 year prior, and recurrence of his symptoms after the warfarin was discontinued. The patient was rechallenged with a trial of warfarin, targeting an international normalised ratio of 1.5-2.0. His pain scores decreased from 5/10 to 3/10 at 1 month and symptom improvement was maintained through 24 months of follow-up. Warfarin had a remarkable impact on our patient's symptoms and quality of life. The mechanisms mediating the symptomatic benefit with warfarin are unclear; however, a placebo effect is unlikely. Further studies may help guide the use of warfarin for MGUS-associated neuropathy. PMID:27317760

  10. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625

  11. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.

    PubMed

    Amudha, P; Karthik, S; Sivakumari, S

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625

  12. Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm

    NASA Technical Reports Server (NTRS)

    Riggs, George; Hall, Dorothy K.

    2012-01-01

    The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).

  13. Improved algorithm for solving nonlinear parabolized stability equations

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Zhang, Cun-bo; Liu, Jian-xin; Luo, Ji-sheng

    2016-08-01

    Due to its high computational efficiency and ability to consider nonparallel and nonlinear effects, the nonlinear parabolized stability equations (NPSE) approach has been widely used to study stability and transition mechanisms. However, it often diverges in hypersonic boundary layers when the amplitude of the disturbance reaches a certain level. In this study, an improved algorithm for solving the NPSE is developed. In this algorithm, the mean flow distortion is included in the linear operator instead of in the nonlinear forcing terms of the NPSE. An under-relaxation factor for computing the nonlinear terms is introduced during the iteration process to guarantee the robustness of the algorithm. Two case studies, the nonlinear development of stationary crossflow vortices and the fundamental resonance of the second mode disturbance in hypersonic boundary layers, are presented to validate the proposed algorithm for the NPSE. Results from direct numerical simulation (DNS) are regarded as the baseline for comparison. Good agreement can be found between the proposed algorithm and DNS, which indicates the great potential of the proposed method for studying crossflow and streamwise instabilities in hypersonic boundary layers. Project supported by the National Natural Science Foundation of China (Grant Nos. 11332007 and 11402167).
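
    The under-relaxation device is generic and easy to state: instead of using the freshly computed nonlinear terms directly, each iteration blends them with the previous values. A one-line sketch, with `omega` (0 < omega <= 1) an assumed relaxation factor rather than the paper's tuned value:

    ```python
    # Under-relaxed update of the nonlinear forcing terms between iterations:
    # omega = 1 recovers the plain iteration; smaller omega damps oscillations
    # that would otherwise make the iteration diverge at large amplitudes.
    def relax(previous, computed, omega=0.6):
        return (1.0 - omega) * previous + omega * computed
    ```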

  14. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
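
    For reference, the baseline IST algorithm that the paper builds on solves min_x ||y - Ax||^2 + lam*||x||_1 by alternating a gradient step with soft thresholding. A minimal sketch follows, with an optional winner-take-most suppression step standing in as a loose, assumed analogue of the lateral-inhibition features the authors add (their exact mechanism is not reproduced here):

    ```python
    # Sketch of Iterative Soft Thresholding (IST) with an optional, assumed
    # lateral-inhibition step that suppresses weak coefficients competing with
    # strong ones.
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ist(A, y, lam=0.1, iters=200, inhibit=0.0):
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft(x + A.T @ (y - A @ x) / L, lam / L)   # gradient step + shrinkage
            if inhibit > 0.0 and np.any(x):
                # winner-take-most competition: zero coefficients far below the
                # current strongest coefficient
                x[np.abs(x) < inhibit * np.abs(x).max()] = 0.0
        return x
    ```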

  15. Improved restoration algorithm for weakly blurred and strongly noisy image

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Xia, Guo; Zhou, Haiyang; Bai, Jian; Yu, Feihong

    2015-10-01

    In real applications, such as consumer digital imaging, it is very common to record weakly blurred but strongly noisy images. Recently, a state-of-the-art algorithm named geometric locally adaptive sharpening (GLAS) has been proposed. By capturing local image structure, it can effectively combine denoising and sharpening. However, two problems remain in practice. On one hand, two hard thresholds have to be constantly adjusted for different images so as not to produce over-sharpening artifacts. On the other hand, the smoothing parameter must be set manually and precisely; otherwise, it will seriously magnify the noise. These parameters have to be set in advance and entirely empirically, which is difficult to achieve in a practical application, so the method is neither easy to use nor smart enough. To improve restoration in this situation by way of GLAS, an improved GLAS (IGLAS) algorithm that introduces the local phase coherence sharpening index (LPCSI) metric is proposed in this paper. With the help of the LPCSI metric, the two hard thresholds can be fixed at constant values for all images; compared to the original method, the thresholds in our new algorithm no longer need to change with different images. Based on the proposed IGLAS, an automatic version is also developed to remove the disadvantages of manual intervention. Simulated and real experimental results show that the proposed algorithm not only obtains better performance than the original method but is also very easy to apply.

  16. An improved algorithm for geocentric to geodetic coordinate conversion

    SciTech Connect

    Toms, R.

    1996-02-01

    The problem of performing transformations from geocentric to geodetic coordinates has received an inordinate amount of attention in the literature. Numerous approximate methods have been published. Almost none of the publications address the issue of efficiency, and in most cases there is a paucity of error analysis. Recently there has been a surge of interest in this problem aimed at developing more efficient methods for real-time applications such as DIS. Iterative algorithms have been proposed that are not of optimal efficiency, address only one error component and require a small but uncertain number of relatively expensive iterations for convergence. In a recent paper published by the author a new algorithm was proposed for the transformation of geocentric to geodetic coordinates. The new algorithm was tested at the Visual Systems Laboratory at the Institute for Simulation and Training, the University of Central Florida, and found to be 30 percent faster than the best previously published algorithm. In this paper further improvements are made in terms of efficiency. For completeness and to make this paper more readable, it was decided to revise the previous paper and to publish it as a new report. The introduction describes the improvements in more detail.
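
    For context, the conversion the paper accelerates is conventionally done with a fixed-point iteration such as the one sketched below (WGS-84 constants assumed); the author's algorithm achieves the same result with fewer, cheaper operations, which the sketch does not attempt to reproduce.

    ```python
    # Standard fixed-point iteration for geocentric (x, y, z) -> geodetic
    # (latitude, longitude, height) on the WGS-84 ellipsoid.
    import math

    A  = 6378137.0                 # WGS-84 semi-major axis (m)
    E2 = 6.69437999014e-3          # first eccentricity squared

    def geocentric_to_geodetic(x, y, z, iters=5):
        lon = math.atan2(y, x)
        p = math.hypot(x, y)                       # distance from the spin axis
        lat = math.atan2(z, p * (1.0 - E2))        # initial latitude guess
        for _ in range(iters):
            n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # prime vertical radius
            h = p / math.cos(lat) - n              # ellipsoidal height
            lat = math.atan2(z, p * (1.0 - E2 * n / (n + h)))
        return math.degrees(lat), math.degrees(lon), h
    ```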

  17. An Improved Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.

    2000-01-01

    A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.

  18. Ceramic Composite Intermediate Temperature Stress-Rupture Properties Improved Significantly

    NASA Technical Reports Server (NTRS)

    Morscher, Gregory N.; Hurst, Janet B.

    2002-01-01

    Silicon carbide (SiC) composites are considered to be potential materials for future aircraft engine parts such as combustor liners. It is envisioned that on the hot side (inner surface) of the combustor liner, composites will have to withstand temperatures in excess of 1200 C for thousands of hours in oxidizing environments. This is a severe condition; however, an equally severe, if not more detrimental, condition exists on the cold side (outer surface) of the combustor liner. Here, the temperatures are expected to be on the order of 800 to 1000 C under high tensile stress because of thermal gradients and attachment of the combustor liner to the engine frame (the hot side will be under compressive stress, a less severe stress-state for ceramics). Since these composites are not oxides, they oxidize. The worst form of oxidation for strength reduction occurs at these intermediate temperatures, where the boron nitride (BN) interphase oxidizes first, which causes the formation of a glass layer that strongly bonds the fibers to the matrix. When the fibers strongly bond to the matrix or to one another, the composite loses toughness and strength and becomes brittle. To increase the intermediate temperature stress-rupture properties, researchers must modify the BN interphase. With the support of the Ultra-Efficient Engine Technology (UEET) Program, significant improvements were made as state-of-the-art SiC/SiC composites were developed during the Enabling Propulsion Materials (EPM) program. Three approaches were found to improve the intermediate-temperature stress-rupture properties: fiber-spreading, high-temperature silicon- (Si) doped boron nitride (BN), and outside-debonding BN.

  19. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation.

    PubMed

    Li, Yingsong; Hamamura, Masanori

    2014-01-01

    To make use of the sparsity property of broadband multipath wireless communication channels, we mathematically propose an lp-norm-constrained proportionate normalized least-mean-square (LP-PNLMS) sparse channel estimation algorithm. A general lp-norm is weighted by the gain matrix and is incorporated into the cost function of the proportionate normalized least-mean-square (PNLMS) algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence speed and steady-state performance of the inactive taps are significantly improved. Our simulation results demonstrate that the proposed algorithm can effectively improve the estimation performance of the PNLMS-based algorithm for sparse channel estimation applications. PMID:24782663

  20. An Improved Proportionate Normalized Least-Mean-Square Algorithm for Broadband Multipath Channel Estimation

    PubMed Central

    2014-01-01

    To make use of the sparsity property of broadband multipath wireless communication channels, we mathematically propose an lp-norm-constrained proportionate normalized least-mean-square (LP-PNLMS) sparse channel estimation algorithm. A general lp-norm is weighted by the gain matrix and is incorporated into the cost function of the proportionate normalized least-mean-square (PNLMS) algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence speed and steady-state performance of the inactive taps are significantly improved. Our simulation results demonstrate that the proposed algorithm can effectively improve the estimation performance of the PNLMS-based algorithm for sparse channel estimation applications. PMID:24782663
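
    A sketch of a single LP-PNLMS-style update as described in the two records above: a proportionate gain vector scales the NLMS step so large taps adapt faster, and an lp-norm-derived zero attractor pulls inactive taps toward zero. The attractor form and all constants are illustrative assumptions rather than the paper's exact expressions.

    ```python
    # Sketch of one PNLMS adaptation step with an assumed lp-norm zero attractor.
    import numpy as np

    def lp_pnlms_update(w, x, d, mu=0.5, rho=0.01, delta=1e-2, kappa=1e-5, p=0.5):
        """w: filter taps, x: input vector, d: desired output sample."""
        e = d - w @ x                                    # a-priori error
        k = np.maximum(rho * max(delta, np.abs(w).max()), np.abs(w))
        g = k / k.sum()                                  # proportionate gain vector
        w = w + mu * e * g * x / (x @ (g * x) + delta)   # PNLMS step
        # lp-norm zero attractor (assumed form): pulls small taps toward zero;
        # the small epsilon avoids a singularity at w = 0
        w = w - kappa * np.sign(w) * (np.abs(w) + 1e-8) ** (p - 1.0)
        return w
    ```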

  1. Algorithmic improvements to an exact region-filling technique

    NASA Astrophysics Data System (ADS)

    Elias Fabris, Antonio; Ramos Batista, Valério

    2015-09-01

    We present many algorithmic improvements to our earlier region-filling technique, which in a previous publication was proved to be correct for all connected digital pictures. Ours is an integer-only method that also finds all interior points of any given digital picture, displaying and storing them in a locating matrix. Our filling/locating program is applicable both in computer graphics and in image processing.
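
    A minimal integer-only sketch in the spirit of the technique: a breadth-first fill that finds the interior points of a connected region and records them in a locating matrix. The authors' exact algorithm and its improvements are not reproduced here.

    ```python
    # Integer-only flood fill that stores every interior point of a connected
    # region in a "locating matrix".
    from collections import deque

    def fill_locate(picture, seed, boundary=1):
        """picture: 2D list of ints; seed: (y, x) point inside the region."""
        ny, nx = len(picture), len(picture[0])
        locating = [[0] * nx for _ in range(ny)]      # 1 marks found interior points
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            if not (0 <= y < ny and 0 <= x < nx):
                continue
            if picture[y][x] == boundary or locating[y][x]:
                continue
            locating[y][x] = 1                        # store the interior point
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
        return locating
    ```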

  2. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists; these include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  3. Reconstruction algorithm improving the spatial resolution of Micro-CT

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Wei, Dongbo; Li, Bing; Zhang, Lei

    2008-03-01

    X-ray micro computed tomography (Micro-CT) enables nondestructive visualization of the internal structure of objects with high-resolution images and plays an important role in industrial nondestructive testing, material evaluation and medical research. Because the micro focus is much smaller than an ordinary focus, the geometric unsharpness of a Micro-CT projection is tens of times smaller than that of ordinary CT systems, so scan conditions with high geometric magnification can be adopted to acquire projection data with a high sampling frequency. Based on this feature, a new filtered back projection reconstruction algorithm is investigated to improve the spatial resolution of Micro-CT. This algorithm permits the reconstruction center to lie at any point on the line connecting the focus and the rotation center, and it can reconstruct CT images at different geometric magnifications by adjusting the position of the reconstruction center, making the best of the above feature to improve the spatial resolution of Micro-CT. A computer simulation and a CT experiment with a special spatial-resolution phantom were executed to check the validity of this method. The results demonstrate the effect of the new algorithm; analysis shows that the spatial resolution can be improved by 50%.

  4. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probability that different values are used, iterating to convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  5. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probability that different values are used, iterating to convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
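
    For reference, the basic HS loop that both records above build on is short enough to sketch: new harmonies are improvised from the harmony memory with probability HMCR, pitch-adjusted with probability PAR, and replace the worst memory member when they improve it. The rough-set initialization and fuzzy-clustering stages described above are not reproduced.

    ```python
    # Skeleton of the basic harmony search loop (minimization).
    import random

    def harmony_search(f, bounds, hm_size=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
        dim = len(bounds)
        hm = [[random.uniform(*bounds[j]) for j in range(dim)] for _ in range(hm_size)]
        scores = [f(h) for h in hm]
        for _ in range(iters):
            new = []
            for j in range(dim):
                if random.random() < hmcr:                # pick from harmony memory
                    v = random.choice(hm)[j]
                    if random.random() < par:             # pitch adjustment
                        v += random.uniform(-bw, bw) * (bounds[j][1] - bounds[j][0])
                else:                                     # random consideration
                    v = random.uniform(*bounds[j])
                new.append(min(max(v, bounds[j][0]), bounds[j][1]))
            worst = max(range(hm_size), key=scores.__getitem__)
            s = f(new)
            if s < scores[worst]:                         # replace the worst harmony
                hm[worst], scores[worst] = new, s
        best = min(range(hm_size), key=scores.__getitem__)
        return hm[best], scores[best]
    ```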

  6. Efficient Improvement of Silage Additives by Using Genetic Algorithms

    PubMed Central

    Davies, Zoe S.; Gilbert, Richard J.; Merry, Roger J.; Kell, Douglas B.; Theodorou, Michael K.; Griffith, Gareth W.

    2000-01-01

    The enormous variety of substances which may be added to forage in order to manipulate and improve the ensilage process presents an empirical, combinatorial optimization problem of great complexity. To investigate the utility of genetic algorithms for designing effective silage additive combinations, a series of small-scale proof of principle silage experiments were performed with fresh ryegrass. Having established that significant biochemical changes occur over an ensilage period as short as 2 days, we performed a series of experiments in which we used 50 silage additive combinations (prepared by using eight bacterial and other additives, each of which was added at six different levels, including zero [i.e., no additive]). The decrease in pH, the increase in lactate concentration, and the free amino acid concentration were measured after 2 days and used to calculate a “fitness” value that indicated the quality of the silage (compared to a control silage made without additives). This analysis also included a “cost” element to account for different total additive levels. In the initial experiment additive levels were selected randomly, but subsequently a genetic algorithm program was used to suggest new additive combinations based on the fitness values determined in the preceding experiments. The result was very efficient selection for silages in which large decreases in pH and high levels of lactate occurred along with low levels of free amino acids. During the series of five experiments, each of which comprised 50 treatments, there was a steady increase in the amount of lactate that accumulated; the best treatment combination was that used in the last experiment, which produced 4.6 times more lactate than the untreated silage. The additive combinations that were found to yield the highest fitness values in the final (fifth) experiment were assessed to determine a range of biochemical and microbiological quality parameters during full-term silage
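
    The optimization loop described above is a standard generational GA in which the fitness evaluation is a wet-lab experiment. A skeleton under that reading, with `measured_fitness` standing in for the laboratory scoring (pH drop, lactate, free amino acids, additive cost); the encoding of 8 additives at 6 levels and the population of 50 follow the abstract, everything else is an assumption.

    ```python
    # Skeleton of an experiment-in-the-loop GA: chromosomes encode 8 additives
    # at one of 6 levels (0 = none); fitness comes from silage measurements.
    import random

    N_ADDITIVES, N_LEVELS, POP = 8, 6, 50

    def random_chromosome():
        return [random.randrange(N_LEVELS) for _ in range(N_ADDITIVES)]

    def next_generation(population, fitness, mut_rate=0.1):
        """fitness: list of lab-derived scores for the current population."""
        def tournament():
            a, b = random.sample(range(len(population)), 2)
            return population[a] if fitness[a] > fitness[b] else population[b]
        children = []
        while len(children) < POP:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, N_ADDITIVES)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(N_ADDITIVES):                  # level mutation
                if random.random() < mut_rate:
                    child[i] = random.randrange(N_LEVELS)
            children.append(child)
        return children

    population = [random_chromosome() for _ in range(POP)]
    # each round: prepare 50 silages, measure after 2 days, then call
    # population = next_generation(population, measured_fitness)
    ```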

  7. SHIFTX2: significantly improved protein chemical shift prediction.

    PubMed

    Han, Beomsoo; Liu, Yifeng; Ginzinger, Simon W; Wishart, David S

    2011-05-01

    A new computer program, called SHIFTX2, is described which is capable of rapidly and accurately calculating diamagnetic (1)H, (13)C and (15)N chemical shifts from protein coordinate data. Compared to its predecessor (SHIFTX) and to other existing protein chemical shift prediction programs, SHIFTX2 is substantially more accurate (up to 26% better by correlation coefficient with an RMS error that is up to 3.3× smaller) than the next best performing program. It also provides significantly more coverage (up to 10% more), is significantly faster (up to 8.5×) and capable of calculating a wider variety of backbone and side chain chemical shifts (up to 6×) than many other shift predictors. In particular, SHIFTX2 is able to attain correlation coefficients between experimentally observed and predicted backbone chemical shifts of 0.9800 ((15)N), 0.9959 ((13)Cα), 0.9992 ((13)Cβ), 0.9676 ((13)C'), 0.9714 ((1)HN), 0.9744 ((1)Hα) and RMS errors of 1.1169, 0.4412, 0.5163, 0.5330, 0.1711, and 0.1231 ppm, respectively. The correlation between SHIFTX2's predicted and observed side chain chemical shifts is 0.9787 ((13)C) and 0.9482 ((1)H) with RMS errors of 0.9754 and 0.1723 ppm, respectively. SHIFTX2 is able to achieve such a high level of accuracy by using a large, high quality database of training proteins (>190), by utilizing advanced machine learning techniques, by incorporating many more features (χ(2) and χ(3) angles, solvent accessibility, H-bond geometry, pH, temperature), and by combining sequence-based with structure-based chemical shift prediction techniques. With this substantial improvement in accuracy we believe that SHIFTX2 will open the door to many long-anticipated applications of chemical shift prediction to protein structure determination, refinement and validation. SHIFTX2 is available both as a standalone program and as a web server ( http://www.shiftx2.ca ). PMID:21448735

  8. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves the knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
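
    Two of the three scores above are straightforward to compute with scikit-learn (assumed available), as the sketch below shows: mutual information as the information-gain analogue, and RF impurity-based importances. CFS has no stock scikit-learn implementation and is omitted; the variable names are placeholders.

    ```python
    # Sketch: rank candidate permafrost predictors by two importance scores.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import mutual_info_classif

    def rank_features(X, y, names):
        """X: (samples, ~20-25 terrain/climate variables); y: permafrost 0/1."""
        ig = mutual_info_classif(X, y, random_state=0)       # IG analogue
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        order_ig = np.argsort(ig)[::-1]                      # most informative first
        order_rf = np.argsort(rf.feature_importances_)[::-1]
        return [names[i] for i in order_ig], [names[i] for i in order_rf]
    ```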

  9. Improved Reversible Jump Algorithms for Bayesian Species Delimitation

    PubMed Central

    Rannala, Bruce; Yang, Ziheng

    2013-01-01

    Several computational methods have recently been proposed for delimiting species using multilocus sequence data. Among them, the Bayesian method of Yang and Rannala uses the multispecies coalescent model in the likelihood framework to calculate the posterior probabilities for the different species-delimitation models. It has a sound statistical basis and is found to have nice statistical properties in simulation studies, such as low error rates of undersplitting and oversplitting. However, the method suffers from poor mixing of the reversible-jump Markov chain Monte Carlo (rjMCMC) algorithms. Here, we describe several modifications to the algorithms. We propose a flexible prior that allows the user to specify the probability that each node on the guide tree represents a true speciation event. We also introduce modifications to the rjMCMC algorithms that remove the constraint on the new species divergence time when splitting and alter the gene trees to remove incompatibilities. The new algorithms are found to improve mixing of the Markov chain for both simulated and empirical data sets. PMID:23502678

  10. Improved delay-leaping simulation algorithm for biochemical reaction systems with delays

    NASA Astrophysics Data System (ADS)

    Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei

    2012-04-01

    In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue. As a result, it is important to control the size of the wait queue to improve the efficiency of the simulation. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue as well as shortening the storage and access time, thereby accelerating the simulation. Numerical simulations on two examples indicate that this method not only achieves significantly better efficiency than existing methods but can also be widely applied to biochemical reaction systems with delays.
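
    For orientation, a minimal sketch of the wait-queue mechanics in a basic delay stochastic simulation step loop (not the paper's improved delay-leaping update scheme); the single delayed reaction, its rate, and the delay below are made-up values:

```python
# Hedged sketch of a basic delay-SSA step loop with a wait queue (heap);
# this illustrates the queue mechanics only, not the paper's improved
# delay-leaping update scheme. Reaction, rate, and delay are made up.
import heapq, math, random

x = {"A": 100, "B": 0}           # species counts (illustrative)
k = 0.5                          # rate of the delayed reaction A -> B
delay = 2.0                      # fixed delay before the product appears
t, t_end = 0.0, 10.0
wait_queue = []                  # (completion_time, species) min-heap

while t < t_end:
    a0 = k * x["A"]              # total propensity
    tau = math.inf if a0 == 0 else -math.log(random.random()) / a0
    if wait_queue and wait_queue[0][0] < t + tau:
        # A queued delayed product completes before the next initiation.
        t, species = heapq.heappop(wait_queue)
        x[species] += 1
    else:
        if a0 == 0:
            break
        t += tau                 # initiate a new delayed reaction
        x["A"] -= 1              # reactant consumed now...
        heapq.heappush(wait_queue, (t + delay, "B"))  # ...product later

print(t, x)
```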

  11. An improved piecewise linear chaotic map based image encryption algorithm.

    PubMed

    Hu, Yuping; Zhu, Congxu; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the chaotic system's sensitivity to initial key values and system parameters, and to its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order but alternately from the beginning and the end of the image. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack. PMID:24592159
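
    A minimal sketch of the general idea, assuming the classical (unmodified) PWLCM and a simple XOR diffusion pass; the map parameter, seed, and message below are arbitrary illustrations, not the paper's MPWLCM design:

```python
# Hedged sketch: a standard piecewise linear chaotic map (PWLCM) used to
# generate a keystream for XOR diffusion. Generic illustration only, not
# the paper's modified MPWLCM scheme; parameters are arbitrary.
import numpy as np

def pwlcm(x, p):
    """One iteration of the classical PWLCM on (0, 1)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)     # symmetric upper half

def keystream(n, x0=0.2718, p=0.3141):
    x, out = x0, []
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)   # quantize state to a byte
    return np.array(out, dtype=np.uint8)

plain = np.frombuffer(b"example plain-image bytes", dtype=np.uint8)
cipher = plain ^ keystream(plain.size)                  # diffusion by XOR
assert np.all(cipher ^ keystream(plain.size) == plain)  # decryption
```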

  12. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    PubMed Central

    Hu, Yuping; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the chaotic system's sensitivity to initial key values and system parameters, and to its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order but alternately from the beginning and the end of the image. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack. PMID:24592159

  13. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment based on video images for analyzing missile placement is a new study area. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video imagery. The restrictions are twofold: the feature-point extraction area is restricted, and the number of feature points is restricted. A missile placement analysis process based on video images was designed, and video splicing with random sample consensus purification was achieved. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.
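
    A minimal sketch of the two restrictions described above, using OpenCV's SURF implementation (found in the non-free opencv-contrib package; availability varies by build). The input file, region of interest, and thresholds are hypothetical:

```python
# Hedged sketch of the two "restrictions": (1) a mask limits the
# feature-extraction area, (2) only the strongest keypoints are kept.
# SURF lives in opencv-contrib (non-free); file name and ROI are made up.
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame

# Restriction 1: only extract features inside a region of interest.
mask = np.zeros(img.shape, dtype=np.uint8)
mask[100:400, 150:500] = 255                         # illustrative ROI

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(img, mask)

# Restriction 2: cap the number of feature points by response strength.
MAX_POINTS = 200
order = sorted(range(len(keypoints)),
               key=lambda i: -keypoints[i].response)[:MAX_POINTS]
keypoints = [keypoints[i] for i in order]
descriptors = descriptors[order]
print(f"kept {len(keypoints)} strongest keypoints in the ROI")
```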

  14. Improvement of Passive Microwave Rainfall Retrieval Algorithm over Mountainous Terrain

    NASA Astrophysics Data System (ADS)

    Shige, S.; Yamamoto, M.

    2015-12-01

    The microwave radiometer (MWR) algorithms underestimate heavy rainfall associated with shallow orographic rainfall systems owing to weak ice-scattering signatures. Underestimation by the Global Satellite Mapping of Precipitation (GSMaP) MWR has been mitigated by an orographic/nonorographic rainfall classification scheme (Shige et al. 2013, 2015; Taniguchi et al. 2013; Yamamoto and Shige 2015). The scheme is developed on the basis of orographically forced upward vertical motion and the convergence of surface moisture flux estimated from ancillary data. Lookup tables derived from orographic precipitation profiles are used to estimate rainfall for orographic rainfall pixels, whereas those derived from the original precipitation profiles are used for nonorographic rainfall pixels. The classification scheme has been used in the versions of GSMaP products that are available in near real time (about 4 h after observation) via the Internet (http://sharaku.eorc.jaxa.jp/GSMaP/index.htm). The current version of the GSMaP MWR algorithm with the orographic/nonorographic rainfall classification scheme improves rainfall estimation over the entire tropical region, but there is still room for improvement. In this talk, further improvements to orographic rainfall retrievals will be shown.

  15. An improved distance matrix computation algorithm for multicore clusters.

    PubMed

    Al-Neama, Mohammed W; Reda, Naglaa M; Ghaleb, Fayed F M

    2014-01-01

    Distance matrices have diverse uses in different research areas. Their computation is typically an essential task in most bioinformatics applications, especially in multiple sequence alignment. The gigantic explosion of biological sequence databases leads to an urgent need to accelerate these computations. The DistVect algorithm was introduced in the paper of Al-Neama et al. (in press) as a recent approach to vectorizing distance matrix computation. It showed efficient performance in both sequential and parallel computing. However, the multicore cluster systems now available, with their scalability and performance/cost ratio, meet the need for more powerful and efficient performance. This paper proposes DistVect1, a highly efficient parallel vectorized algorithm for computing distance matrices on multicore clusters. It reformulates the vectorized DistVect algorithm in terms of cluster primitives and deduces an efficient approach to partitioning and scheduling computations suited to this type of architecture. The implementation employs the potential of both the MPI and OpenMP libraries. Experimental results show that the proposed method achieves around a 3-fold speedup over SSE2, and speedups of more than 9-fold compared to the publicly available parallel implementation utilized in ClustalW-MPI. PMID:25013779

  16. An improved algorithm of fiber tractography demonstrates postischemic cerebral reorganization

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-dong; Lu, Jie; Yao, Li; Li, Kun-cheng; Zhao, Xiao-jie

    2008-03-01

    In vivo white matter tractography by diffusion tensor imaging (DTI) accurately represents the organizational architecture of white matter in the vicinity of brain lesions, especially in the ischemic brain. In this study, we propose an improved fiber-tracking algorithm based on TEND, called TENDAS (tensor deflection with adaptive stepping), which introduces a stepping framework for interpreting algorithm behavior as a function of tensor shape (linear-shaped or not) and tract history. The propagation direction at each step is given by the deflection vector. TENDAS tractography, combined with fMRI, was used to examine a 17-year-old recovering patient with congenital right-hemisphere artery stenosis. A meaningless-picture location task served as the spatial working memory task in this study. We detected shifted functional localization to the contralateral homotypic cortex, with more prominent and extensive left-sided parietal and medial frontal cortical activations, which were used directly as a seed mask for tractography to reconstruct individual parietal spatial pathways. Compared with the TEND algorithm, TENDAS shows a smoother and less sharply bending characterization of the white matter architecture of the parietal cortex. The results of this preliminary study are twofold. First, TENDAS may provide more adaptability and accuracy in reconstructing certain anatomical features, although it is very difficult to verify tractography maps of white matter connectivity in the living human brain. Second, our study indicates that the combination of TENDAS and fMRI provides a unique image of functional cortical reorganization and structural modification of postischemic spatial working memory.

  17. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2006-01-01

    Retrieving surface longwave radiation from space has been a difficult task since the surface downwelling longwave radiation (SDLW) integrates radiation emitted by the entire atmosphere, while radiation emitted by the upper atmosphere is absorbed before reaching the surface. It is particularly problematic when thick clouds are present, since thick clouds virtually block all longwave radiation from above, while satellites observe atmospheric emissions mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with the surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite its simplicity, their algorithm performed very well for most geographical regions except those where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas covered with ice clouds. An improved version of the algorithm was developed that prevents large SDLW errors at low water vapor amounts. The new algorithm also utilizes the cloud fraction and the cloud liquid and ice water paths measured by the Clouds and the Earth's Radiant Energy System (CERES) satellites to compute the clear and cloudy portions of the fluxes separately. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than, or comparable to, more sophisticated algorithms currently implemented in CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.
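
    A minimal sketch of the overall flux-combination structure described in this record: clear-sky and cloudy-sky SDLW computed separately and weighted by cloud fraction. The regression forms and coefficients below are placeholders, not the published Zhou-Cess coefficients:

```python
# Hedged sketch of the general clear/cloudy flux combination; the
# functional forms and coefficients are illustrative placeholders only.
import numpy as np

def sdlw_clear(f_up, pwv, a=(0.6, 0.15, -0.01)):
    """Clear-sky SDLW from surface upwelling LW flux and column
    precipitable water vapor (illustrative regression form)."""
    w = np.log(1.0 + pwv)
    return f_up * (a[0] + a[1] * w + a[2] * w ** 2)

def sdlw_cloudy(f_clr, lwp, iwp, b=(0.05, 0.02)):
    """Cloudy-sky SDLW adds a liquid/ice water path enhancement."""
    return f_clr * (1.0 + b[0] * np.log1p(lwp) + b[1] * np.log1p(iwp))

# Combine with cloud-fraction weighting, as the improved algorithm
# computes clear and cloudy portions of the flux separately.
f_up, pwv, lwp, iwp, cf = 390.0, 2.5, 80.0, 10.0, 0.4
f_clr = sdlw_clear(f_up, pwv)
sdlw = (1.0 - cf) * f_clr + cf * sdlw_cloudy(f_clr, lwp, iwp)
print(f"SDLW ~ {sdlw:.1f} W m^-2 (illustrative numbers only)")
```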

  18. Significant improvement in IR surface-temperature measurements

    SciTech Connect

    Briles, S.D.; Bennett, G.A.; Larkin, T.H.; Worcester, P.

    1989-06-01

    Obtaining infrared (IR) surface-temperature measurements of miniature square targets on the order of 1.6 mm with a spatial resolution of 15 μm has recently become possible using the Barnes Engineering Computherm System, but the accuracy and precision of the measurements have been limited. The objective of this work is to provide a calibration procedure that will improve by a factor of 8 the accuracy and precision of the two-dimensional temperature measurement. The IR microscope detects energy emitted by the target and displays it as a radiance image. Heating the target to two known temperatures permits calculation of the target emissivity using the radiances at each pixel in the two-dimensional field. An error is induced in the emissivity calculation by substituting the thermal-well temperature for the known target surface temperature. At the same time, the radiance image is distorted by two functions that affect the measurement accuracy. The precision of the instrument is altered by a random noise field function. The noise functions were investigated to determine whether they were added to or multiplied by the radiance equation. A plot of image-radiance means shows the same trends as the added noise functions suggested by the prediction. Correction of the induced distortions improved the accuracy noticeably. Further improvement in the accuracy is accomplished by using a syringe thermocouple to measure the actual surface temperatures used for the emissivity calculations. Investigation of the random noise field shows that it is zero-mean and Gaussian in nature. We can therefore average images over time to improve the precision. 9 refs., 12 figs., 1 tab.
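
    A minimal sketch of the two-temperature, per-pixel emissivity calibration described above, under a gray-body model L = eps*B(T) + (1 - eps)*L_bg: imaging the target at two known temperatures lets eps be solved per pixel, and averaging repeated images then improves precision against zero-mean Gaussian noise. All numbers are synthetic:

```python
# Hedged sketch: recover emissivity from radiances at two known target
# temperatures, since L2 - L1 = eps * (B(T2) - B(T1)) when the reflected
# background term is constant. Inputs are synthetic, not instrument data.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(T, lam=10e-6):
    """Blackbody spectral radiance at wavelength lam (W sr^-1 m^-3)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

T1, T2 = 320.0, 360.0                      # two known target temperatures
true_eps = 0.85
L_bg = 0.2 * planck(300.0)                 # reflected background term
L1 = true_eps * planck(T1) + (1 - true_eps) * L_bg
L2 = true_eps * planck(T2) + (1 - true_eps) * L_bg

eps = (L2 - L1) / (planck(T2) - planck(T1))   # per pixel in practice
print(f"recovered emissivity: {eps:.3f}")      # ~0.85

# Precision: averaging N radiance images with zero-mean Gaussian noise
# reduces the random-noise standard deviation by roughly sqrt(N).
```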

  19. Creating a Middle Grades Environment that Significantly Improves Student Achievement

    ERIC Educational Resources Information Center

    L'Esperance, Mark E.; Lenker, Ethan; Bullock, Ann; Lockamy, Becky; Mason, Cathy

    2013-01-01

    This article offers an overview of the framework that Sampson County Public Schools (North Carolina) used to critically reflect on the current state of their middle grades schools. The article also highlights the changes that resulted from the district-wide analysis and the ways in which these changes led to a significant increase in the academic…

  20. Improved Bat algorithm for the detection of myocardial infarction.

    PubMed

    Kora, Padmavathi; Kalva, Sri Ramakrishna

    2015-01-01

    Medical practitioners study the electrical activity of the human heart in order to detect heart disease from the electrocardiogram (ECG) of heart patients. A myocardial infarction (MI), or heart attack, is a heart disease that occurs when there is a block (blood clot) in the pathway of one or more coronary blood vessels (arteries) that supply blood to the heart muscle. Abnormalities in the heart can be identified by changes in the ECG signal. The first step in the detection of MI is preprocessing of the ECG, which removes noise by using filters. Feature extraction is the next key process for detecting changes in the ECG signal. This paper presents a method for extracting key features from each cardiac beat using an improved Bat algorithm. The algorithm extracts the best features, and this reduced feature set is then applied as input to a neural network classifier. It has been observed that the performance of the classifier is improved with the help of the optimized features. PMID:26558169

  1. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves obtained by processing this multilook data for the high-resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.

  2. A clinical algorithm for triaging patients with significant lymphadenopathy in primary health care settings in Sudan

    PubMed Central

    El Hag, Imad A.; Elsiddig, Kamal E.; Elsafi, Mohamed E.M.O; Elfaki, Mona E.E.; Musa, Ahmed M.; Musa, Brima Y.; Elhassan, Ahmed M.

    2013-01-01

    Abstract Background Tuberculosis is a major health problem in developing countries. The distinction between tuberculous lymphadenitis, non-specific lymphadenitis and malignant lymph node enlargement has to be made at primary health care levels using easy, simple and cheap methods. Objective To develop a reliable clinical algorithm for primary care settings to triage cases of non-specific, tuberculous and malignant lymphadenopathies. Methods Calculation of the odds ratios (OR) of the chosen predictor variables was carried out using logistic regression. The numerical score values of the predictor variables were weighted according to their respective OR. The performance of the score was evaluated by the ROC (Receiver Operating Characteristic) curve. Results Four predictor variables (Mantoux reading, erythrocyte sedimentation rate (ESR), nocturnal fever and discharging sinuses) correlated significantly with TB diagnosis and were included in the reduced model to establish score A. For score B, the reduced model included Mantoux reading, ESR, lymph-node size and lymph-node number as predictor variables for malignant lymph nodes. Score A ranged from 0 to 12, and a cut-off point of 6 gave a best sensitivity and specificity of 91% and 90%, respectively, whilst score B ranged from -3 to 8, and a cut-off point of 3 gave a best sensitivity and specificity of 83% and 76%, respectively. The calculated area under the ROC curve was 0.964 (95% CI, 0.949-0.980) and 0.856 (95% CI, 0.787-0.925) for scores A and B, respectively, indicating good performance. Conclusion The developed algorithm can efficiently triage cases with tuberculous and malignant lymphadenopathies for treatment or referral to specialised centres for further work-up.

  3. Accuracy of pitch matching significantly improved by live voice model.

    PubMed

    Granot, Roni Y; Israel-Kolatt, Rona; Gilboa, Avi; Kolatt, Tsafrir

    2013-05-01

    Singing is, undoubtedly, the most fundamental expression of our musical capacity, yet an estimated 10-15% of the Western population sings "out-of-tune (OOT)." Previous research in children and adults suggests, albeit inconsistently, that imitating a human voice can improve pitch matching. In the present study, we focus on the potentially beneficial effects of the human voice and especially the live human voice. Eighteen participants varying in their singing abilities were required to imitate in singing a set of nine ascending and descending intervals presented to them in five different randomized blocked conditions: live piano, recorded piano, live voice using optimal voice production, recorded voice using optimal voice production, and recorded voice using artificial forced voice production. Pitch and interval matching in singing were much more accurate when participants repeated sung intervals as compared with intervals played to them on the piano. The advantage of the vocal over the piano stimuli was robust and emerged clearly regardless of whether piano tones were played live and in full view or were presented via recording. Live vocal stimuli elicited higher accuracy than recorded vocal stimuli, especially when the recorded vocal stimuli were produced in a forced vocal production. Remarkably, even those who would be considered OOT singers on the basis of their performance when repeating piano tones were able to pitch match live vocal sounds, with deviations well within the range of what is considered accurate singing (M=46.0, standard deviation=39.2 cents). In fact, those participants who were most OOT gained the most from the live voice model. Results are discussed in light of the dual auditory-motor encoding of pitch analogous to that found in speech. PMID:23528675

  4. Low Dose Vaporized Cannabis Significantly Improves Neuropathic Pain

    PubMed Central

    Wilsey, Barth; Marcotte, Thomas D.; Deutsch, Reena; Gouaux, Ben; Sakai, Staci; Donaghe, Haylee

    2013-01-01

    We conducted a double-blind, placebo-controlled, crossover study evaluating the analgesic efficacy of vaporized cannabis in subjects, the majority of whom were experiencing neuropathic pain despite traditional treatment. Thirty-nine patients with central and peripheral neuropathic pain underwent a standardized procedure for inhaling either medium dose (3.53%), low dose (1.29%), or placebo cannabis, with the primary outcome being VAS pain intensity. Psychoactive side effects and neuropsychological performance were also evaluated. Mixed effects regression models demonstrated an analgesic response to vaporized cannabis. There was no significant difference between the two active dose groups' results (p>0.7). The number needed to treat (NNT) to achieve 30% pain reduction was 3.2 for placebo vs. low dose, 2.9 for placebo vs. medium dose, and 25 for medium vs. low dose. As these NNTs are comparable to those of traditional neuropathic pain medications, cannabis has analgesic efficacy, with the low dose being, for all intents and purposes, as effective a pain reliever as the medium dose. Psychoactive effects were minimal and well-tolerated, and neuropsychological effects were of limited duration and readily reversible within 1-2 hours. Vaporized cannabis, even at low doses, may present an effective option for patients with treatment-resistant neuropathic pain. PMID:23237736

  5. Direct ChIP-Seq significance analysis improves target prediction

    PubMed Central

    2015-01-01

    Background Chromatin immunoprecipitation followed by sequencing of protein-bound DNA fragments (ChIP-Seq) is an effective high-throughput methodology for the identification of context specific DNA fragments that are bound by specific proteins in vivo. Despite significant progress in the bioinformatics analysis of this genome-scale data, a number of challenges remain as technology-dependent biases, including variable target accessibility and mappability, sequence-dependent variability, and non-specific binding affinity must be accounted for. Results and discussion We introduce a nonparametric method for scoring consensus regions of aligned immunoprecipitated DNA fragments when appropriate control experiments are available. Our method uses local models for null binding; these are necessary because binding prediction scores based on global models alone fail to properly account for specialized features of genomic regions and chance pull downs of specific DNA fragments, thus disproportionally rewarding some genomic regions and decreasing prediction accuracy. We make no assumptions about the structure or amplitude of bound peaks, yet we show that our method outperforms leading methods developed using either global or local null hypothesis models for random binding. We test prediction performance by comparing analyses of ChIP-seq, ChIP-chip, motif-based binding-site prediction, and shRNA assays, showing high reproducibility, binding-site enrichment in predicted target regions, and functional regulation of predicted targets. Conclusions Given appropriate controls, a direct nonparametric method for identifying transcription-factor targets from ChIP-Seq assays may lead to both higher sensitivity and higher specificity, and should be preferred or used in conjunction with methods that use parametric models for null binding. PMID:26040656

  6. Improvement of Service Searching Algorithm in the JVO Portal Site

    NASA Astrophysics Data System (ADS)

    Eguchi, S.; Shirasak, Y.; Komiya, Y.; Ohishi, M.; Mizumoto, Y.; Ishihara, Y.; Tsutsumi, J.; Hiyama, T.; Nakamoto, H.; Sakamoto, M.

    2012-09-01

    The Virtual Observatory (VO) consists of a huge number of astronomical databases which contain both theoretical and observational data obtained with various methods, telescopes, and instruments. Since the VO provides raw and processed observational data, astronomers can concentrate on their scientific interests without detailed knowledge of the instruments; all they have to know is which service provides the data of interest. On the other hand, services on the VO system would be better used if queries could be made by telescope, wavelength, and object type; currently it is difficult for newcomers to find the desired ones. We have recently started a project to improve the data service functionality and usability of the Japanese VO (JVO) portal site. We are now working on implementing a function to automatically classify all services on the VO in terms of telescopes and instruments without referring to the facility and instrument keywords, which are often left unfilled. In this paper, we report a new algorithm for constructing the facility and instrument keywords from other information in a service description, and discuss its effectiveness. We also propose a new user interface for the portal site incorporating this algorithm.

  7. Improve online boosting algorithm from self-learning cascade classifier

    NASA Astrophysics Data System (ADS)

    Luo, Dapeng; Sang, Nong; Huang, Rui; Tong, Xiaojun

    2010-04-01

    Online boosting algorithms have been used in many vision-related applications, such as object detection. However, in order to obtain good detection results, a large number of weak classifiers must be combined into a strong classifier, and those weak classifiers must be updated and improved online, so the training and detection speed is inevitably reduced. This paper proposes a novel online-boosting-based learning method, called the self-learning cascade classifier. A cascade decision strategy is integrated with the online boosting procedure. The resulting system contains a sufficient number of weak classifiers while keeping the computation cost low. The cascade structure is learned and updated online, and its complexity can be increased adaptively when the detection task is more difficult. Moreover, most new samples are labeled automatically by tracking, which can greatly reduce the labeling effort. We present experimental results that demonstrate the efficiency and high detection rate of the method.

  8. A morphological algorithm for improving radio-frequency interference detection

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; van de Gronde, J. J.; Roerdink, J. B. T. M.

    2012-03-01

    A technique is described that is used to improve the detection of radio-frequency interference (RFI) in astronomical radio observatories. It is applied to a two-dimensional interference mask after regular detection in the time-frequency domain with existing techniques. The scale-invariant rank (SIR) operator is defined, a one-dimensional mathematical morphology technique that can be used to find adjacent intervals in the time or frequency domain that are likely to be affected by RFI. The technique might also be applicable in other areas in which morphological scale-invariant behaviour is desired, such as source detection. A new algorithm is described that is shown to perform well, has linear time complexity, and is fast enough to be applied in modern high-resolution observatories. It is used in the default pipeline of the LOFAR observatory.
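
    The SIR operator admits a simple linear-time prefix-sum implementation: a sample is flagged if it lies in any interval whose flagged fraction is at least (1 - eta). A sketch following that construction (the mask and eta value are illustrative):

```python
# Hedged sketch of the scale-invariant rank (SIR) operator: flag sample i
# if it lies in ANY interval [a, b) whose flagged fraction is >= (1-eta).
# With psi_j = flag_j - (1 - eta) and prefix sums P, the condition becomes
# max_{b > i} P[b] >= min_{a <= i} P[a], testable in O(N) overall.
import numpy as np

def sir_operator(flags, eta=0.2):
    flags = np.asarray(flags, dtype=float)
    psi = flags - (1.0 - eta)
    P = np.concatenate(([0.0], np.cumsum(psi)))      # P[k] = sum(psi[:k])
    min_prefix = np.minimum.accumulate(P[:-1])       # min P[a], a <= i
    max_suffix = np.maximum.accumulate(P[1:][::-1])[::-1]  # max P[b], b > i
    return max_suffix - min_prefix >= 0.0

mask = np.array([0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0])
print(sir_operator(mask, eta=0.2).astype(int))
# On 2-D RFI masks the operator is applied along the time and frequency
# axes separately and the two results are OR-ed together.
```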

  9. An Efficient and Configurable Preprocessing Algorithm to Improve Stability Analysis.

    PubMed

    Sesia, Ilaria; Cantoni, Elena; Cernigliaro, Alice; Signorile, Giovanna; Fantino, Gianluca; Tavella, Patrizia

    2016-04-01

    The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method is properly detecting and removing anomalous behaviors, hence making the subsequent stability analysis more reliable. PMID:26540679
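
    A minimal sketch of a preprocessing-plus-stability pipeline in this spirit (not the authors' MATLAB code): a MAD-based outlier filter applied to the clock's frequency increments, followed by an overlapping Allan deviation that skips the masked samples:

```python
# Hedged sketch: MAD outlier filtering on phase increments, then an
# overlapping Allan deviation that ignores masked (NaN) samples. The
# synthetic clock data and thresholds are illustrative only.
import numpy as np

def flag_outliers(y, k=5.0):
    """MAD-based outlier flags on frequency (first-difference) data."""
    med = np.nanmedian(y)
    mad = 1.4826 * np.nanmedian(np.abs(y - med))
    return np.abs(y - med) > k * mad

def allan_deviation(x, tau0, m):
    """Overlapping Allan deviation from (gappy) phase data x at tau = m*tau0."""
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences
    d2 = d2[~np.isnan(d2)]                        # NaNs mark removed samples
    return np.sqrt(np.sum(d2**2) / (2.0 * (m * tau0)**2 * d2.size))

rng = np.random.default_rng(0)
phase = np.cumsum(rng.normal(0.0, 1e-9, 100_000))  # synthetic clock phase
phase[1234] += 5e-7                                # inject an outlier

bad = flag_outliers(np.diff(phase))                # detect on increments
clean = phase.copy()
clean[1:][bad] = np.nan                            # mask anomalous points
for m in (1, 10, 100):
    print(f"tau={m:>3}s  ADEV={allan_deviation(clean, 1.0, m):.3e}")
```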

  10. Protein-fold recognition using an improved single-source K diverse shortest paths algorithm.

    PubMed

    Lhota, John; Xie, Lei

    2016-04-01

    Protein structure prediction, when construed as a fold recognition problem, is one of the most important applications of similarity search in bioinformatics. A new protein-fold recognition method is reported which combines a single-source K diverse shortest path (SSKDSP) algorithm with Enrichment of Network Topological Similarity (ENTS) algorithm to search a graphic feature space generated using sequence similarity and structural similarity metrics. A modified, more efficient SSKDSP algorithm is developed to improve the performance of graph searching. The new implementation of the SSKDSP algorithm empirically requires 82% less memory and 61% less time than the current implementation, allowing for the analysis of larger, denser graphs. Furthermore, the statistical significance of fold ranking generated from SSKDSP is assessed using ENTS. The reported ENTS-SSKDSP algorithm outperforms original ENTS that uses random walk with restart for the graph search as well as other state-of-the-art protein structure prediction algorithms HHSearch and Sparks-X, as evaluated by a benchmark of 600 query proteins. The reported methods may easily be extended to other similarity search problems in bioinformatics and chemoinformatics. The SSKDSP software is available at http://compsci.hunter.cuny.edu/~leixie/sskdsp.html. Proteins 2016; 84:467-472. © 2016 Wiley Periodicals, Inc. PMID:26800480

  11. An improved formalism for quantum computation based on geometric algebra—case study: Grover's search algorithm

    NASA Astrophysics Data System (ADS)

    Chappell, James M.; Iqbal, Azhar; Lohe, M. A.; von Smekal, Lorenz; Abbott, Derek

    2013-04-01

    The Grover search algorithm is one of the two key algorithms in the field of quantum computing, and hence it is desirable to represent it in the simplest and most intuitive formalism possible. We show firstly, that Clifford's geometric algebra, provides a significantly simpler representation than the conventional bra-ket notation, and secondly, that the basis defined by the states of maximum and minimum weight in the Grover search space, allows a simple visualization of the Grover search analogous to the precession of a spin-1/2 particle. Using this formalism we efficiently solve the exact search problem, as well as easily representing more general search situations. We do not claim the development of an improved algorithm, but show in a tutorial paper that geometric algebra provides extremely compact and elegant expressions with improved clarity for the Grover search algorithm. Being a key algorithm in quantum computing and one of the most studied, it forms an ideal basis for a tutorial on how to elucidate quantum operations in terms of geometric algebra—this is then of interest in extending the applicability of geometric algebra to more complicated problems in fields of quantum computing, quantum decision theory, and quantum information.
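
    For orientation, a compact bra-ket statement of the Grover iteration and its optimal repetition count (standard results included for reference; the paper's contribution is re-expressing these objects in geometric algebra):

```latex
% Standard bra-ket form of the Grover iteration (reference only; the
% paper re-expresses these objects in Clifford's geometric algebra).
\[
  G = \bigl(2\lvert\psi\rangle\langle\psi\rvert - I\bigr)
      \bigl(I - 2\lvert t\rangle\langle t\rvert\bigr),
  \qquad
  \lvert\psi\rangle = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}\lvert x\rangle,
  \qquad
  k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{N}.
\]
% Each application of G rotates the state by 2*theta, with sin(theta) =
% 1/sqrt(N), in the plane spanned by |t> and |psi>; this rotation is the
% precession-like picture the geometric-algebra basis makes explicit.
```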

  12. [An optimal predicting method based on improved genetic algorithm embedded in neural network and its application to peritoneal dialysis].

    PubMed

    Zhang, Mei; Hu, Yueming; Wang, Tao; Zhu, Jinhui

    2009-12-01

    This paper addresses the prediction problem of the peritoneal fluid absorption rate (PFAR). An innovative prediction model was developed, which embeds an improved genetic algorithm in a neural network to predict this important index in the peritoneal dialysis treatment of renal failure. The significance of PFAR and the complexity of the dialysis process are analyzed. The improved genetic algorithm is used to define the initial weights and biases of the neural network, and the neural network is then used to find the optimal prediction model of PFAR. This method fully exploits the global search capability of the genetic algorithm and the local search advantage of the neural network. To demonstrate the validity of the model, the improved optimal prediction model is compared with the standard hybrid of genetic algorithm and neural network. The simulation results show that the prediction accuracy of the improved optimal neural network is greatly improved and the learning process takes less time. PMID:20095466
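
    A minimal sketch of the hybrid scheme this record describes, on synthetic stand-in data: a genetic algorithm searches globally for good initial weights of a small network, and local refinement then starts from the GA's best candidate. Network size, GA settings, and data are illustrative assumptions, not the clinical PFAR model:

```python
# Hedged sketch: GA finds initial weights of a tiny network, then simple
# finite-difference gradient descent refines them. All settings made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 4))                 # stand-in dialysis features
y = np.sin(X.sum(axis=1))                        # stand-in PFAR target

H = 8                                            # hidden units
n_w = 4 * H + H + H + 1                          # weights plus biases

def unpack(w):
    i = 0
    W1 = w[i:i + 4 * H].reshape(4, H); i += 4 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H]; i += H
    return W1, b1, W2, w[i]

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

# Global phase: elitist GA over flattened weight vectors.
pop = rng.normal(0, 1, (40, n_w))
for gen in range(100):
    fit = np.array([mse(w) for w in pop])
    elite = pop[np.argsort(fit)[:10]]            # keep the 10 best
    parents = elite[rng.integers(0, 10, (30, 2))]
    alpha = rng.uniform(size=(30, 1))            # blend crossover
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0, 0.1, children.shape)   # mutation
    pop = np.vstack([elite, children])

w = pop[np.argmin([mse(v) for v in pop])].copy() # GA-chosen initial weights
# Local phase: gradient descent from the GA starting point.
for _ in range(200):
    g = np.array([(mse(w + 1e-4 * e) - mse(w - 1e-4 * e)) / 2e-4
                  for e in np.eye(n_w)])
    w -= 0.05 * g
print("final MSE:", mse(w))
```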

  13. An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning

    PubMed Central

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-01-01

    The major problem of Wi-Fi fingerprint-based positioning technology is the creation and maintenance of the signal-strength fingerprint database. The significant temporal variation of received signal strength (RSS) is the main factor responsible for positioning error. A probabilistic approach can be used, but the RSS distribution is then required. A Gaussian distribution or an empirically derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide whether this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve positioning accuracy, as well as reduce the workload of the off-line data-training phase. PMID:23966197
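
    A minimal sketch of the decision logic described above: a kurtosis test on the RSS samples decides whether a single Gaussian suffices or a double-peak (two-component) model should be fitted. The significance threshold and data are illustrative:

```python
# Hedged sketch: kurtosis test selects between a single Gaussian and a
# two-component (double-peak) Gaussian model for RSS at a reference point.
import numpy as np
from scipy.stats import kurtosistest
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic RSS readings (dBm) with two peaks, e.g. from multipath.
rss = np.concatenate([rng.normal(-62, 2, 300), rng.normal(-71, 2, 200)])

stat, pval = kurtosistest(rss)
if pval < 0.05:                      # kurtosis inconsistent with a Gaussian
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(rss.reshape(-1, 1))
    print("double-peak model:",
          gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel(),
          gmm.weights_)
else:
    print("single Gaussian:", rss.mean(), rss.std())
```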

  14. Improvement of algorithms for digital real-time n-γ discrimination

    NASA Astrophysics Data System (ADS)

    Wang, Song; Xu, Peng; Lu, Chang-Bing; Huo, Yong-Gang; Zhang, Jun-Jie

    2016-02-01

    Three algorithms (the Charge Comparison Method, n-γ Model Analysis and the Centroid Algorithm) have been revised to improve their accuracy and broaden the scope of applications to real-time digital n-γ discrimination. To evaluate the feasibility of the revised algorithms, a comparison between the improved and original versions of each is presented. To select an optimal real-time discrimination algorithm from these six algorithms (improved and original), the figure-of-merit (FOM), Peak-Threshold Ratio (PTR), Error Probability (EP) and Simulation Time (ST) for each were calculated to obtain a quantitatively comprehensive assessment of their performance. The results demonstrate that the improved algorithms have a higher accuracy, with an average improvement of 10% in FOM, 95% in PTR and 25% in EP, but all the STs are increased. Finally, the Adjustable Centroid Algorithm (ACA) is selected as the optimal algorithm for real-time digital n-γ discrimination.
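
    For reference, a minimal sketch of the classical Charge Comparison Method and the figure-of-merit used above to rank discrimination algorithms, FOM = separation / (FWHM_n + FWHM_gamma); the two-exponential pulses are synthetic stand-ins for detector waveforms:

```python
# Hedged sketch: charge comparison (tail/total ratio) on synthetic pulses
# and the resulting FOM. Pulse shapes and noise levels are made up.
import numpy as np

def charge_ratio(pulse, t_split=20):
    """Tail charge over total charge; gammas decay faster than neutrons."""
    return pulse[t_split:].sum() / pulse.sum()

def make_pulse(tail_frac, rng, n=200):
    t = np.arange(n, dtype=float)
    fast, slow = np.exp(-t / 5.0), np.exp(-t / 60.0)
    return (1 - tail_frac) * fast + tail_frac * slow + rng.normal(0, 5e-4, n)

rng = np.random.default_rng(0)
r_gamma = [charge_ratio(make_pulse(0.02, rng)) for _ in range(2000)]
r_neutr = [charge_ratio(make_pulse(0.10, rng)) for _ in range(2000)]

fwhm = lambda v: 2.3548 * np.std(v)      # Gaussian-equivalent FWHM
fom = abs(np.mean(r_neutr) - np.mean(r_gamma)) / (fwhm(r_neutr) + fwhm(r_gamma))
print(f"FOM = {fom:.2f}")                # higher = better n-gamma separation
```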

  15. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On the one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
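
    A minimal sketch of an extreme learning machine regressor of the kind hybridized with the GA in this record: random hidden-layer weights with output weights solved in closed form by least squares. The buoy features and Hs target are synthetic stand-ins:

```python
# Hedged sketch of an extreme learning machine (ELM) for Hs regression;
# the data are synthetic stand-ins for neighboring-buoy wave parameters.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random feature map
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (500, 6))                    # selected buoy parameters
hs = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=500)

model = ELM().fit(X[:400], hs[:400])
rmse = np.sqrt(np.mean((model.predict(X[400:]) - hs[400:]) ** 2))
print(f"Hs reconstruction RMSE: {rmse:.3f} m")
# In the hybrid scheme, a GA searches over subsets of input parameters,
# using an error like this RMSE as the fitness to minimize.
```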

  16. Further development of an improved altimeter wind speed algorithm

    NASA Technical Reports Server (NTRS)

    Chelton, Dudley B.; Wentz, Frank J.

    1986-01-01

    A previous altimeter wind speed retrieval algorithm was developed on the basis of wind speeds in the limited range from about 4 to 14 m/s. In this paper, a new approach which gives a wind speed model function applicable over the range 0 to 21 m/s is used. The method is based on comparing 50 km along-track averages of the altimeter normalized radar cross section measurements with neighboring off-nadir scatterometer wind speed measurements. The scatterometer winds are constructed from 100 km binned measurements of radar cross section and are located approximately 200 km from the satellite subtrack. The new model function agrees very well with earlier versions up to wind speeds of 14 m/s, but differs significantly at higher wind speeds. The relevance of these results to the Geosat altimeter launched in March 1985 is discussed.

  17. Improvements and Extensions for Joint Polar Satellite System Algorithms

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Feeley, J. H.; Miller, S. W.; Jamilkowski, M. L.

    2014-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS replaced the afternoon orbit component and ground processing system of the old POES system managed by the NOAA. JPSS satellites will carry sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for the JPSS is the Common Ground System (CGS), and provides command, control, and communications (C3), data processing and product delivery. CGS's data processing capability processes the data from the JPSS satellites to provide environmental data products (including Sensor Data Records (SDRs) and Environmental Data Records (EDRs)) to the NOAA Satellite Operations Facility. The first satellite in the JPSS constellation, known as the Suomi National Polar-orbiting Partnership (S-NPP) satellite, was launched on 28 October 2011. CGS is currently processing and delivering SDRs and EDRs for S-NPP and will continue through the lifetime of the JPSS program. The EDRs for S-NPP are currently undergoing an extensive Calibration and Validation (Cal/Val) campaign. Changes identified by the Cal/Val campaign are coming available for implementation into the operational system in support of both S-NPP and JPSS-1 (scheduled for launch in 2017). Some of these changes will be available in time to update the S-NPP algorithm baseline, while others will become operational just prior to JPSS-1 launch. In addition, new capabilities, such as higher spectral and spatial resolution, will be exercised on JPSS-1. This paper will describe changes to current algorithms and products as a result of the Cal/Val campaign and related initiatives for improved capabilities. Improvements include Cross Track Infrared Sounder high spectral

  18. Utilization of advanced clutter suppression algorithms for improved standoff detection and identification of radionuclide threats

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Shokhirev, Kirill; Mulhall, Phil; Payne, David; Harris, Bernard

    2014-05-01

    Technology development efforts seek to increase the capability of detection systems in low Signal-to-Noise regimes encountered in both portal and urban detection applications. We have recently demonstrated significant performance enhancement in existing Advanced Spectroscopic Portals (ASP), Standoff Radiation Detection Systems (SORDS) and handheld isotope identifiers through the use of new advanced detection and identification algorithms. The Poisson Clutter Split (PCS) algorithm is a novel approach for radiological background estimation that improves the detection and discrimination capability of medium resolution detectors. The algorithm processes energy spectra and performs clutter suppression, yielding de-noised gamma-ray spectra that enable significant enhancements in detection and identification of low activity threats with spectral target recognition algorithms. The performance is achievable at the short integration times (0.5 - 1 second) necessary for operation in a high throughput and dynamic environment. PCS has been integrated with ASP, SORDS and RIID units and evaluated in field trials. We present a quantitative analysis of algorithm performance against data collected by a range of systems in several cluttered environments (urban and containerized) with embedded check sources. We show that the algorithm achieves a high probability of detection/identification with low false alarm rates under low SNR regimes. For example, utilizing only 4 out of 12 NaI detectors currently available within an ASP unit, PCS processing demonstrated Pd,ID > 90% at a CFAR (Constant False Alarm Rate) of 1 in 1000 occupancies against weak activity (7 - 8μCi) and shielded sources traveling through the portal at 30 mph. This vehicle speed is a factor of 6 higher than was previously possible and results in significant increase in system throughput and overall performance.

  19. A clustering routing algorithm based on improved ant colony clustering for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaoli; Li, Yang

    Because node distribution in real wireless sensor networks is non-uniform, this paper presents a clustering strategy based on an ant colony clustering algorithm (ACC-C). To reduce the energy consumption of the cluster heads near the base station and of the whole network, the algorithm applies ant colony clustering to form non-uniform clusters. A route-optimality degree is introduced to evaluate the performance of the chosen route. Simulation results show that, compared with other algorithms such as the LEACH algorithm and the improved particle swarm clustering algorithm (PSC-C), the proposed approach is able to keep away from nodes with less residual energy, which can improve the lifetime of the network.

  20. Overlay improvements using a real time machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank

    2014-04-01

    While semiconductor manufacturing is moving towards the 14nm node using immersion lithography, the overlay requirements are tightened to below 5nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance in time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.

  1. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties of a fiber optic ring network. The method differs from conventional methods in that end-to-end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit-stream cipher method. The transmitting station combines the intended confidential message with an initialization vector using a simple modulo-2 addition (XOR) operation. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military for LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support real-time communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
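
    A minimal sketch of the core mechanism, modulo-2 addition (XOR) against a keystream derived from a per-transmission initialization vector; the SHA-256 counter-mode keystream below is an assumed stand-in, since the paper's FDDI-specific vector generation is not reproduced here:

```python
# Hedged sketch: XOR stream cipher keyed by a per-transmission IV. The
# keystream derivation (SHA-256 in counter mode) is a stand-in choice.
import hashlib, os

def keystream(iv: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

iv = os.urandom(16)                       # unique per confidential frame
msg = b"confidential FDDI frame payload"
ct = xor_bytes(msg, keystream(iv, len(msg)))
assert xor_bytes(ct, keystream(iv, len(msg))) == msg  # modulo-2 addition
                                                      # is its own inverse
```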

  2. Improvement of unsupervised texture classification based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Togami, Yuuki; Arai, Kohei

    2004-11-01

    At a previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In the method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) the number of classification categories is determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training-area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the GA selection operation, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method has not been automated, because it requires not only the target image but also the number of categories for classification. In this paper, we describe some improvements towards automated texture classification. Some experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and provide reasonable classification results.

  3. Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching

    NASA Astrophysics Data System (ADS)

    Lipka, Paula Ann

    This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow solution, and allows for solving transmission switching problems in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem are often tested on the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. The transmission networks in practice often have transmission lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power, as is the case in the Polish test systems, to limits on current. The main consideration in setting line limits is temperature, which relates linearly to current. Setting limits on real or apparent power is actually a proxy for using the limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the Alternating Current Optimal Power Flow (ACOPF) problem is used to find an AC-feasible generator dispatch. In this sequential linearization, there are parameters that are set to the previous optimal solution. Additionally, to improve the accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted. The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction

  4. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  5. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  6. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce the signal distortion caused by pseudo-Gibbs artificial fluctuations. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant and traditional wavelet transform algorithms. The improved wavelet transform method yielded significantly enhanced performance in the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma-ray energy spectrum can be viewed as the superposition of a low-frequency signal and high-frequency noise. Moreover, the smoothed spectrum is appropriate for straightforward automated quantitative analysis.
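
    A minimal sketch of shift-invariant (cycle-spinning) wavelet de-noising with PyWavelets: thresholding is applied at several circular shifts and the unshifted reconstructions are averaged, which is what suppresses pseudo-Gibbs oscillations near spectral peaks. The wavelet, threshold rule, and synthetic spectrum are illustrative choices:

```python
# Hedged sketch: cycle-spinning soft-threshold wavelet de-noising of a
# spectrum-like signal. Wavelet and threshold rule are illustrative.
import numpy as np
import pywt

def denoise_once(sig, wavelet="sym8", level=5):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    thr = sigma * np.sqrt(2 * np.log(sig.size))         # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:sig.size]

def denoise_shift_invariant(sig, n_shifts=16):
    acc = np.zeros_like(sig)
    for s in range(n_shifts):                 # threshold at each shift,
        acc += np.roll(denoise_once(np.roll(sig, s)), -s)  # then unshift
    return acc / n_shifts

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024)
spectrum = np.exp(-0.5 * ((x - 0.3) / 0.01) ** 2) + 0.3 * np.exp(-3 * x)
noisy = spectrum + rng.normal(0, 0.05, x.size)
clean = denoise_shift_invariant(noisy)
```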

  7. Improving warm rain estimation in the PERSIANN-CCS satellite-based retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.

    2015-12-01

    The Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) is one of the algorithms being integrated into IMERG (Integrated Multi-satellite Retrievals for GPM, the Global Precipitation Measurement mission) to estimate precipitation at 0.04° lat-long scale every 30 minutes. PERSIANN-CCS extracts features from infrared cloud-image segmentation at three brightness temperature thresholds (220K, 235K, and 253K). Warm raining clouds with brightness temperatures higher than 253K are not covered by the current algorithm. To improve detection of warm rain, in this study the cloud-image segmentation threshold is extended from 253K to 300K to cover warmer clouds. Several other temperature thresholds between 253K and 300K were also examined. A K-means clustering algorithm was used to classify the extracted image features into 400 groups. Rainfall rates for each cluster were retrained using radar rainfall measurements. Case studies were carried out over CONUS to investigate the ability to improve detection of warm rainfall from segmentation and image classification using warmer temperature thresholds. Satellite imagery and radar rainfall data from both the summer and winter seasons of 2012 were used as training data. Overall, the results show that rain detection from warm clouds is significantly improved. However, they also show that false rain detection increases as the segmentation temperature is raised.

  8. Some Improvements on Signed Window Algorithms for Scalar Multiplications in Elliptic Curve Cryptosystems

    NASA Technical Reports Server (NTRS)

    Vo, San C.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Scalar multiplication is an essential operation in elliptic curve cryptosystems because its implementation determines the speed and the memory storage requirements. This paper discusses some improvements on two popular signed window algorithms for implementing scalar multiplications of an elliptic curve point - Morain-Olivos's algorithm and Koyama-Tsuruoka's algorithm.
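
    For background, a minimal sketch of the non-adjacent form (NAF), the basic signed-digit recoding that signed window methods such as those named above generalize; fewer nonzero digits means fewer point additions in the scalar multiplication:

```python
# Hedged sketch of NAF recoding: digits in {-1, 0, 1} with no two
# adjacent nonzeros. Width-w generalizations are what signed window
# algorithms refine; this shows only the basic recoding.
def naf(k: int):
    digits = []
    while k > 0:
        if k & 1:
            z = 2 - (k % 4)       # z in {-1, +1}, so (k - z) % 4 == 0
            k -= z
        else:
            z = 0
        digits.append(z)
        k //= 2
    return digits                  # least significant digit first

print(naf(55))                     # [-1, 0, 0, -1, 0, 0, 1] = 64 - 8 - 1
assert sum(d << i for i, d in enumerate(naf(55))) == 55
# Scalar multiplication processes digits MSB-first: double each step,
# add P on a +1 digit and subtract P on a -1 digit.
```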

  9. A biomimetic algorithm for the improved detection of microarray features

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.

    2007-02-01

    One the major difficulties of microarray technology relate to the processing of large and - importantly - error-loaded images of the dots on the chip surface. Whatever the source of these errors, those obtained in the first stage of data acquisition - segmentation - are passed down to the subsequent processes, with deleterious results. As it has been demonstrated recently that biological systems have evolved algorithms that are mathematically efficient, this contribution attempts to test an algorithm that mimics a bacterial-"patented" algorithm for the search of available space and nutrients to find, "zero-in" and eventually delimitate the features existent on the microarray surface.

  10. An improved watershed image segmentation algorithm combining with a new entropy evaluation criterion

    NASA Astrophysics Data System (ADS)

    Deng, Tingquan; Li, Yanchao

    2013-03-01

    An improved watershed image segmentation algorithm is proposed to solve the problem of over-segmentation by the classical watershed algorithm. The new algorithm combines region growing with the classical watershed algorithm. The key to region growing lies in choosing a growing threshold that yields the desired segmentation result. An entropy evaluation criterion is constructed to determine the optimal threshold. Taking the entropy evaluation criterion as an objective function, the particle swarm optimization algorithm is employed to search for the global optimum of the objective function. Experimental results show that the new algorithm solves the problem of over-segmentation effectively.
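    A minimal sketch of marker-controlled watershed segmentation, a common way to curb over-segmentation; the paper's region-growing step with an entropy-based, PSO-selected threshold is not reproduced here, and the 0.7 seed threshold is an illustrative assumption.

```python
# Marker-controlled watershed: seeding from distance-transform peaks reduces
# the spurious basins produced by the classical (unseeded) watershed.
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment(binary_image):
    distance = ndi.distance_transform_edt(binary_image)
    markers, _ = ndi.label(distance > 0.7 * distance.max())  # seed regions
    # Flood from the markers over the inverted distance map.
    return watershed(-distance, markers, mask=binary_image)
```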

  11. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, an orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874

  12. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. To improve the optimization capability of the basic cuckoo search (CS) algorithm, an orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874

  13. Mapping Soil Properties of Africa at 250 m Resolution: Random Forests Significantly Improve Current Predictions

    PubMed Central

    Hengl, Tomislav; Heuvelink, Gerard B. M.; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Shepherd, Keith D.; Sila, Andrew; MacMillan, Robert A.; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E.

    2015-01-01

    80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008–2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to agricultural management—organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15–75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological
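    A minimal sketch of the core comparison, 5-fold cross-validated RMSE for random forests versus linear regression, on a synthetic nonlinear dataset standing in for the soil point data and covariates.

```python
# Compare 5-fold cross-validated RMSE of random forests vs linear regression.
# make_friedman1 is a nonlinear toy problem; real soil data would replace it.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_friedman1(n_samples=500, n_features=10, noise=1.0, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    mse = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")
    print(f"{name}: RMSE = {np.sqrt(mse).mean():.2f}")
```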

  14. Mapping Soil Properties of Africa at 250 m Resolution: Random Forests Significantly Improve Current Predictions.

    PubMed

    Hengl, Tomislav; Heuvelink, Gerard B M; Kempen, Bas; Leenaars, Johan G B; Walsh, Markus G; Shepherd, Keith D; Sila, Andrew; MacMillan, Robert A; Mendes de Jesus, Jorge; Tamene, Lulseged; Tondoh, Jérôme E

    2015-01-01

    80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008-2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to agricultural management--organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15-75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring pedological

  15. Improved Clonal Selection Algorithm Combined with Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Wang, Wei; Dai, Hongwei; Li, Fangjia; Tang, Zheng

    Both the clonal selection algorithm (CSA) and ant colony optimization (ACO) are inspired by natural phenomena and are effective tools for solving complex problems. CSA can explore and exploit the solution space in parallel and effectively; however, it cannot make sufficient use of environmental feedback information and thus performs a large amount of redundant, repeated search. ACO, on the other hand, is based on indirect cooperative foraging via secreted pheromones. Its positive feedback is a strength, but its convergence is slow because initial pheromone levels are low. In this paper, we propose a pheromone-linker to combine these two algorithms. The proposed hybrid clonal selection and ant colony optimization (CSA-ACO) exploits the strengths of both algorithms while overcoming their inherent disadvantages. Simulation results on traveling salesman problems demonstrate the merit of the proposed algorithm over some traditional techniques.

  16. An Improved Recovery Algorithm for Decayed AES Key Schedule Images

    NASA Astrophysics Data System (ADS)

    Tsow, Alex

    A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [1] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.

  17. Performance of recovery time improvement algorithms for software RAIDs

    SciTech Connect

    Riegel, J.; Menon, Jai

    1996-12-31

    A software RAID is a RAID implemented purely in software running on a host computer. One problem with software RAIDs is that they do not have access to special hardware such as NVRAM. Thus, software RAIDs may need to check every parity group of an array for consistency following a host crash or power failure. This process of checking parity groups is called recovery, and it results in long delays when the software RAID is restarted. In this paper, we review two algorithms to reduce this recovery time for software RAIDs: the PGS Bitmap algorithm, which we proposed previously, and the List Algorithm, proposed in earlier work. We compare the performance of these two algorithms using trace-driven simulations. Our results show that the PGS Bitmap Algorithm can reduce recovery time by a factor of 12 with a response time penalty of less than 1%, or by a factor of 50 with a response time penalty of less than 2% and a memory requirement of around 9 Kbytes. The List Algorithm can reduce recovery time by a factor of 50 but cannot achieve a response time penalty of less than 16%.
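    A minimal sketch of the general dirty-bitmap idea behind such algorithms: mark a parity group before writing to it and clear the mark once parity is consistent again, so post-crash recovery checks only the marked groups. This illustrates the concept only; the PGS Bitmap algorithm's actual bookkeeping is more elaborate.

```python
# Dirty-parity-group bitmap: one bit per parity group of the array.
class DirtyGroupBitmap:
    def __init__(self, n_groups):
        self.bits = bytearray((n_groups + 7) // 8)

    def mark(self, g):            # set before issuing a write to group g
        self.bits[g // 8] |= 1 << (g % 8)

    def clear(self, g):           # clear once data and parity are both on disk
        self.bits[g // 8] &= ~(1 << (g % 8)) & 0xFF

    def groups_to_recover(self):  # consulted after a crash instead of a full scan
        return [g for g in range(len(self.bits) * 8)
                if self.bits[g // 8] >> (g % 8) & 1]
```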

  18. An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Song; Nakamura, Satoshi

    An efficient way to develop large-scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected from a large mother text corpus by a greedy search algorithm. As more and more phonetic contextual effects are included, the number of distinct phonetic contextual units increases dramatically, making the search nontrivial. To improve the search efficiency, we previously proposed a least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluates these algorithms to show their different characteristics. The experimental results show that the least-to-most-ordered methods achieve smaller objective sets in significantly less computation time than the conventional ones. The algorithm has already been applied to the development of a number of speech corpora, including ATRPTH, a large-scale phonetically rich Chinese speech corpus that played an important role in developing our multi-language translation system.
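    A minimal sketch of the underlying greedy selection (a set-cover heuristic): repeatedly pick the sentence covering the most not-yet-covered phonetic units. The least-to-most ordering discussed above is an efficiency refinement not reproduced here.

```python
# Greedy minimum-set selection for phonetic unit coverage.
def greedy_minimum_set(sentences):
    """sentences: dict mapping sentence id -> set of phonetic units it contains."""
    uncovered = set().union(*sentences.values())
    chosen = []
    while uncovered:
        best = max(sentences, key=lambda s: len(sentences[s] & uncovered))
        gain = sentences[best] & uncovered
        if not gain:
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

corpus = {"s1": {"a", "b", "c"}, "s2": {"b", "d"}, "s3": {"d", "e"}}
print(greedy_minimum_set(corpus))  # ['s1', 's3'] covers all five units
```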

  19. Improved LMD algorithm based on extraction of extrema of envelope curve

    NASA Astrophysics Data System (ADS)

    Song, Yuqian; Zhao, Jun; Guo, Tiantai; Kong, Ming; Wang, Yingjun; Shan, Liang

    2015-02-01

    Local mean decomposition (LMD) is a time-frequency analysis approach for complex multi-frequency signals. However, because the decomposition process is sensitive to noise, its application to vibration signals from machinery with serious background noise is distinctly limited. An improved LMD algorithm based on extracting the extrema of the envelope curve is put forward to reduce the influence of high-frequency noise effectively. To verify its effect, three other de-noising methods, i.e., the band-pass filter method, the wavelet method and the lifting wavelet method, are used for comparison, and the comparison of the four methods shows that the proposed method has satisfactory reproducibility. The new algorithm is then applied to a real bearing signal, and the experimental results show that it is effective and reliable. The method is also relevant to subsequent feature-vector research in intelligent fault diagnosis.

  20. A Genetic Algorithm for Learning Significant Phrase Patterns in Radiology Reports

    SciTech Connect

    Patton, Robert M; Potok, Thomas E; Beckerman, Barbara G; Treadwell, Jim N

    2009-01-01

    Radiologists disagree with each other over the characteristics and features of what constitutes a normal mammogram and the terminology to use in the associated radiology report. Recently, the focus has been on classifying abnormal or suspicious reports, but even this process needs further layers of clustering and gradation, so that individual lesions can be more effectively classified. Using a genetic algorithm, the approach described here successfully learns phrase patterns for two distinct classes of radiology reports (normal and abnormal). These patterns can then be used as a basis for automatically analyzing, categorizing, clustering, or retrieving relevant radiology reports for the user.

  1. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternatively, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials equally matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the

  2. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm

    PubMed Central

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly used to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of the classification models and thus reduce efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA the niche distance parameter is set in advance, which prevents it from adapting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve population diversity and avoid local optimal solutions. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10 and the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071

  3. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in

  4. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability have made it widely used in different fields. However, it suffers from slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of an elite solution pool and a block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated on a set of well-known numerical benchmark functions and is then applied to optimize two ship hull forms for minimum resistance. The results show that the proposed improved ABC algorithm outperforms the basic ABC algorithm in most of the tested problems.
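    A minimal sketch of the basic ABC solution search step together with an elite-guided variant of the kind described above; the elite-pool update shown here is an illustrative assumption, not the paper's exact equation.

```python
# Generate a trial solution for bee i by perturbing one dimension.
import numpy as np

rng = np.random.default_rng(1)

def abc_candidate(X, i, elite_pool=None):
    """X: population matrix (n_bees x dim). Returns a trial solution for bee i."""
    n, dim = X.shape
    k = rng.choice([j for j in range(n) if j != i])   # random other bee
    j = rng.integers(dim)                             # one perturbed dimension
    v = X[i].copy()
    if elite_pool is None:
        # basic ABC: v_ij = x_ij + phi * (x_ij - x_kj), phi ~ U(-1, 1)
        v[j] = X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j])
    else:
        # elite-guided variant: search around a randomly chosen elite solution
        e = elite_pool[rng.integers(len(elite_pool))]
        v[j] = e[j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j])
    return v
```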

  5. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGESBeta

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.

  6. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    SciTech Connect

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.

  7. An Improved QRS Wave Group Detection Algorithm and Matlab Implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjun

    This paper presents an algorithm, implemented in Matlab, to detect the QRS complexes of the MIT-BIH ECG database. First, the noise in the ECG is removed with a Butterworth filter; the signal is then analyzed with a wavelet transform to detect parameters based on the singularity principle, achieving more accurate detection of the QRS complexes.
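    A minimal sketch (in Python rather than Matlab) of the pre-filtering step, assuming the MIT-BIH sampling rate of 360 Hz and an illustrative 5-15 Hz passband that emphasizes QRS energy.

```python
# Zero-phase Butterworth band-pass filtering of an ECG trace before QRS detection.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 360.0                                   # MIT-BIH sampling rate (Hz)
b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")

ecg = np.random.randn(3600)                  # placeholder for one 10 s record
filtered = filtfilt(b, a, ecg)               # forward-backward: no phase shift
```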

  8. Crossover Improvement for the Genetic Algorithm in Information Retrieval.

    ERIC Educational Resources Information Center

    Vrajitoru, Dana

    1998-01-01

    In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…

  9. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is only stimulated by the turbulence inputs and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory, at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  10. Improvement and analysis of ID3 algorithm in decision-making tree

    NASA Astrophysics Data System (ADS)

    Xie, Xiao-Lan; Long, Zhen; Liao, Wen-Qi

    2015-12-01

    The cooperative system under development needs spatial analysis and related data-mining technology to detect subject conflict and redundancy, and the ID3 algorithm is an important data-mining method. Because the logarithm part of the traditional ID3 decision-tree algorithm is rather complicated, this paper derives a new computational formula for information gain by optimizing the logarithm part of the algorithm. Experimental comparison and theoretical analysis show that the IID3 (Improved ID3) algorithm achieves higher computational efficiency and accuracy and is thus worth popularizing.
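    For reference, a minimal sketch of the information gain that ID3 computes at each split; the IID3 optimization replaces this logarithm-heavy computation with a simplified formula that is not reproduced here.

```python
# Information gain of splitting a label set on one attribute (ID3 criterion).
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """rows: list of dicts; gain of splitting `labels` on rows[i][attr]."""
    n = len(labels)
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

rows = [{"wind": "weak"}, {"wind": "strong"}, {"wind": "weak"}, {"wind": "weak"}]
print(information_gain(rows, ["yes", "no", "yes", "yes"], "wind"))  # 0.811...
```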

  11. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  12. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-01

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  13. An Effective Intrusion Detection Algorithm Based on Improved Semi-supervised Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Li, Xueyong; Zhang, Baojian; Sun, Jiaxia; Yan, Shitao

    An algorithm for intrusion detection based on improved evolutionary semi-supervised fuzzy clustering is proposed, suited to situations in which labeled data are more difficult to obtain than unlabeled data in intrusion detection systems. The algorithm requires only a small number of labeled data together with a large number of unlabeled data; the class label information provided by the labeled data is used to guide the evolution of each fuzzy partition on the unlabeled data, which plays the role of a chromosome. The algorithm can deal with fuzzy labels, does not easily become trapped in local optima, and is suited to implementation on parallel architectures. Experiments show that the algorithm improves classification accuracy and has high detection efficiency.

  14. An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization.

    PubMed

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees and is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), which adds a Lévy flight algorithm for the queen's mating flight and a neighborhood operator for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416
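    A minimal sketch of a Lévy flight step generated with Mantegna's algorithm, the standard construction in Lévy-flight-based metaheuristics; the exponent and step scale are illustrative assumptions, not the paper's exact settings.

```python
# Heavy-tailed Lévy step via Mantegna's algorithm.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)   # occasional long jumps aid exploration

# e.g. a hypothetical mating-flight move: queen = queen + 0.01 * levy_step(queen.size)
```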

  15. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

    PubMed Central

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees and is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), which adds a Lévy flight algorithm for the queen's mating flight and a neighborhood operator for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416

  16. Obstacle avoidance planning of space manipulator end-effector based on improved ant colony algorithm.

    PubMed

    Zhou, Dongsheng; Wang, Lan; Zhang, Qiang

    2016-01-01

    With the development of aerospace engineering, on-orbit servicing has attracted increasing attention from scholars, as has obstacle avoidance planning for the space manipulator end-effector. The problem is complex due to the existence of obstacles, so avoiding them is essential to improving end-effector planning. In this paper, we propose a simple and effective improved ant colony algorithm to solve this problem. First, the models are established, including the kinematic model of the space manipulator and the expression of a valid path in the space environment. Second, we describe the improved ant colony algorithm in detail; it avoids becoming trapped in local optima, and the search strategy, transfer rules, and pheromone update methods are all adjusted. Finally, the improved ant colony algorithm is compared with the classic ant colony algorithm in experiments. The simulation results verify the correctness and effectiveness of the proposed algorithm. PMID:27186473

  17. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    Mask image dodging based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform but is deficient in local contrast, while the ratio method works better for local contrast but sometimes makes the dark areas of the original image too bright. To remove the defects of the two methods effectively, this paper proposes a balanced solution based on a study of both. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method but also avoids the deficiencies of the two algorithms.

  18. Fast algorithms for improved speech coding and recognition

    NASA Astrophysics Data System (ADS)

    Turner, J. M.; Morf, M.; Stirling, W.; Shynk, J.; Huang, S. S.

    1983-12-01

    This research effort has studied estimation techniques for processes that contain Gaussian noise and jump components, and classification methods for transitional signals by using recursive estimation with vector quantization. The major accomplishments presented are an algorithm for joint estimation of excitation and vocal tract response, a pitch pulse location method using recursive least squares estimation, and a stop consonant recognition method using recursive estimation and vector quantization.

  19. Protein sequence classification with improved extreme learning machine algorithms.

    PubMed

    Cao, Jiuwen; Xiong, Lianglin

    2014-01-01

    Precisely classifying a protein sequence from a large biological protein sequence database plays an important role in developing competitive pharmacological products. Conventional methods, which compare an unseen sequence with all identified protein sequences and return the category index of the protein with the highest similarity score, are usually time-consuming. It is therefore urgent and necessary to build an efficient protein sequence classification system. In this paper, we study the performance of protein sequence classification using single-hidden-layer feedforward networks (SLFNs). The recent efficient extreme learning machine (ELM) and its variants are used as the training algorithms. The optimally pruned ELM (OP-ELM) is first employed for protein sequence classification in this paper. To further enhance the performance, an ensemble-based SLFN structure is constructed in which multiple SLFNs with the same number of hidden nodes and the same activation function are used as ensemble members. For each ensemble, the same training algorithm is adopted, and the final category index is derived by majority voting. Two approaches, the basic ELM and the OP-ELM, are adopted for the ensemble-based SLFNs. The performance is analyzed and compared with several existing methods using datasets obtained from the Protein Information Resource center. The experimental results show the superiority of the proposed algorithms. PMID:24795876
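    A minimal sketch of a basic ELM for an SLFN: random hidden-layer weights, a sigmoid activation, and output weights obtained analytically via the pseudo-inverse. Ensembling, as described above, would train several such networks and majority-vote their predicted classes.

```python
# Basic extreme learning machine for a single-hidden-layer feedforward network.
import numpy as np

class ELM:
    def __init__(self, n_hidden, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid layer

    def fit(self, X, Y):
        # Hidden weights are random and never trained; only beta is solved.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ Y  # Y: one-hot targets
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```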

  20. An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid of the cuckoo search algorithm (CS) and an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping and information exchange between frog individuals with genetic mutation of small probability. Subsequently, to improve convergence speed and enhance exploitation ability, a novel CS model is proposed that takes advantage of the specific strengths of Lévy flights and the frog-leap operator. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible ones. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances; the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, binary differential evolution, and genetic algorithms. PMID:25404940
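    A minimal sketch of a greedy transform of the kind described: drop the worst value-to-weight items until the solution is feasible, then greedily add any items that still fit. The paper's exact repair rule may differ.

```python
# Greedy repair/optimization of a 0-1 knapsack solution vector x.
def greedy_repair(x, values, weights, capacity):
    order = sorted(range(len(x)), key=lambda i: values[i] / weights[i])
    load = sum(w for w, bit in zip(weights, x) if bit)
    for i in order:                    # drop worst value/weight items first
        if load <= capacity:
            break
        if x[i]:
            x[i], load = 0, load - weights[i]
    for i in reversed(order):          # then add best-ratio items that fit
        if not x[i] and load + weights[i] <= capacity:
            x[i], load = 1, load + weights[i]
    return x

print(greedy_repair([1, 1, 1], values=[6, 5, 4], weights=[5, 4, 3], capacity=8))
# -> [0, 1, 1]: weight 7 <= 8, value 9
```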

  1. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search algorithm (CS) with improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving 0-1 knapsack problem. First of all, with the framework of SFLA, an improved frog-leap operator is designed with the effect of the global optimal information on the frog leaping and information exchange between frog individuals combined with genetic mutation with a small probability. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed with considering the specific advantages of Lévy flights and frog-leap operator. Furthermore, the greedy transform method is used to repair the infeasible solution and optimize the feasible solution. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results have shown the effectiveness of the proposed algorithm and its ability to achieve good quality solutions, which outperforms the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940

  2. [An improved fast algorithm for ray casting volume rendering of medical images].

    PubMed

    Tao, Ling; Wang, Huina; Tian, Zhiliang

    2006-10-01

    The ray casting algorithm can obtain better-quality images in volume rendering; however, it demands substantial computing power and renders slowly. Therefore, a new fast ray casting volume rendering algorithm is proposed in this paper. The algorithm reduces matrix computation by exploiting the matrix transformation characteristics of re-sampling points between the two coordinate systems, so the re-sampling computation is accelerated. By extending the Bresenham algorithm to three dimensions and using a bounding box technique, the algorithm avoids sampling empty voxels and greatly improves the efficiency of ray casting. The experimental results show that the improved algorithm produces images of the required quality while remarkably reducing the total number of operations and speeding up volume rendering. PMID:17121341

  3. Improved genetic algorithm for the protein folding problem by use of a Cartesian combination operator.

    PubMed Central

    Rabow, A. A.; Scheraga, H. A.

    1996-01-01

    We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C alpha Cartesian coordinates of the protein chain. The recombination of the genes of the parents is accomplished by: (1) a rigid superposition of one parent chain on the other, to make the relation of Cartesian coordinates meaningful, then, (2) the chains of the children are formed through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator scheme have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than the standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints. PMID:8880904
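    A minimal sketch of the operator's two steps: rigid superposition (here via the Kabsch algorithm) followed by a linear combination of the aligned C-alpha coordinates. Constraint enforcement and the lattice-fitting step described above are omitted.

```python
# Cartesian combination of two parent C-alpha traces (N x 3 arrays).
import numpy as np

def kabsch(P, Q):
    """Rotation matrix R minimizing ||P @ R - Q|| for centered P, Q."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))          # avoid improper reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def combine(parent_a, parent_b, alpha=0.5):
    A = parent_a - parent_a.mean(axis=0)        # center both chains
    B = parent_b - parent_b.mean(axis=0)
    A_aligned = A @ kabsch(A, B)                # superpose A on B
    return alpha * A_aligned + (1 - alpha) * B  # child C-alpha coordinates
```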

  4. An improved fusion algorithm for infrared and visible images based on multi-scale transform

    NASA Astrophysics Data System (ADS)

    Li, He; Liu, Lei; Huang, Wei; Yue, Chao

    2016-01-01

    In this paper, an improved fusion algorithm for infrared and visible images based on multi-scale transforms is proposed. First, the morphological top-hat transform is applied to the infrared and visible images separately. The two images are then decomposed into high-frequency and low-frequency components by the contourlet transform (CT). The fusion strategy for the high-frequency components is based on the mean gradient, and the fusion strategy for the low-frequency components is based on principal component analysis (PCA). Finally, the fused image is obtained using the inverse contourlet transform (ICT). Experiments demonstrate that the proposed method significantly improves image fusion performance, capturing notable target information and high contrast while preserving rich detail.

  5. Effective followership: A standardized algorithm to resolve clinical conflicts and improve teamwork.

    PubMed

    Sculli, Gary L; Fore, Amanda M; Sine, David M; Paull, Douglas E; Tschannen, Dana; Aebersold, Michelle; Seagull, F Jacob; Bagian, James P

    2015-01-01

    In healthcare, the sustained presence of hierarchy between team members has been cited as a common contributor to communication breakdowns. Hierarchy serves to accentuate either actual or perceived chains of command, which may result in team members failing to challenge decisions made by leaders, despite concerns about adverse patient outcomes. While other tools suggest improved communication, none focus specifically on communication skills for team followers, nor do they provide techniques to immediately challenge authority and escalate assertiveness at a given moment in real time. This article presents data that show one such strategy, called the Effective Followership Algorithm, offering statistically significant improvements in team communication across the professional continuum from students and residents to experienced clinicians. PMID:26227290

  6. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method and a complexity-reduction algorithm for the APA with data selection. Experimental results show that the proposed algorithm achieves lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.

  7. Multiangle dynamic light scattering analysis using an improved recursion algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Li, Wei; Wang, Wanyan; Zeng, Xianjiang; Chen, Junyao; Du, Peng; Yang, Kecheng

    2015-10-01

    Multiangle dynamic light scattering (MDLS) compensates for the limited information in a single-angle dynamic light scattering (DLS) measurement by combining the light intensity autocorrelation functions from a number of measurement angles. Reliable estimation of the particle size distribution (PSD) from MDLS measurements requires accurate determination of the weighting coefficients and an appropriate inversion method. We propose the Recursion Nonnegative Phillips-Twomey (RNNPT) algorithm, which is insensitive to noise in the correlation function data, for PSD reconstruction from MDLS measurements. The procedure includes two main steps: 1) calculation of the weighting coefficients by the recursion method, and 2) PSD estimation through the RNNPT algorithm. Suitable regularization parameters for the algorithm were obtained with the MR-L-curve, since the overall computational cost of this method is considerably less than that of the L-curve for large problems. Furthermore, the convergence behavior of the MR-L-curve method is in general superior to that of the L-curve method, and its error is monotonically decreasing. The method was first evaluated on simulated unimodal and multimodal lognormal PSDs; for comparison, reconstruction results obtained with a classical regularization method were included. Then, to further study the stability and sensitivity of the proposed method, all examples were analyzed using correlation function data with different levels of noise. The simulation results proved that the RNNPT method yields more accurate PSD determinations from MDLS than the classical regularization method for both unimodal and multimodal PSDs.

  8. Research on an Improved Medical Image Enhancement Algorithm Based on P-M Model.

    PubMed

    Dong, Beibei; Yang, Jingjing; Hao, Shangfu; Zhang, Xiao

    2015-01-01

    Image enhancement can improve the detail of an image and thereby aid its identification. At present, image enhancement is widely used for medical images, where it can assist doctors' diagnoses. The image enhancement algorithm based on the P-M model (IEABPM) is one of the most common image enhancement algorithms; however, it may cause the loss of texture details and other features. To solve these problems, this paper proposes an improved image enhancement algorithm based on the P-M model (IIEABPM). Simulations demonstrate that IIEABPM can effectively solve the problems of IEABPM and improve image clarity, contrast, and brightness. PMID:26628929
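    For context, a minimal sketch of classic Perona-Malik (P-M) diffusion, the model both algorithms build on: smoothing is suppressed across strong edges by an edge-stopping conduction coefficient. The specific modifications introduced by IIEABPM are not reproduced here.

```python
# Classic Perona-Malik anisotropic diffusion on a 2-D image.
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (periodic borders via np.roll, for brevity)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```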

  9. Improving synthetical stellar libraries using the cross-entropy algorithm

    NASA Astrophysics Data System (ADS)

    Martins, L. P.; Vitoriano, R.; Coelho, P.; Caproni, A.

    Stellar libraries are fundamental tools for the study of stellar populations, since they are one of the fundamental ingredients of stellar population synthesis codes. We have implemented an innovative method to calibrate the atomic line lists used to generate the synthetic spectra of theoretical libraries, one that is much more robust and efficient than the methods used so far. Here we present the adaptation and validation of this method, the cross-entropy algorithm, for the calibration of atomic line lists. We show that the method is extremely efficient for the calibration of atomic line lists when the transition contributes at least 10^{-4} of the continuum flux.

  10. An Improved Artificial Bee Colony Algorithm for Solving Hybrid Flexible Flowshop With Dynamic Operation Skipping.

    PubMed

    Li, Jun-Qing; Pan, Quan-Ke; Duan, Pei-Yong

    2016-06-01

    In this paper, we propose an improved discrete artificial bee colony (DABC) algorithm to solve the hybrid flexible flowshop scheduling problem with dynamic operation skipping features in molten iron systems. First, each solution is represented by a two-vector-based solution representation, and a dynamic encoding mechanism is developed. Second, a flexible decoding strategy is designed. Next, a right-shift strategy considering the problem characteristics is developed, which clearly improves the solution quality. In addition, several skipping and scheduling neighborhood structures are presented to balance the exploration and exploitation abilities. Finally, an enhanced local search is embedded in the proposed algorithm to further improve the exploitation ability. The proposed algorithm is tested on sets of instances generated from realistic production. Through comprehensive computational comparisons and statistical analysis, the highly effective performance of the proposed DABC algorithm is demonstrated against several existing algorithms, in both solution quality and efficiency. PMID:26126292

  11. Inverse transient radiation analysis in one-dimensional participating slab using improved Ant Colony Optimization algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Qi, H.; Ren, Y. T.; Sun, S. C.; Ruan, L. M.

    2014-01-01

    As a heuristic intelligent optimization algorithm, the Ant Colony Optimization (ACO) algorithm was applied to the inverse problem of one-dimensional (1-D) transient radiative transfer in the present study. To illustrate the performance of this algorithm, the optical thickness and scattering albedo of a 1-D participating slab medium were retrieved simultaneously. The radiative reflectances simulated by the Monte Carlo Method (MCM) and the Finite Volume Method (FVM) were used as the measured and estimated values for the inverse analysis, respectively. To improve the accuracy and efficiency of the Basic Ant Colony Optimization (BACO) algorithm, three improved ACO algorithms were developed: the Region Ant Colony Optimization (RACO), Stochastic Ant Colony Optimization (SACO), and Homogeneous Ant Colony Optimization (HACO) algorithms. With the HACO algorithm, the radiative parameters could be estimated accurately, even with noisy data. In conclusion, the HACO algorithm is demonstrated to be effective and robust, with the potential to be applied to various inverse radiation problems.

  12. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of the mobile Internet has brought WiFi indoor positioning under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm based on weighted fusion. The proposed algorithm builds on traditional location fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition stage selects optimal parameters to complete the signal acquisition and forms a fingerprint database through error classification and handling. To further improve positioning accuracy, the online positioning stage first uses a pre-match method to select candidate fingerprints and shorten the positioning time. It then uses an improved Euclidean distance and an improved joint probability to calculate two intermediate results, and computes the final result from these two by weighted fusion. The improved Euclidean distance introduces the standard deviation of the WiFi signal strength to smooth signal fluctuations, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. In experiments comparing the proposed algorithm with the Euclidean-distance-based WKNN algorithm and the joint probability algorithm, the results indicate that the proposed algorithm achieves higher positioning accuracy. PMID:26334278
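    A minimal sketch of the weighted-fusion idea: one estimate from a deviation-normalized ("improved Euclidean") WKNN and one from a log-likelihood fingerprint match, combined with a fixed weight. The constants and exact normalizations here are illustrative assumptions, not the paper's formulas.

```python
# Fuse a WKNN estimate and a probabilistic estimate of position.
import numpy as np

def fused_position(rss, fp_mean, fp_std, fp_pos, k=3, w=0.5):
    """rss: observed RSS vector (n_aps,); fp_mean, fp_std: fingerprint
    statistics (n_points x n_aps); fp_pos: fingerprint coordinates (n_points x 2)."""
    z = (rss - fp_mean) / fp_std                 # std-normalized residuals
    # Estimate 1: WKNN on the deviation-normalized Euclidean distance.
    d = np.sqrt((z ** 2).sum(axis=1)) + 1e-9
    nn = np.argsort(d)[:k]
    wknn = (fp_pos[nn] / d[nn, None]).sum(axis=0) / (1.0 / d[nn]).sum()
    # Estimate 2: best log-likelihood under per-AP Gaussian fingerprints.
    loglik = (-0.5 * z ** 2 - np.log(fp_std)).sum(axis=1)
    prob = fp_pos[np.argmax(loglik)]
    return w * wknn + (1.0 - w) * prob           # weighted fusion of the two
```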

  14. Improving ecological forecasts of copepod community dynamics using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Record, N. R.; Pershing, A. J.; Runge, J. A.; Mayo, C. A.; Monger, B. C.; Chen, C.

    2010-08-01

    The validity of computational models is always in doubt. Skill assessment and validation are typically done by demonstrating that output is in agreement with empirical data. We test this approach by using a genetic algorithm to parameterize a coupled biological-physical model of copepod population dynamics. The model is applied to Cape Cod Bay, Massachusetts, and is designed for operational forecasting. By running twin experiments on terms in this dynamical system, we demonstrate that a good fit to data does not necessarily imply a valid parameterization. An ensemble of good fits, however, provides information on the accuracy of parameter values, on the functional importance of parameters, and on the ability to forecast accurately with an incorrect set of parameters. Additionally, we demonstrate that the technique is a useful tool for operational forecasting.

  15. Algorithms to Improve the Prediction of Postprandial Insulinaemia in Response to Common Foods.

    PubMed

    Bell, Kirstine J; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C

    2016-01-01

    Dietary patterns that induce excessive insulin secretion may contribute to worsening insulin resistance and beta-cell dysfunction. Our aim was to generate mathematical algorithms to improve the prediction of postprandial glycaemia and insulinaemia for foods of known nutrient composition, glycemic index (GI) and glycemic load (GL). We used an expanded database of food insulin index (FII) values generated by testing 1000 kJ portions of 147 common foods relative to a reference food in lean, young, healthy volunteers. Simple and multiple linear regression analyses were applied to validate previously generated equations for predicting insulinaemia, and develop improved predictive models. Large differences in insulinaemic responses within and between food groups were evident. GL, GI and available carbohydrate content were the strongest predictors of the FII, explaining 55%, 51% and 47% of variation respectively. Fat, protein and sugar were significant but relatively weak predictors, accounting for only 31%, 7% and 13% of the variation respectively. Nutritional composition alone explained only 50% of variability. The best algorithm included a measure of glycemic response, sugar and protein content and explained 78% of variation. Knowledge of the GI or glycaemic response to 1000 kJ portions together with nutrient composition therefore provides a good approximation for ranking of foods according to their "insulin demand". PMID:27070641

  17. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  18. FRESCO+: an improved O2 A-band cloud retrieval algorithm for tropospheric trace gas retrievals

    NASA Astrophysics Data System (ADS)

    Wang, P.; Stammes, P.; van der A, R.; Pinardi, G.; van Roozendael, M.

    2008-05-01

    The FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A-band) algorithm has been used to retrieve cloud information from measurements of the O2 A-band around 760 nm by GOME, SCIAMACHY and GOME-2. The cloud parameters retrieved by FRESCO are the effective cloud fraction and cloud pressure, which are used for cloud correction in the retrieval of trace gases like O3 and NO2. To improve the cloud pressure retrieval for partly cloudy scenes, single Rayleigh scattering has been included in an improved version of the algorithm, called FRESCO+. We compared FRESCO+ and FRESCO effective cloud fractions and cloud pressures using simulated spectra and one month of GOME measured spectra. As expected, FRESCO+ gives more reliable cloud pressures over partly cloudy pixels. Simulations and comparisons with ground-based radar/lidar measurements of clouds show that the FRESCO+ cloud pressure is about the optical midlevel of the cloud. Globally averaged, the FRESCO+ cloud pressure is about 50 hPa higher than the FRESCO cloud pressure, while the FRESCO+ effective cloud fraction is about 0.01 larger. The effect of FRESCO+ cloud parameters on O3 and NO2 vertical column densities (VCD) is studied using SCIAMACHY data and ground-based DOAS measurements. We find that the FRESCO+ algorithm has a significant effect on tropospheric NO2 retrievals but a minor effect on total O3 retrievals. The retrieved SCIAMACHY tropospheric NO2 VCDs using FRESCO+ cloud parameters (v1.1) are lower than the tropospheric NO2 VCDs which used FRESCO cloud parameters (v1.04), in particular over heavily polluted areas with low clouds. The difference between SCIAMACHY tropospheric NO2 VCDs v1.1 and ground-based MAXDOAS measurements performed in Cabauw, The Netherlands, during the DANDELIONS campaign is about -2.12×10¹⁴ molec cm⁻².

  19. FRESCO+: an improved O2 A-band cloud retrieval algorithm for tropospheric trace gas retrievals

    NASA Astrophysics Data System (ADS)

    Wang, P.; Stammes, P.; van der A, R.; Pinardi, G.; van Roozendael, M.

    2008-11-01

    The FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A-band) algorithm has been used to retrieve cloud information from measurements of the O2 A-band around 760 nm by GOME, SCIAMACHY and GOME-2. The cloud parameters retrieved by FRESCO are the effective cloud fraction and cloud pressure, which are used for cloud correction in the retrieval of trace gases like O3 and NO2. To improve the cloud pressure retrieval for partly cloudy scenes, single Rayleigh scattering has been included in an improved version of the algorithm, called FRESCO+. We compared FRESCO+ and FRESCO effective cloud fractions and cloud pressures using simulated spectra and one month of GOME measured spectra. As expected, FRESCO+ gives more reliable cloud pressures over partly cloudy pixels. Simulations and comparisons with ground-based radar/lidar measurements of clouds show that the FRESCO+ cloud pressure is about the optical midlevel of the cloud. Globally averaged, the FRESCO+ cloud pressure is about 50 hPa higher than the FRESCO cloud pressure, while the FRESCO+ effective cloud fraction is about 0.01 larger. The effect of FRESCO+ cloud parameters on O3 and NO2 vertical column density (VCD) retrievals is studied using SCIAMACHY data and ground-based DOAS measurements. We find that the FRESCO+ algorithm has a significant effect on tropospheric NO2 retrievals but a minor effect on total O3 retrievals. The retrieved SCIAMACHY tropospheric NO2 VCDs using FRESCO+ cloud parameters (v1.1) are lower than the tropospheric NO2 VCDs which used FRESCO cloud parameters (v1.04), in particular over heavily polluted areas with low clouds. The difference between SCIAMACHY tropospheric NO2 VCDs v1.1 and ground-based MAXDOAS measurements performed in Cabauw, The Netherlands, during the DANDELIONS campaign is about -2.12×10¹⁴ molec cm⁻².

  20. Improved Infomax algorithm of independent component analysis applied to fMRI data

    NASA Astrophysics Data System (ADS)

    Wu, Xia; Yao, Li; Long, Zhi-ying; Wu, Hui

    2004-05-01

    Independent component analysis (ICA) is a technique that attempts to separate data into maximally independent groups. Several ICA algorithms have been proposed in the neural network literature. Among the algorithms applied to fMRI data, the Infomax algorithm has been the most widely used so far. The Infomax algorithm maximizes the information transferred in a network of nonlinear units. The nonlinear transfer function is able to pick up higher-order moments of the input distributions and reduce the redundancy between units in the output and input. However, the transfer function in the Infomax algorithm is a fixed logistic function. In this paper, an improved Infomax algorithm is proposed. In order to make the transfer function match the input data better, we add an adjustable parameter to the logistic function and estimate the parameter from the input fMRI data in two ways: (1) maximizing the correlation coefficient between the transfer function and the cumulative distribution function (c.d.f.); (2) minimizing the entropy distance, based on the KL divergence, between the transfer function and the c.d.f. We apply the improved Infomax algorithm to the processing of fMRI data, and the results show that the improved algorithm is more effective in terms of fMRI data separation.
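
    A small sketch of estimation method (1) above, fitting the slope of a parameterized logistic transfer function to the empirical c.d.f. of the data; the grid search and all names are our simplification:

      import numpy as np

      def logistic(u, a):
          # parameterized transfer function g_a(u) = 1 / (1 + exp(-a * u))
          return 1.0 / (1.0 + np.exp(-a * u))

      def fit_transfer_parameter(x, grid=None):
          """Pick the slope a whose logistic best matches the empirical c.d.f.
          of the data x (method 1 in the record)."""
          grid = np.linspace(0.2, 5.0, 49) if grid is None else grid
          xs = np.sort(np.asarray(x, dtype=float))
          ecdf = (np.arange(len(xs)) + 0.5) / len(xs)       # empirical c.d.f.
          corrs = [np.corrcoef(logistic(xs, a), ecdf)[0, 1] for a in grid]
          return grid[int(np.argmax(corrs))]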

  1. An improved filter-u least mean square vibration control algorithm for aircraft framework.

    PubMed

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a hot research topic, and the filtered-u least mean square (FULMS) algorithm is one of the key methods. For practical reasons and because of technical limitations, however, extracting the vibration reference signal is always a difficult problem for the FULMS algorithm. To solve this problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed from the controller structure and the data produced during the algorithm's operation, using a vibration response residual signal extracted directly from the vibrating structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is more practical and achieves good vibration suppression performance. PMID:25273765
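
    The key idea, building the reference from the measured residual rather than a separate sensor, can be sketched as a toy filtered-x/filtered-u LMS loop; the secondary-path handling and every name below are our simplification, not the paper's controller:

      import numpy as np

      def fxlms(residual, sec_path, n_taps=32, mu=1e-3):
          """Toy filtered-x/filtered-u LMS loop. residual is the measured
          vibration response residual; sec_path is a short FIR model of the
          secondary path, len(sec_path) <= n_taps. Returns the control signal."""
          sec_path = np.asarray(sec_path, dtype=float)
          w = np.zeros(n_taps)      # adaptive controller weights
          x = np.zeros(n_taps)      # history of the constructed reference
          fx = np.zeros(n_taps)     # reference filtered through the secondary path
          y = []
          for e in residual:
              x = np.roll(x, 1)
              x[0] = e              # reference built from the residual itself
              fx = np.roll(fx, 1)
              fx[0] = sec_path @ x[:len(sec_path)]
              w -= mu * e * fx      # LMS weight update with the filtered reference
              y.append(w @ x)       # control output sample
          return np.array(y)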

  2. [An improved wavelet threshold algorithm for ECG denoising].

    PubMed

    Liu, Xiuling; Qiao, Lei; Yang, Jianli; Dong, Bin; Wang, Hongrui

    2014-06-01

    Because of signal characteristics and environmental factors, electrocardiogram (ECG) signals are usually contaminated by noise in the course of signal acquisition, so eliminating this noise is crucial for intelligent ECG analysis. On the basis of the wavelet transform, the threshold parameters were improved and a more appropriate threshold expression was proposed. The discrete wavelet coefficients were processed using the improved threshold parameters, the denoised signal was then recovered through the inverse discrete wavelet transform, and more of the original signal content could thus be preserved. The MIT-BIH arrhythmia database was used to validate the method. Simulation results showed that the improved method achieves a better denoising effect than the traditional ones. PMID:25219225
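
    The record does not state the improved threshold expression, so the sketch below substitutes one common compromise between hard and soft thresholding together with the universal threshold; the wavelet choice, level and alpha are our assumptions:

      import numpy as np
      import pywt

      def improved_threshold(c, thr, alpha=0.5):
          """Compromise between hard and soft thresholding: coefficients above
          the threshold are shrunk by only alpha * thr (one common 'improved'
          form; the paper's own expression is not given in the abstract)."""
          out = np.zeros_like(c)
          big = np.abs(c) > thr
          out[big] = np.sign(c[big]) * (np.abs(c[big]) - alpha * thr)
          return out

      def denoise_ecg(sig, wavelet="db4", level=4):
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
          thr = sigma * np.sqrt(2.0 * np.log(len(sig)))      # universal threshold
          coeffs = [coeffs[0]] + [improved_threshold(c, thr) for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)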

  3. Research on WNN Modeling for Gold Price Forecasting Based on Improved Artificial Bee Colony Algorithm

    PubMed Central

    Li, Bai

    2014-01-01

    Gold price forecasting has been a hot issue in economics recently. In this work, wavelet neural network (WNN) combined with a novel artificial bee colony (ABC) algorithm is proposed for this gold price forecasting issue. In this improved algorithm, the conventional roulette selection strategy is discarded. Besides, the convergence statuses in a previous cycle of iteration are fully utilized as feedback messages to manipulate the searching intensity in a subsequent cycle. Experimental results confirm that this new algorithm converges faster than the conventional ABC when tested on some classical benchmark functions and is effective to improve modeling capacity of WNN regarding the gold price forecasting scheme. PMID:24744773

  5. An improved space-based algorithm for recognizing vehicle models from the side view

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Ding, Youdong; Zhang, Li; Li, Rong; Zhu, Jiang; Xie, Zhifeng

    2015-12-01

    Vehicle model matching from the side view is a problem that meets the practical needs of actual users but has received little attention from researchers. We propose an improved feature-space-based algorithm for this problem. The algorithm combines the advantages of several classic algorithms, effectively fusing global and local features, eliminating data redundancy, and improving data separability. Classification is finally completed by a quick and efficient KNN classifier. Real-scene test results show that the proposed method is robust, accurate, insensitive to external factors, adaptable to large angle deviations, and suitable for practical applications.

  6. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  7. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    NASA Astrophysics Data System (ADS)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
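
    As a hedged illustration of the regression setup only (the wave-velocity and depth inputs and the RMR targets below are made-up numbers, not the case-study data), an RBF-kernel SVR in scikit-learn:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      # Illustrative inputs only: each row is a set of measured quantities for
      # one location (e.g. P-wave velocity in m/s, depth in m); y holds RMR
      # values obtained from sampling and site investigation.
      X = np.array([[3100.0, 12.0], [3550.0, 25.0], [2800.0, 8.0], [4100.0, 40.0]])
      y = np.array([55.0, 63.0, 48.0, 71.0])

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
      model.fit(X, y)
      print(model.predict([[3300.0, 18.0]]))   # predicted RMR for a new location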

  8. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  9. Research on aviation unsafe incidents classification with improved TF-IDF algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yanhua; Zhang, Zhiyuan; Huo, Weigang

    2016-05-01

    The text content of Aviation Safety Confidential Reports contains a large amount of valuable information. The term frequency-inverse document frequency (TF-IDF) algorithm is commonly used in text analysis, but it does not take into account the sequential relationships of the words in the text or their role in semantic expression. Working from the seven category labels of civil aviation unsafe incidents, and aiming to solve these problems of the TF-IDF algorithm, this paper improves TF-IDF based on a co-occurrence network and establishes feature-word extraction and sequential word relations for classified incidents. An aviation domain lexicon was used to improve the classification accuracy rate. A feature-word network model was designed for multi-document classification of unsafe incidents and was used in the experiment. Finally, the classification accuracy of the improved algorithm was verified experimentally.
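
    Since the record does not spell out the co-occurrence weighting, the sketch below combines plain TF-IDF with one plausible co-occurrence-degree boost; the boost formula, window size and names are ours:

      import math
      from collections import Counter

      def tfidf_with_cooc(docs, window=5):
          """docs: list of tokenized reports. Returns one {term: weight} dict per
          document. Each term's TF-IDF is boosted by the log of its co-occurrence
          degree (distinct neighbors within the window), our stand-in for the
          paper's co-occurrence-network weighting."""
          n = len(docs)
          df = Counter()
          neigh = {}
          for doc in docs:
              df.update(set(doc))
              for i, w in enumerate(doc):
                  neigh.setdefault(w, set()).update(doc[max(0, i - window):i + window + 1])
          out = []
          for doc in docs:
              tf = Counter(doc)
              out.append({w: (f / len(doc))
                             * (math.log(n / (1 + df[w])) + 1.0)   # smoothed IDF
                             * (1.0 + math.log(len(neigh[w])))     # co-occurrence boost
                          for w, f in tf.items()})
          return out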

  10. Improved wavelet packet classification algorithm for vibrational intrusions in distributed fiber-optic monitoring systems

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo

    2015-05-01

    An improved classification algorithm based on multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, higher than that of other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.

  11. Ballistic target tracking algorithm based on improved particle filtering

    NASA Astrophysics Data System (ADS)

    Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang

    2015-10-01

    Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track the ballistic re-entry target in a nonlinear and non-Gaussian complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF performs well in estimating the states and parameters of nonlinear and non-Gaussian systems. Monte Carlo simulation results show that this method can effectively alleviate the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, so that better particles take part in the estimation. Meanwhile, the CMPF improves the state estimation precision and convergence speed compared with the EKF, the UKF and the ordinary particle filter.

  12. How tracer objects can improve competitive learning algorithms in astronomy

    NASA Astrophysics Data System (ADS)

    Hernandez-Pajares, M.; Floris, J.; Murtagh, F.

    The main objective of this paper is to discuss how the use of tracer objects in competitive learning can improve results in stellar classification. To do this, we work with a Kohonen network applied to a reduced sample of the Hipparcos Input Catalogue, which contains missing values. The use of synthetic stars as tracer objects allows us to determine the discrimination quality and to find the best final values of the cluster centroids, or neuron weights.

  13. Using the Significant Learning Taxonomy and Active Learning to Improve Accounting Education

    ERIC Educational Resources Information Center

    Killian, Larita J.; Brandon, Christopher D.

    2009-01-01

    Like other members of the academy, accounting professors are challenged to improve student learning. We must help students move beyond the "bean counter" role and develop higher-level skills such as analysis, synthesis, and problem-solving. The Significant Learning Taxonomy was used as a template to improve learning in an introductory accounting…

  14. An Improved Fuzzy c-Means Clustering Algorithm Based on Shadowed Sets and PSO

    PubMed Central

    Zhang, Jian; Shen, Ling

    2014-01-01

    To organize the wide variety of data sets automatically and acquire accurate classification, this paper presents a modified fuzzy c-means algorithm (SP-FCM) based on particle swarm optimization (PSO) and shadowed sets to perform feature clustering. SP-FCM introduces the global search property of PSO to deal with the problem of premature convergence of conventional fuzzy clustering, utilizes vagueness balance property of shadowed sets to handle overlapping among clusters, and models uncertainty in class boundaries. This new method uses Xie-Beni index as cluster validity and automatically finds the optimal cluster number within a specific range with cluster partitions that provide compact and well-separated clusters. Experiments show that the proposed approach significantly improves the clustering effect. PMID:25477953
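
    For orientation, the plain fuzzy c-means core that SP-FCM modifies; the PSO search, the shadowed-set step and the Xie-Beni validity index from the record are deliberately omitted, and the names are ours:

      import numpy as np

      def fcm(X, c, m=2.0, n_iter=100, seed=0):
          """Plain fuzzy c-means on data X (n_samples x n_features) with c
          clusters and fuzzifier m. Returns memberships U and centroids V."""
          rng = np.random.default_rng(seed)
          U = rng.random((c, len(X)))
          U /= U.sum(axis=0)                    # memberships: columns sum to 1
          for _ in range(n_iter):
              Um = U ** m
              V = (Um @ X) / Um.sum(axis=1, keepdims=True)        # centroids
              d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-9
              inv = d ** (-2.0 / (m - 1.0))
              U = inv / inv.sum(axis=0)         # standard FCM membership update
          return U, V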

  15. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to be suitable for such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching the rotation angle in a single gyrator transform. Since gyrator transform is the foundation of image encryption in gyrator transform domains, the research on the method of searching the rotation angle in a single gyrator transform is useful for further study on the security of such image encryption algorithms. PMID:26251910
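
    A minimal sketch of a PSO search over a single scalar rotation angle; the fitness function (e.g. decryption error against a known plaintext) and the plain velocity update are our assumptions, not the paper's improved variant:

      import random

      def pso_angle(fitness, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
          """Minimize fitness(angle) over [lo, hi] with a basic PSO; lower
          fitness means the trial angle decrypts the image better."""
          x = [random.uniform(lo, hi) for _ in range(n)]
          v = [0.0] * n
          pbest, pval = x[:], [fitness(a) for a in x]
          g = min(range(n), key=lambda i: pval[i])
          gbest, gval = pbest[g], pval[g]
          for _ in range(iters):
              for i in range(n):
                  # inertia plus cognitive and social attraction terms
                  v[i] = (w * v[i] + c1 * random.random() * (pbest[i] - x[i])
                          + c2 * random.random() * (gbest - x[i]))
                  x[i] = min(hi, max(lo, x[i] + v[i]))
                  f = fitness(x[i])
                  if f < pval[i]:
                      pbest[i], pval[i] = x[i], f
                      if f < gval:
                          gbest, gval = x[i], f
          return gbest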

  17. An improved recommendation algorithm via weakening indirect linkage effect

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Qiu, Tian; Shen, Xiao-Quan

    2015-07-01

    We propose an indirect-link-weakened mass diffusion method (IMD) that accounts for indirect linkages and the source-object heterogeneity effect in the mass diffusion (MD) recommendation method. Experimental results on the MovieLens, Netflix, and RYM datasets show that the IMD method greatly improves both recommendation accuracy and diversity, compared with a heterogeneity-weakened MD method (HMD), which considers only the source-object heterogeneity. Moreover, the recommendation accuracy for cold objects is also improved more by the IMD method than by the HMD method. This suggests that eliminating the redundancy induced by indirect linkages can have a prominent effect on the recommendation efficiency of the MD method. Project supported by the National Natural Science Foundation of China (Grant No. 11175079) and the Young Scientist Training Project of Jiangxi Province, China (Grant No. 20133BCB23017).
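
    For reference, the plain two-step mass diffusion kernel that the IMD method modifies; the indirect-link weakening itself is not reproduced here:

      import numpy as np

      def md_scores(A, u):
          """Plain two-step mass diffusion on a user-object adjacency matrix A
          (n_users x n_objects, entries 0/1). Returns scores for user u; the
          IMD paper additionally down-weights indirect linkages."""
          A = A.astype(float)
          k_obj = np.maximum(A.sum(axis=0), 1.0)    # object degrees
          k_usr = np.maximum(A.sum(axis=1), 1.0)    # user degrees
          f0 = A[u]                                 # unit resource on collected objects
          res_users = A @ (f0 / k_obj)              # step 1: objects -> users
          scores = A.T @ (res_users / k_usr)        # step 2: users -> objects
          scores[A[u] > 0] = -np.inf                # skip already-collected objects
          return scores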

  18. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very important for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. Results of the proposed method are compared with the actual position of the simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to interactive GrowCut with manual interference, while providing fully automatic segmentation.

  19. Visual Tracking Based on an Improved Online Multiple Instance Learning Algorithm.

    PubMed

    Wang, Li Jia; Zhang, Hua

    2016-01-01

    An improved online multiple instance learning (IMIL) algorithm for visual tracking is proposed. In the IMIL algorithm, each instance's contribution to the bag probability is weighted by its own probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers. In the feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes. PMID:26843855

  1. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques, but fuzzy c-means (FCM)-based algorithms have proved more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement and in handling data corrupted by large amounts of noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method. PMID:20703716

  2. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    When a genetic algorithm is used to solve the winner determination problem (WDP) with many bids and items under different distributions, the large search space, complex constraints, and the ease of producing infeasible solutions affect the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators (preprocessing, bid insertion and exchange recombination) and a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in population size and computation. Problems that the traditional branch-and-bound algorithm can hardly solve can be solved by the improved MKGA with better results.

  3. Improved Algorithm for Analysis of DNA Sequences Using Multiresolution Transformation

    PubMed Central

    Inbamalar, T. M.; Sivakumar, R.

    2015-01-01

    Bioinformatics and genomic signal processing use computational techniques to solve various biological problems. They aim to study the information carried by genetic materials such as deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and proteins. Fast and precise identification of the protein coding regions in a DNA sequence is one of the most important tasks in analysis. Existing digital signal processing (DSP) methods provide less accurate, computationally complex solutions with greater background noise. Hence, improvements in accuracy and computational complexity and a reduction in background noise are essential in the identification of protein coding regions in DNA sequences. In this paper, a new DSP-based method is introduced to detect the protein coding regions in DNA sequences. Here, the DNA sequences are converted into numeric sequences using the electron ion interaction potential (EIIP) representation. Then a discrete wavelet transformation is taken. The absolute value of the energy is found, followed by a proper threshold. Tests are conducted using the databases available on the National Center for Biotechnology Information (NCBI) site. A comparative analysis confirms the efficiency of the proposed system. PMID:26000337
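
    A compact sketch of the pipeline described above; the EIIP values are the standard published constants, while the wavelet, decomposition level and choice of detail band are our assumptions:

      import numpy as np
      import pywt

      # Standard EIIP values for the four nucleotides
      EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

      def coding_region_energy(seq, wavelet="db4", level=3):
          """Map a DNA string to its EIIP numeric sequence, apply a discrete
          wavelet transform, and return the absolute energy of the coarsest
          detail band; thresholding this energy flags candidate coding regions."""
          x = np.array([EIIP[b] for b in seq.upper() if b in EIIP])
          coeffs = pywt.wavedec(x, wavelet, level=level)
          return np.abs(coeffs[1]) ** 2      # energy, to be thresholded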

  4. Improved inversion algorithms for near-surface characterization

    NASA Astrophysics Data System (ADS)

    Vaziri Astaneh, Ali; Guddati, Murthy N.

    2016-08-01

    Near-surface geophysical imaging is often performed by generating surface waves, and estimating the subsurface properties through inversion, that is, iteratively matching experimentally observed dispersion curves with predicted curves from a layered half-space model of the subsurface. Key to the effectiveness of inversion is the efficiency and accuracy of computing the dispersion curves and their derivatives. This paper presents improved methodologies for both dispersion curve and derivative computation. First, it is shown that the dispersion curves can be computed more efficiently by combining an unconventional complex-length finite element method (CFEM) to model the finite depth layers, with perfectly matched discrete layers (PMDL) to model the unbounded half-space. Second, based on analytical derivatives for theoretical dispersion curves, an approximate derivative is derived for the so-called effective dispersion curve for realistic geophysical surface response data. The new derivative computation has a smoothing effect on the computation of derivatives, in comparison with traditional finite difference (FD) approach, and results in faster convergence. In addition, while the computational cost of FD differentiation is proportional to the number of model parameters, the new differentiation formula has a computational cost that is almost independent of the number of model parameters. At the end, as confirmed by synthetic and real-life imaging examples, the combination of CFEM + PMDL for dispersion calculation and the new differentiation formula results in more accurate estimates of the subsurface characteristics than the traditional methods, at a small fraction of computational effort.

  5. An improved preprocessing algorithm for haplotype inference by pure parsimony.

    PubMed

    Choi, Mun-Ho; Kang, Seung-Ho; Lim, Hyeong-Seok

    2014-08-01

    The identification of haplotypes, which encode SNPs in a single chromosome, makes it possible to perform a haplotype-based association test with disease. Given a set of genotypes from a population, the process of recovering the haplotypes, which explain the genotypes, is called haplotype inference (HI). We propose an improved preprocessing method for solving the haplotype inference by pure parsimony (HIPP), which excludes a large amount of redundant haplotypes by detecting some groups of haplotypes that are dispensable for optimal solutions. The method uses only inclusion relations between groups of haplotypes but dramatically reduces the number of candidate haplotypes; therefore, it causes the computational time and memory reduction of real HIPP solvers. The proposed method can be easily coupled with a wide range of optimization methods which consider a set of candidate haplotypes explicitly. For the simulated and well-known benchmark datasets, the experimental results show that our method coupled with a classical exact HIPP solver run much faster than the state-of-the-art solver and can solve a large number of instances that were so far unaffordable in a reasonable time. PMID:25152045

  7. A strictly improving Phase 1 algorithm using least-squares subproblems

    SciTech Connect

    Leichner, S.A.; Dantzig, G.B.; Davis, J.W.

    1992-04-01

    Although the simplex method's performance in solving linear programming problems is usually quite good, it does not guarantee strict improvement at each iteration on degenerate problems. Instead of trying to recognize and avoid degenerate steps in the simplex method, we have developed a new Phase I algorithm that is completely impervious to degeneracy, with strict improvement attained at each iteration. It is also noted that the new Phase I algorithm is closely related to a number of existing algorithms. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase I algorithm were almost 3.5 times faster than the simplex method; on some problems, it was over 10 times faster.

  9. Improvements in algorithms for phenotype inference: the NAT2 example.

    PubMed

    Selinski, Silvia; Blaszkewicz, Meinolf; Ickstadt, Katja; Hengstler, Jan G; Golka, Klaus

    2014-02-01

    Numerous studies have analyzed the impact of N-acetyltransferase 2 (NAT2) polymorphisms on drug efficacy, side effects as well as cancer risk. Here, we present the state of the art of deriving haplotypes from polymorphisms and discuss the available software. PHASE v2.1 is currently considered a gold standard for NAT2 haplotype assignment. In vitro studies have shown that some slow acetylation genotypes confer reduced protein stability. This has been observed particularly for G191A, T341C and G590A. Substantial ethnic variations of the acetylation status have been described. Probably, upcoming agriculture and the resulting change in diet caused a selection pressure for slow acetylation. In recent years much research has been done to reduce the complexity of NAT2 genotyping. Deriving the haplotype from seven SNPs is still considered a gold standard. However, meanwhile several studies have shown that a two-SNP combination, C282T and T341C, results in a similarly good distinction in Caucasians. However, attempts to further reduce complexity to only one 'tagging SNP' (rs1495741) may lead to wrong predictions where phenotypically slow acetylators were genotyped as intermediate or rapid. Numerous studies have shown that slow NAT2 haplotypes are associated with increased urinary bladder cancer risk and increased risk of anti-tuberculosis drug-induced hepatotoxicity. A drawback of the current practice of solely discriminating slow, intermediate and rapid genotypes for phenotype inference is limited resolution of differences between slow acetylators. Future developments to differentiate between slow and ultra-slow genotypes may further improve individualized drug dosing and epidemiological studies of cancer risk. PMID:24524665

  10. Improvement of wavelet threshold filtered back-projection image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-11-01

    Image reconstruction techniques have been applied in many fields, including medical imaging such as X-ray computed tomography (X-CT), positron emission tomography (PET) and nuclear magnetic resonance imaging (MRI), but the reconstruction results are still unsatisfactory because the original projection data are inevitably polluted by noise during image reconstruction. Although traditional filters, e.g. the Shepp-Logan (SL) and Ram-Lak (RL) filters, can remove some noise, the Gibbs oscillation phenomenon is generated, and artifacts caused by back-projection are not greatly reduced. Wavelet threshold denoising can overcome the interference of noise with image reconstruction. Since the traditional soft and hard threshold functions have some inherent defects, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that this improved algorithm greatly eliminates the discontinuity and large distortion of the traditional threshold functions as well as the Gibbs oscillation. Finally, the effectiveness of this improved algorithm was verified by comparing two evaluation criteria, i.e. mean square error (MSE) and peak signal-to-noise ratio (PSNR), among the four algorithms, and the optimal dual threshold values of the improved wavelet threshold function were obtained.

  11. Mutual information image registration based on improved bee evolutionary genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Gang; Tu, Jingzhi

    2009-07-01

    In recent years, mutual information has been regarded as one of the more efficient similarity metrics in image registration. Given the features of mutual information image registration, the Bee Evolution Genetic Algorithm (BEGA), which imitates swarm mating, is chosen for parameter optimization. In addition, we adaptively set the initial parameters to improve the BEGA. Experimental results show the good precision of the algorithm.

  12. An improved service-aware multipath algorithm for wireless multimedia sensor networks

    NASA Astrophysics Data System (ADS)

    Ding, Yongjie; Tang, Ruichun; Xu, Huimin; Liu, Yafang

    2013-03-01

    We study the multipath transmission problems of different services in Wireless Multimedia Sensor Networks (WMSNs). To utilize network resources more effectively, a multipath mechanism and service awareness are used to improve the performance of OLSR (Optimized Link State Routing), and an SM-OLSR (Service-aware Multipath OLSR) algorithm is proposed. An efficiency model is introduced, and multiple paths are then built according to routing ID and energy efficiency. Compared with other routing algorithms, simulation results show that the algorithm can provide service support for different kinds of data.

  13. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem

    PubMed Central

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial feasible solution very quickly, and the GA is used to improve this initial solution. Experimental software was developed, and a large number of computational experiments on Solomon's benchmark have been studied. The experimental results demonstrate the feasibility and effectiveness of the HOA. PMID:26167171

  14. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm

    PubMed Central

    Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu

    2016-01-01

    Among non-small cell lung cancer (NSCLC), adenocarcinoma (AC), and squamous cell carcinoma (SCC) are two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to a NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is a feature selection algorithm, indeed. Additionally, we applied SAMGSR to AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. Few overlaps between these two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945

  15. Improved Fault Classification in Series Compensated Transmission Line: Comparative Evaluation of Chebyshev Neural Network Training Algorithms.

    PubMed

    Vyas, Bhargav Y; Das, Biswarup; Maheshwari, Rudra Prakash

    2016-08-01

    This paper presents the Chebyshev neural network (ChNN) as an improved artificial intelligence technique for power system protection studies and examines the performances of two ChNN learning algorithms for fault classification of series compensated transmission line. The training algorithms are least-square Levenberg-Marquardt (LSLM) and recursive least-square algorithm with forgetting factor (RLSFF). The performances of these algorithms are assessed based on their generalization capability in relating the fault current parameters with an event of fault in the transmission line. The proposed algorithm is fast in response as it utilizes postfault samples of three phase currents measured at the relaying end corresponding to half-cycle duration only. After being trained with only a small part of the generated fault data, the algorithms have been tested over a large number of fault cases with wide variation of system and fault parameters. Based on the studies carried out in this paper, it has been found that although the RLSFF algorithm is faster for training the ChNN in the fault classification application for series compensated transmission lines, the LSLM algorithm has the best accuracy in testing. The results prove that the proposed ChNN-based method is accurate, fast, easy to design, and immune to the level of compensations. Thus, it is suitable for digital relaying applications. PMID:25314714

  16. Improved artificial bee colony algorithm for wavefront sensor-less system in free space optical communication

    NASA Astrophysics Data System (ADS)

    Niu, Chaojun; Han, Xiang'e.

    2015-10-01

    Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of a honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness and few parameters to set. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and prove its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation under different turbulence conditions before and after correction. The results show that the ABC algorithm corrects much faster than the DE algorithm and has better correction ability in strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less effectively in weak turbulence.

  17. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways of, and defined new relationships between, educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem which causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel Genetic Algorithm and Simulated Annealing Algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are presented to show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.

  18. Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong

    2015-12-01

    In distributed optical fiber sensing systems based on Brillouin scattering, strain and temperature are the main measured parameters, which can be obtained by analyzing the Brillouin center frequency shift. A novel algorithm that combines the cuckoo search algorithm (CS) with an improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features for different linear weight ratios, linewidth combinations and SNRs. Moreover, a BOTDR temperature measurement system based on an electro-optical frequency shift was set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show that there is a good linear relationship between the Brillouin center frequency shift and temperature changes.
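    The Brillouin gain spectrum is often modeled as a (pseudo-)Lorentzian whose center frequency encodes temperature or strain, so the estimation task is a nonlinear curve fit. The sketch below runs plain cuckoo search with Mantegna Levy flights on a synthetic Lorentzian; it omits the paper's IDE hybridization, and the line shape, bounds, and noise level are invented stand-ins.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(2)

def lorentzian(f, fb, dw, g0):
    # Idealized Brillouin gain spectrum (a pure Lorentzian; the paper also
    # treats mixed Lorentzian/Gaussian line shapes).
    return g0 * (dw / 2) ** 2 / ((f - fb) ** 2 + (dw / 2) ** 2)

f = np.linspace(10.6e9, 11.0e9, 200)                  # scan frequencies (Hz)
meas = lorentzian(f, 10.8e9, 40e6, 1.0) + 0.02 * rng.standard_normal(f.size)

def cost(p):                                          # p = (fb, linewidth, gain)
    return np.sum((lorentzian(f, *p) - meas) ** 2)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-flight step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

lo = np.array([10.6e9, 10e6, 0.1])
hi = np.array([11.0e9, 100e6, 2.0])
nests = rng.uniform(lo, hi, (15, 3))
costs = np.array([cost(n) for n in nests])
for _ in range(400):
    best = nests[np.argmin(costs)].copy()
    for i in range(len(nests)):                       # Levy-flight moves
        cand = np.clip(nests[i] + 0.01 * levy(3) * (nests[i] - best), lo, hi)
        j = rng.integers(len(nests))                  # replace a random worse nest
        if cost(cand) < costs[j]:
            nests[j], costs[j] = cand, cost(cand)
    for i in np.flatnonzero(rng.uniform(size=len(nests)) < 0.25):
        a, b = rng.choice(len(nests), 2, replace=False)   # abandon fraction pa
        nests[i] = np.clip(nests[i] + rng.random() * (nests[a] - nests[b]), lo, hi)
        costs[i] = cost(nests[i])

print(nests[np.argmin(costs)][0])                     # estimated Brillouin shift (Hz)
```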

  19. An improved coarse-grained parallel algorithm for computational acceleration of ordinary Kriging interpolation

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong

    2015-05-01

    Heavy computation limits the use of Kriging interpolation methods in many real-time applications, especially with ever-increasing problem sizes. Many researchers have realized that parallel processing techniques are critical to fully exploit computational resources and feasibly solve computation-intensive problems like Kriging. Much research has addressed the parallelization of the traditional approach to Kriging, but this computation-intensive procedure may not be suitable for high-resolution interpolation of spatial data. On the basis of a more effective serial approach, we propose an improved coarse-grained parallel algorithm to accelerate ordinary Kriging interpolation. In particular, the interpolation task of each unobserved point is considered a basic parallel unit. To reduce time complexity and memory consumption, the large right-hand-side matrix in the Kriging linear system is transformed and fixed at only two columns, so it is no longer directly tied to the number of unobserved points. The MPI (Message Passing Interface) model is employed to implement our parallel programs in a homogeneous distributed memory system. Experimentally, the improved parallel algorithm performs better than the traditional one in spatial interpolation of annual average precipitation in Victoria, Australia. For example, with 24 processors, the improved algorithm keeps the speed-up at 20.8 while the speed-up of the traditional algorithm only reaches 9.3. Likewise, the weak scaling efficiency of the improved algorithm is nearly 90%, while that of the traditional algorithm drops to almost 40% with 16 processors. Experimental results also demonstrate that the performance of the improved algorithm is enhanced by increasing the problem size.
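    The serial-efficiency point underlying this, namely that the ordinary-Kriging left-hand side depends only on the observed points and so can be factorized once and reused for every unobserved point, can be illustrated in a few lines. This sketch uses a hypothetical exponential covariance model and does not reproduce the paper's transformed two-column right-hand-side trick or the MPI decomposition; note, though, that each prediction is independent, which is what makes the per-point parallelization natural.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def cov(h, sill=1.0, rng_par=10.0):
    # Hypothetical exponential covariance model.
    return sill * np.exp(-h / rng_par)

def ok_predictor(obs_xy):
    # Build and factorize the ordinary-Kriging LHS once: it depends only
    # on the observed points, so it is shared by every unobserved point.
    n = len(obs_xy)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1))
    A[n, n] = 0.0                       # Lagrange-multiplier row/column
    lu = lu_factor(A)

    def predict(p, obs_z):
        b = np.ones(n + 1)
        b[:n] = cov(np.linalg.norm(obs_xy - p, axis=1))
        w = lu_solve(lu, b)             # weights plus Lagrange multiplier
        return w[:n] @ obs_z
    return predict

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, (50, 2))      # invented observation stations
z = np.sin(pts[:, 0] / 20) + rng.normal(0, 0.05, 50)
predict = ok_predictor(pts)
grid = [predict(np.array([x, y]), z)    # each call is an independent task
        for x in range(0, 100, 10) for y in range(0, 100, 10)]
```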

  20. Improving the Response of a Wheel Speed Sensor by Using a RLS Lattice Algorithm

    PubMed Central

    Hernandez, Wilmar

    2006-01-01

    Among the complete family of sensors for automotive safety, consumer and industrial applications, speed sensors stand out as one of the most important. Speed sensors are versatile enough to be used in a broad range of applications. In today's automotive industry, such sensors are used in the antilock braking system, the traction control system and the electronic stability program. Typical applications also include cam and crank shaft position/speed and wheel and turbo shaft speed measurement. In addition, they are used to control a variety of functions, including fuel injection, ignition timing in engines, and so on. However, some types of speed sensors cannot respond to very low speeds, mainly because they become more susceptible to noise when the speed of the target is low. In short, they suffer from noise and generally only work at medium to high speeds. This is one of the drawbacks of inductive (magnetic reluctance) speed sensors and is the case under study. There are other speed sensors, such as differential Hall effect sensors, that are relatively immune to interference and noise, but they cannot detect static fields, which limits their operation to speeds that give a switching frequency greater than a minimum operating frequency. This research is focused on improving the performance of a variable reluctance speed sensor placed in a car under performance tests by using a recursive least-squares (RLS) lattice algorithm. The algorithm is situated in an adaptive noise canceller and carries out an optimal estimation of the relevant signal coming from the sensor, which is buried in a broad-band noise background about whose characteristics we have little knowledge. The experimental results are satisfactory and show a significant improvement in the signal-to-noise ratio at the system output.
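    As a rough illustration of the adaptive-noise-cancelling idea, the sketch below runs a transversal RLS filter (not the lattice form used in the paper) on synthetic data: the primary channel carries a slow tone buried in filtered broad-band noise, the reference channel carries correlated noise, and the filter's a priori error is the cleaned output. The signal model, filter order, and forgetting factor are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
N, order, lam = 5000, 8, 0.995

t = np.arange(N)
signal = np.sin(2 * np.pi * 0.002 * t)               # slow "wheel-speed" tone
ref = rng.standard_normal(N)                         # reference noise channel
# Primary input: sensor output = signal + causally filtered reference noise.
primary = signal + np.convolve(ref, [0.6, -0.3, 0.1])[:N]

w = np.zeros(order)
P = 1e3 * np.eye(order)
cleaned = np.zeros(N)
for n in range(order, N):
    x = ref[n - order + 1:n + 1][::-1]               # reference tap vector
    k = P @ x / (lam + x @ P @ x)                    # RLS gain
    e = primary[n] - w @ x                           # a priori error = signal estimate
    w += k * e
    P = (P - np.outer(k, x @ P)) / lam
    cleaned[n] = e                                   # noise-cancelled output
```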

  1. An improved hurricane wind vector retrieval algorithm using SeaWinds scatterometer

    NASA Astrophysics Data System (ADS)

    Laupattarakasem, Peth

    Over the last three decades, microwave remote sensing has played a significant role in ocean surface wind measurement, and several scatterometer missions have flown in space since the early 1990s. Although they have been extremely successful in measuring ocean surface winds with high accuracy for the vast majority of marine weather conditions, conventional scatterometers unfortunately cannot measure extreme wind conditions such as hurricanes. The SeaWinds scatterometer, onboard the QuikSCAT satellite, is NASA's only operating scatterometer at present. Like its predecessors, it measures global ocean vector winds; however, for a number of reasons, the quality of the measurements in hurricanes is significantly degraded. The most pressing issues are associated with the presence of precipitation and Ku-band saturation effects, especially in extreme wind speed regimes such as tropical cyclones (hurricanes and typhoons). In this dissertation, an improved hurricane ocean vector wind retrieval approach, named Q-Winds, was developed using existing SeaWinds scatterometer data. This unique data processing algorithm uses combined SeaWinds active and passive measurements to extend the use of SeaWinds for tropical cyclones up to approximately 50 m/s (Hurricane Category-3). Results show that Q-Winds wind speeds are consistently superior to the standard SeaWinds Project Level 2B wind speeds for hurricane wind speed measurement, and Q-Winds also provides a more reliable rain flagging algorithm for quality assurance purposes. By comparison to H*Wind, Q-Winds achieves ~9% error, while L2B-12.5km exhibits wind speed saturation at ~30 m/s with an error of ~31% for high wind speeds (>40 m/s).

  2. Improve the algorithmic performance of collaborative filtering by using the interevent time distribution of human behaviors

    NASA Astrophysics Data System (ADS)

    Jia, Chun-Xiao; Liu, Run-Ran

    2015-10-01

    Recently, many scaling laws of the interevent time distribution of human behaviors have been observed, and researchers have provided some quantitative understanding of human behaviors. In this paper, we propose a modified collaborative filtering algorithm that makes use of the scaling law of human behaviors for information filtering. Extensive experimental analyses demonstrate that the accuracies on the MovieLens and Last.fm datasets can be improved greatly, compared with standard collaborative filtering. Surprisingly, further statistical analyses suggest that the present algorithm can simultaneously improve the novelty and diversity of recommendations. This work provides a credible way of achieving highly efficient information filtering.

  3. Fault location of underground distribution network based on RBF network optimized by improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhao, Min

    2013-03-01

    To solve the difficult problem of locating single-phase ground faults in coal mine underground distribution networks, a fault location method using an RBF network optimized by an improved PSO algorithm is presented, based on the mapping relationship between the wavelet packet transform modulus maxima of specific frequency bands of the transient zero-sequence current in the fault line and the fault point position. Simulation results for different transition resistances and fault distances show that the RBF network optimized by the improved PSO algorithm obtains accurate and reliable fault location results, and its fault location performance is better than that of a traditional RBF network.

  4. A Novel Optimization Technique to Improve Gas Recognition by Electronic Noses Based on the Enhanced Krill Herd Algorithm.

    PubMed

    Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan

    2016-01-01

    An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C₆H₆), toluene (C₇H₈), formaldehyde (CH₂O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish the indoor pollutant gases; two of its parameters need to be optimized, so in order to improve the performance of the SVM, that is, to achieve a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision-weighting-factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, on occasion it cannot avoid the influence of some local best solutions and thus cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues we propose an enhanced KH (EKH) to improve the global search and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach. This guarantees that the krill group is diverse in the early iterations and has good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and genetic algorithm (GA)), showing that EKH is better than the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good beginning and theoretical basis for further study of other improved krill algorithms' applications in all E-nose application areas. PMID

  5. Improved semi-analytic algorithms for finding the flux from a cylindrical source

    SciTech Connect

    Wallace, O.J.

    1992-12-31

    Hand calculation methods for radiation shielding problems continue to be useful for scoping studies, for checking the results of sophisticated computer simulations, and in teaching shielding personnel. This paper presents two algorithms which give improved results for hand calculations of the flux at a lateral detector point from a cylindrical source with an intervening slab shield parallel to the cylinder axis. The first algorithm improves the accuracy of the approximate flux formula of Ono and Tsuro so that results are always conservative and within a factor of two. The second algorithm uses the first algorithm and the principle of superposition of sources to give a new approximate method for finding the flux at a detector point outside the axial and radial extensions of a cylindrical source. A table of error ratios for this algorithm versus an exact calculation over a wide range of geometry parameters is also given. No other hand calculation method for the geometric configuration of the second algorithm is available in the literature.

  6. Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm

    PubMed Central

    2015-01-01

    Background Organ segmentation is an important step in computer-aided diagnosis and pathology detection. Accurate kidney segmentation in abdominal computed tomography (CT) sequences is an essential and crucial task for surgical planning and navigation in kidney tumor ablation. However, kidney segmentation in CT is substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. Results In this paper, a coarse-to-fine method was applied to segment the kidney from CT images, consisting of two stages: rough segmentation and refined segmentation. The rough segmentation is based on a kernel fuzzy C-means algorithm with spatial information (SKFCM) and the refined segmentation is implemented with an improved GrowCut (IGC) algorithm. The SKFCM algorithm introduces a kernel function and a spatial constraint into the fuzzy c-means clustering (FCM) algorithm. The IGC algorithm makes good use of the continuity of CT sequences in space, which allows it to automatically generate seed labels and improve the efficiency of segmentation. Experimental results on a whole dataset of abdominal CT images have shown that the proposed method is accurate and efficient. The method provides a sensitivity of 95.46% with a specificity of 99.82% and performs better than other related methods. Conclusions Our method achieves high accuracy in kidney segmentation and considerably reduces the time and labor required for contour delineation. In addition, the method can be extended to 3D segmentation directly without modification. PMID:26356850

  7. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation.

    PubMed

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA, based on quantum computing, which offers exponential acceleration for heuristic algorithms, uses quantum bits to encode the artificial fish and applies the quantum revolving gate, preying behavior, following behavior and variation of the quantum artificial fish to update the fish in the search for the optimal value. Then, we apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) to simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, IQAFSA is applied to distributed network problems, and the simulation results for a 33-bus radial distribution network system show that IQAFSA achieves the minimum power loss when compared with BAFSA, GAFSA, and QAFSA. PMID:26447713

  9. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284
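    The geometric core of the method, building a line-of-sight unit vector from the azimuth/elevation pair and rotating it from the local east-north-up frame into ECEF, is compact enough to sketch. The snippet below also shows, loosely, how per-satellite range corrections could be projected into the position domain through the geometry matrix; the receiver location, satellite geometry, and PRC values are invented, and this is not the paper's exact formulation.

```python
import numpy as np

def los_ecef(az_deg, el_deg, lat_deg, lon_deg):
    # Line-of-sight unit vector from azimuth/elevation (as reported by
    # Android's LocationManager output), rotated from the local
    # east-north-up (ENU) frame into ECEF at the receiver position.
    az, el = np.radians([az_deg, el_deg])
    lat, lon = np.radians([lat_deg, lon_deg])
    enu = np.array([np.cos(el) * np.sin(az),      # east
                    np.cos(el) * np.cos(az),      # north
                    np.sin(el)])                  # up
    R = np.array([                                # ENU -> ECEF rotation
        [-np.sin(lon), -np.sin(lat) * np.cos(lon), np.cos(lat) * np.cos(lon)],
        [ np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat) * np.sin(lon)],
        [ 0.0,          np.cos(lat),               np.sin(lat)]])
    return R @ enu

# Sketch of the projection idea: map per-satellite pseudorange corrections
# (PRC) into a position-domain correction through the geometry matrix.
los = np.array([los_ecef(az, el, 37.0, 127.0)     # invented receiver/satellites
                for az, el in [(45, 30), (135, 60), (225, 45), (315, 70)]])
G = np.hstack([-los, np.ones((4, 1))])            # geometry matrix (with clock)
prc = np.array([1.2, 0.8, 1.5, 0.9])              # hypothetical PRC values (m)
dpos = np.linalg.lstsq(G, prc, rcond=None)[0][:3] # position-domain correction
```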

  11. ULTRASONIC IMAGING USING A FLEXIBLE ARRAY: IMPROVEMENTS TO THE MAXIMUM CONTRAST AUTOFOCUS ALGORITHM

    SciTech Connect

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2009-03-03

    In previous work, we have presented the maximum contrast autofocus algorithm for estimating unknown imaging parameters, e.g., for imaging through complicated surfaces using a flexible ultrasonic array. This paper details recent improvements to the algorithm. The algorithm operates by maximizing the image contrast metric with respect to the imaging parameters. For a flexible array, the relative positions of the array elements are parameterized using a cubic spline function, and the spline control points are estimated by iterative maximization of the image contrast via simulated annealing. The resultant spline gives an estimate of the array geometry and of the profile of the surface it has conformed to, allowing the generation of a well-focused image. A pre-processing step is introduced to obtain an initial estimate of the array geometry, reducing the time taken for the algorithm to converge. Experimental results are demonstrated using a flexible array prototype.

  12. An Efficient Algorithm for Maximum Clique Problem Using Improved Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Rong Long; Tang, Zheng; Cao, Qi Ping

    The maximum clique problem is a classic graph optimization problem that is NP-hard even to approximate. For this and related reasons, it is a problem of considerable interest in theoretical computer science. The maximum clique also has several real-world applications. In this paper, an efficient algorithm for the maximum clique problem using an improved Hopfield neural network is presented. In this algorithm, the internal dynamics of the Hopfield neural network are modified to efficiently increase the exchange of information between neurons and to permit temporary increases in the energy function in order to avoid local minima. The proposed algorithm is tested on two types of random graphs and on DIMACS benchmark graphs. The simulation results show that the proposed algorithm outperforms previous work on the maximum clique problem in terms of computation time and solution quality.
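    A common Hopfield-style encoding of max clique rewards selected vertices and penalizes selected pairs that are not adjacent; the network then performs asynchronous threshold updates. The sketch below implements only that baseline greedy descent (the paper's modification, which permits temporary energy increases to escape local minima, is not reproduced), on a toy graph with invented penalty weights.

```python
import numpy as np

rng = np.random.default_rng(5)

def hopfield_clique(adj, a=1.0, b=2.0, sweeps=200):
    # Energy E = -a*sum(x) + (b/2)*sum over non-adjacent pairs x_i*x_j;
    # asynchronous updates set x_i = 1 iff the local field a - b*conflict > 0.
    n = len(adj)
    x = rng.integers(0, 2, n)
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(n):
            # Selected vertices that are NOT neighbors of i (excluding i itself).
            conflict = np.sum(x * (1 - adj[i])) - x[i]
            x_new = 1 if a - b * conflict > 0 else 0
            if x_new != x[i]:
                x[i], changed = x_new, True
        if not changed:                 # stable state reached
            break
    return np.flatnonzero(x)

# Toy graph: a 4-clique {0,1,2,3} plus a pendant vertex 4 attached to 3.
adj = np.zeros((5, 5), dtype=int)
for i in range(4):
    for j in range(4):
        if i != j:
            adj[i, j] = 1
adj[3, 4] = adj[4, 3] = 1
# Prints a maximal clique; ideally [0 1 2 3], though plain greedy descent
# can settle in the smaller maximal clique {3, 4} from some initializations.
print(hopfield_clique(adj))
```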

  13. Does videothoracoscopy improve clinical outcomes when implemented as part of a pleural empyema treatment algorithm?

    PubMed Central

    Terra, Ricardo Mingarini; Waisberg, Daniel Reis; de Almeida, José Luiz Jesus; Devido, Marcela Santana; Pêgo-Fernandes, Paulo Manuel; Jatene, Fabio Biscegli

    2012-01-01

    OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This was a quality-improvement study based on a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. With the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. With the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into an old algorithm (n = 93) and new algorithm (n = 113) group and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p<0.01). The patients in the new algorithm group were older (41±1 vs. 46.3±16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0(0-3) vs. 2(0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation, even though more severely ill patients were subjected to videothoracoscopic surgery. PMID:22760892

  14. Improved nucleosome-positioning algorithm iNPS for accurate nucleosome positioning from sequencing data.

    PubMed

    Chen, Weizhong; Liu, Yi; Zhu, Shanshan; Green, Christopher D; Wei, Gang; Han, Jing-Dong Jackie

    2014-01-01

    Accurate determination of genome-wide nucleosome positioning can provide important insights into global gene regulation. Here, we describe the development of an improved nucleosome-positioning algorithm, iNPS, which achieves significantly better performance than the widely used NPS package. By determining nucleosome boundaries more precisely and merging or separating shoulder peaks based on local MNase-seq signals, iNPS can unambiguously detect 60% more nucleosomes. The detected nucleosomes display better nucleosome 'widths' and neighbouring centre-centre distance distributions, giving rise to sharper patterns and better phasing of average nucleosome profiles and higher consistency between independent data subsets. In addition to its unique advantage in classifying nucleosomes by shape to reveal their different biological properties, iNPS also achieves higher significance and lower false positive rates than previously published methods. The application of iNPS to T-cell activation data demonstrates a greater ability to facilitate detection of nucleosome repositioning, uncovering additional biological features underlying the activation process. PMID:25233085

  15. Rehearsal significantly improves immediate and delayed recall on the Rey Auditory Verbal Learning Test.

    PubMed

    Hessen, Erik

    2011-10-01

    A repeated observation during memory assessment with the Rey Auditory Verbal Learning Test (RAVLT) is that patients who spontaneously employ a memory rehearsal strategy by repeating the word list more than once achieve better scores than patients who only repeat the word list once. This observation led to concern about the ability of the standard test procedure of the RAVLT and similar tests to elicit the best possible recall scores. The purpose of the present study was to test the hypothesis that a rehearsal recall strategy of repeating the word list more than once would result in improved recall scores on the RAVLT. We report differences in outcome between standard and experimental administration on the Immediate and Delayed Recall measures of the RAVLT in 50 patients. The experimental administration resulted in significantly improved scores for all the variables employed. Additionally, it was found that patients who failed effort screening showed significantly poorer improvement on Delayed Recall compared with those who passed the effort screening. The general clear improvement, both in raw scores and T-scores, demonstrates that recall performance can be significantly influenced by the strategy of the patient or by small variations in the instructions given by the examiner. PMID:22074064

  16. Group mindfulness-based therapy significantly improves sexual desire in women.

    PubMed

    Brotto, Lori A; Basson, Rosemary

    2014-06-01

    At least a third of women across reproductive ages experience low sexual desire and impaired arousal. There is increasing evidence that mindfulness, defined as non-judgmental present moment awareness, may improve women's sexual functioning. The goal of this study was to test the effectiveness of mindfulness-based therapy, either immediately or after a 3-month waiting period, in women seeking treatment for low sexual desire and arousal. Women participated in four 90-min group sessions that included mindfulness meditation, cognitive therapy, and education. A total of 117 women were assigned to either the immediate treatment (n = 68, mean age 40.8 yrs) or delayed treatment (n = 49, mean age 42.2 yrs) group, in which women had two pre-treatment baseline assessments followed by treatment. A total of 95 women completed assessments through to the 6-month follow-up period. Compared to the delayed treatment control group, treatment significantly improved sexual desire, sexual arousal, lubrication, sexual satisfaction, and overall sexual functioning. Sex-related distress significantly decreased in both conditions, regardless of treatment, as did orgasmic difficulties and depressive symptoms. Increases in mindfulness and a reduction in depressive symptoms predicted improvements in sexual desire. Mindfulness-based group therapy significantly improved sexual desire and other indices of sexual response, and should be considered in the treatment of women's sexual dysfunction. PMID:24814472

  17. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.

  18. Combined image-processing algorithms for improved optical coherence tomography of prostate nerves

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Weldon, Thomas P.; Fiddy, Michael A.; Fried, Nathaniel M.

    2010-07-01

    Cavernous nerves course along the surface of the prostate gland and are responsible for erectile function. These nerves are at risk of injury during surgical removal of a cancerous prostate gland. In this work, a combination of segmentation, denoising, and edge detection algorithms is applied to time-domain optical coherence tomography (OCT) images of the rat prostate to improve identification of the cavernous nerves. First, OCT images of the prostate are segmented to differentiate the cavernous nerves from the prostate gland. Then, a locally adaptive denoising algorithm using a dual-tree complex wavelet transform is applied to reduce speckle noise. Finally, edge detection is used to provide deeper imaging of the prostate gland. Combined application of these three algorithms results in improved signal-to-noise ratio, imaging depth, and automatic identification of the cavernous nerves, which may be of direct benefit for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

  19. Determination of significance in Ecological Impact Assessment: Past change, current practice and future improvements

    SciTech Connect

    Briggs, Sam; Hudson, Malcolm D.

    2013-01-15

    Ecological Impact Assessment (EcIA) is an important tool for conservation and achieving sustainable development. 'Significant' impacts are those which disturb or alter the environment to a measurable degree. Significance is a crucial part of EcIA, and our understanding of the concept in practice is vital if it is to be effective as a tool. This study employed three methods to assess how the determination of significance has changed through time, what current practice is, and what would lead to future improvements. Three data streams were collected: interviews with expert stakeholders, a review of 30 Environmental Statements and a broad-scale survey of the United Kingdom Institute of Ecology and Environmental Management (IEEM) members. The approach taken in the determination of significance has become more standardised and subjectivity has become constrained through a transparent framework. This has largely been driven by a set of guidelines produced by IEEM in 2006. The significance of impacts is now more clearly justified and the accuracy with which it is determined has improved. However, there are limits to the accuracy and effectiveness of the determination of significance: the quality of baseline survey data, our scientific understanding of ecological processes and the lack of monitoring and feedback of results. These in turn are restricted by the limited resources available in consultancies. The most notable recommendations for future practice are the implementation of monitoring and the publication of feedback, the creation of a central database for baseline survey data and the streamlining of guidance. - Highlights: • The assessment of significance has changed markedly through time. • The IEEM guidelines have driven a standardisation of practice. • Currently limited by quality of baseline data and scientific understanding. • Monitoring and

  20. An improved bi-level algorithm for partitioning dynamic grid hierarchies.

    SciTech Connect

    Deiterding, Ralf (California Institute of Technology, Pasadena, CA); Johansson, Henrik (Uppsala University, Uppsala, Sweden); Steensland, Johan; Ray, Jaideep

    2006-05-01

    Structured adaptive mesh refinement methods are being widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications. But achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. But the original bi-level partitioning approach in Nature+Fable is insufficient, as for realistic applications it regards frequently occurring bi-levels as 'impossible' and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementing approach. By using a new, customized classification method, the improved algorithm switches automatically between the two approaches. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to be able to handle realistic applications and to generate fewer imbalances and a similar box count, but more communication, as compared to the native, domain-based partitioner in the SAMR framework AMROC.

  1. Integrating soil information into canopy sensor algorithms for improved corn nitrogen rate recommendation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...

  2. Improved Algorithms for Identifying Spelling and Word Order Errors in Student Responses.

    ERIC Educational Resources Information Center

    Hart, Robert S.

    The report describes improved algorithms within a computer program for identifying spelling and word order errors in student responses. A "markup analysis" compares a student's response string to an author-specified model string and generates a graphical error markup that indicates spelling, capitalization, and accent errors, extra or missing…

  3. An improved bi-level algorithm for partitioning dynamic structured grid hierarchies.

    SciTech Connect

    Deiterding, Ralf; Steensland, Johan; Ray, Jaideep

    2006-02-01

    Structured adaptive mesh refinement methods are being widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications. But achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. But the original bi-level partitioning approach in Nature+Fable is insufficient, as for realistic applications it regards frequently occurring bi-levels as 'impossible' and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementing approach. By using a new, customized classification method, the improved algorithm switches automatically between the two approaches. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to be able to handle realistic applications and to generate fewer imbalances and a similar box count, but more communication, as compared to the native, domain-based partitioner in the SAMR framework AMROC.

  4. Communication: Proper treatment of classically forbidden electronic transitions significantly improves detailed balance in surface hopping.

    PubMed

    Sifain, Andrew E; Wang, Linjun; Prezhdo, Oleg V

    2016-06-01

    Surface hopping is the most popular method for nonadiabatic molecular dynamics. Many have reported that it does not rigorously attain detailed balance at thermal equilibrium, but does so approximately. We show that convergence to the Boltzmann populations is significantly improved when the nuclear velocity is reversed after a classically forbidden hop. The proposed prescription significantly reduces the total number of classically forbidden hops encountered along a trajectory, suggesting that some randomization in nuclear velocity is needed when classically forbidden hops constitute a large fraction of attempted hops. Our results are verified computationally using two- and three-level quantum subsystems, coupled to a classical bath undergoing Langevin dynamics. PMID:27276938
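    The prescription itself is simple to state in code: test whether the kinetic energy along the nonadiabatic coupling direction can pay for the electronic energy gap; if yes, rescale that velocity component, and if not (a classically forbidden hop), reverse it. The sketch below assumes a single scalar mass and a known coupling vector, a simplification of a real multi-atom implementation.

```python
import numpy as np

def attempt_hop(v, mass, d_vec, de):
    # v: nuclear velocity vector; d_vec: nonadiabatic coupling direction;
    # de = E_new - E_old. A sketch assuming one particle with scalar mass.
    d = d_vec / np.linalg.norm(d_vec)
    v_par = np.dot(v, d)                     # velocity along the coupling vector
    ke_par = 0.5 * mass * v_par ** 2
    if ke_par >= de:
        # Allowed hop: rescale the parallel component to conserve energy.
        v_new = np.sign(v_par) * np.sqrt(2.0 * (ke_par - de) / mass)
        return v + (v_new - v_par) * d, True
    # Classically forbidden (frustrated) hop: reverse the parallel component,
    # the prescription reported to improve detailed balance.
    return v - 2.0 * v_par * d, False
```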

  5. Significantly Improved Mechanical Properties of Bi-Sn Solder Alloys by Ag-Doping

    NASA Astrophysics Data System (ADS)

    McCormack, M.; Chen, H. S.; Kammlott, G. W.; Jin, S.

    1997-08-01

    The addition of small amounts of Ag (less than ~0.5 wt.%) is found to significantly improve the ductility of the binary Bi-Sn eutectic solder. The ductility improvement, more than a threefold increase in tensile elongation, is observed even at a relatively high strain rate (0.01 s⁻¹). As the Bi-Sn binary eutectic alloy tends to fail catastrophically by brittle fracture at high strain rates, the reduced strain-rate sensitivity in the Ag-containing alloy is beneficial for improving solder reliability on sudden impacting as might be encountered during device assembly, shipping, or thermal shock/cycling. The observed increase in alloy ductility by Ag additions is attributed to a substantial refinement of the solidification microstructure.

  6. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoqian; Guo, Qinghua; Su, Yanjun; Xue, Baolin

    2016-07-01

    Filtering of light detection and ranging (LiDAR) data into ground and non-ground points is a fundamental step in processing raw airborne LiDAR data. This paper proposes an improved progressive triangulated irregular network (TIN) densification (IPTD) filtering algorithm that can cope with a variety of forested landscapes, particularly topographically and environmentally complex regions. The IPTD filtering algorithm consists of three steps: (1) acquiring potential ground seed points using the morphological method; (2) obtaining accurate ground seed points; and (3) building a TIN-based model and iteratively densifying the TIN. The IPTD filtering algorithm was tested in 15 forested sites with various terrains (i.e., elevation and slope) and vegetation conditions (i.e., canopy cover and tree height), and was compared with seven other commonly used filtering algorithms (including morphology-based, slope-based, and interpolation-based filtering algorithms). Results show that the IPTD achieves the highest filtering accuracy for nine of the 15 sites. In general, it outperforms the other filtering algorithms, yielding the lowest average total error of 3.15% and the highest average kappa coefficient of 89.53%.
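    The progressive-TIN-densification step that IPTD builds on can be sketched briefly: seed the ground with the lowest point per coarse grid cell, triangulate, and iteratively accept points lying close to the current TIN facets. The version below is deliberately minimal, keeping only a distance-to-facet test (classic PTD also tests angles, and the paper adds morphological seed selection and further refinements); cell size and thresholds are invented.

```python
import numpy as np
from scipy.spatial import Delaunay

def ptd_filter(pts, cell=20.0, dist_thresh=0.5, max_iter=5):
    # pts: (n, 3) LiDAR points. Step 1: seed ground with the lowest point
    # in each coarse grid cell (a crude stand-in for the paper's
    # morphological seed selection).
    ij = np.floor(pts[:, :2] / cell).astype(int)
    lowest = {}
    for idx, key in enumerate(map(tuple, ij)):
        if key not in lowest or pts[idx, 2] < pts[lowest[key], 2]:
            lowest[key] = idx
    ground = set(lowest.values())
    # Step 2: iteratively densify the TIN with points near its facets.
    for _ in range(max_iter):
        g = np.fromiter(ground, int)
        tri = Delaunay(pts[g, :2])
        added = False
        for idx in range(len(pts)):
            if idx in ground:
                continue
            s = tri.find_simplex(pts[idx, :2][None])[0]
            if s < 0:
                continue                      # outside the current TIN
            v = pts[g[tri.simplices[s]]]      # vertices of containing facet
            n = np.cross(v[1] - v[0], v[2] - v[0])
            n = n / np.linalg.norm(n)
            if abs(np.dot(pts[idx] - v[0], n)) < dist_thresh:
                ground.add(idx)
                added = True
        if not added:
            break
    return np.fromiter(ground, int)
```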

  7. Combining spatial and spectral information to improve crop/weed discrimination algorithms

    NASA Astrophysics Data System (ADS)

    Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.

    2012-01-01

    Reduction of herbicide spraying is an important key to improving weed management both environmentally and economically. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information can detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both algorithms. The classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images derived from the SimAField model was combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method in this paper, improving weed detection from 86% to more than 95%.

  8. Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources

    PubMed Central

    Guimarães, Dayan A.; de Souza, Rausley A. A.

    2014-01-01

    We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of sources of signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. Such norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results unveil that the so-called norm-based improved MDL (iMDL) algorithm can achieve performances that are better than those achieved by the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently-proposed estimator based on the random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and the RMT-based estimator in some situations. PMID:25330050
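    For context, the classic MDL estimator that the iMDL algorithm takes as its input can be written directly from the sample-covariance eigenvalues; the norm-based post-processing described in the abstract is not reproduced here, and the toy array scenario below is invented.

```python
import numpy as np

def mdl_num_sources(eigvals, n_snapshots):
    # Classic MDL estimator from the eigenvalues of the sample covariance
    # matrix: for each candidate k, compare the geometric and arithmetic
    # means of the smallest p-k eigenvalues, plus a complexity penalty.
    lam = np.sort(eigvals)[::-1]
    p = len(lam)
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))
        ari = np.mean(tail)
        mdl[k] = (-n_snapshots * (p - k) * np.log(geo / ari)
                  + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))

# Toy check: 2 sources impinging on a 6-sensor array, 500 snapshots.
rng = np.random.default_rng(6)
p, N = 6, 500
A = rng.standard_normal((p, 2))          # invented mixing matrix
S = rng.standard_normal((2, N))          # source signals
X = A @ S + 0.1 * rng.standard_normal((p, N))
eig = np.linalg.eigvalsh(X @ X.T / N)
print(mdl_num_sources(eig, N))           # expected: 2
```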

  9. An improved algorithm for the automatic detection and characterization of slow eye movements.

    PubMed

    Cona, Filippo; Pizza, Fabio; Provini, Federica; Magosso, Elisa

    2014-07-01

    Slow eye movements (SEMs) are typical of drowsy wakefulness and light sleep. SEMs still lack a systematic physical characterization. We present a new algorithm, which substantially improves our previous one, for the automatic detection of SEMs from the electro-oculogram (EOG) and the extraction of SEM physical parameters. The algorithm utilizes discrete wavelet decomposition of the EOG to implement a Bayes classifier that identifies intervals of slow ocular activity; each slow activity interval is segmented into single SEMs via a template matching method. Parameters of amplitude, duration, and velocity are automatically extracted from each detected SEM. The algorithm was trained and validated on sleep onsets and offsets of 20 EOG recordings visually inspected by an expert. Performances were assessed in terms of correctly identified slow activity epochs (sensitivity: 85.12%; specificity: 82.81%), correctly segmented single SEMs (89.08%), and time misalignment (0.49 s) between the automatically and visually identified SEMs. The algorithm proved reliable even in whole sleep (sensitivity: 83.40%; specificity: 72.08% in identifying slow activity epochs; correctly segmented SEMs: 93.24%; time misalignment: 0.49 s). The algorithm, being able to objectively characterize single SEMs, may be a valuable tool to improve knowledge of normal and pathological sleep. PMID:24768562

  10. Improved mean shift algorithm based on a dual patterns merging Robinson guard filter

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Chen, Qian; Miao, Zhuang; Zhao, Tie-kun; Chen, Hai-xin

    2013-09-01

    Mean shift, which is widely used in many target tracking systems, is a very effective tracking algorithm, but the traditional mean shift tracking algorithm is of limited use for small infrared targets. In infrared prewarning and tracking systems, the traditional mean shift tracking algorithm cannot achieve accurate tracking results because the target is weak and submerged in background noise. In this paper, a composite mean shift algorithm is therefore put forward. In this algorithm, after background suppression and segmentation, noise is first suppressed by a special Robinson guard filter. This paper adopts a dual-pattern-merging Robinson guard filter, which differs from the traditional Robinson guard filter. Exploiting a point target's anisotropic singularity in space, this filter subdivides the directions and detects singularities accurately in different directions to obtain a better result. Its improvement is that it simultaneously adopts a horizontal/vertical window and a diagonal window, each with a guard belt width of two, to increase the probability of point target detection. The filter processes the two direction sets separately and merges the results, which better preserves the details of the target while also strengthening background suppression as much as possible and reducing the false alarm rate, so that the system achieves good detection performance. After filtering, an image in which the point target and the background are distinguished is acquired, and this image is then used for target tracking in the mean shift algorithm. Experimental results show that this improved mean shift algorithm reduces the probability of prewarning failure and tracks small infrared targets steadily and accurately.

  11. Comparison of improved operating parameters of five different wavelength LEDs for significantly brighter illumination

    NASA Astrophysics Data System (ADS)

    Mueller, Eduard K.; Lee, Susanne M.; Van de Workeen, Brian C.; Mueller, Otward M.

    2001-05-01

    Although light-emitting diodes exhibit much higher efficiencies and greatly reduced power consumption compared to incandescent light sources, the use of LEDs in lighting applications is limited by their smaller size and subsequently lower light output. However, it has been found that these parameters can be increased significantly by cooling the diodes to cryogenic temperatures. This may make their use feasible for several applications requiring more efficient and brighter illumination for much less cost. In this paper, we compare the temperature-dependent behavior of five commercially available LEDs of different wavelengths down to liquid nitrogen temperatures. It was found that three AlInGaP diodes (red, yellow, and green) demonstrated significant operating improvements. The performance of InGaN-based blue LEDs declined at low temperatures, and because most white LEDs are simply blue LEDs coated with YAG, these exhibited similar behavior. However, the three AlInGaP LEDs demonstrated at least an order of magnitude improvement in illuminance, absolute intensity, and maximum operating current. The green LEDs showed the largest improvement factors, while the yellow LEDs produced the brightest illumination at low temperatures. The emissions of all five LEDs shifted to shorter wavelengths at low temperatures. This is significant in terms of lighting applications since the low-temperature AlInGaP diodes emitted more visible spectra.

  12. Improved Surface and Tropospheric Temperatures Determined Using Only Shortwave Channels: The AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2011-01-01

    The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm⁻¹ (15.38 micrometers) - 2665 cm⁻¹ (3.752 micrometers). These spectral regions contain significant absorption features from two CO2 absorption bands, the 15 micrometer (longwave) CO2 band and the 4.3 micrometer (shortwave) CO2 absorption band. There are also two atmospheric window regions, the 12 micrometer - 8 micrometer (longwave) window and the 4.17 micrometer - 3.75 micrometer (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed using primarily observations in the longwave window and CO2 absorption regions. According to cloud clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover conditions if one uses observations in longwave channels to determine coefficients which generate cloud cleared radiances R̂ᵢ for all channels, and uses R̂ᵢ only from shortwave channels in the determination of surface and atmospheric temperatures. This procedure is now being used in the AIRS Version-6 Retrieval Algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.

  13. Bioinspired Evolutionary Algorithm Based for Improving Network Coverage in Wireless Sensor Networks

    PubMed Central

    Abbasi, Mohammadjavad; Bin Abd Latiff, Muhammad Shafie

    2014-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes, each of which is able to monitor a physical area and send the collected information to the base station for further analysis. A key issue in WSNs is the detection and coverage of the target area, which is typically provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes a number of scenarios in which sensor node movement is applied to improve network coverage based on a bioinspired evolutionary algorithm, and explains the concerns and objectives of controlling sensor node coverage. We discuss area coverage and target detection models based on evolutionary algorithms. PMID:24693247

  14. An Improved Performance Frequency Estimation Algorithm for Passive Wireless SAW Resonant Sensors

    PubMed Central

    Liu, Boquan; Zhang, Chenrui; Ji, Xiaojun; Chen, Jing; Han, Tao

    2014-01-01

    Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. The traditional SAW resonant sensor system, however, requires a Fourier transform (FT), which has limited resolution and decreases the accuracy. In order to improve the accuracy and resolution of the measurement, a singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of an undamped and a damped sinusoid at the same frequency. Compared with the FT algorithm, the improved accuracy and resolution of the method are validated in the self-developed wireless SAW resonant sensor system. PMID:25429410
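    One generic way to beat the FT resolution limit with an SVD is a subspace (matrix-pencil style) estimate: stack the response into a Hankel matrix and read the frequency from the shift invariance of its dominant singular vectors. The sketch below applies that idea to a synthetic undamped-plus-damped tone of the kind the abstract describes; the sampling rate, resonance, and damping are invented, and the paper's exact SVD algorithm may differ.

```python
import numpy as np

def svd_freq(x, fs, rank=2):
    # Build a Hankel matrix whose columns are sliding windows of x, keep
    # the dominant left singular vectors, and solve for the shift operator
    # Phi with U_upper @ Phi ~= U_lower; its eigenvalue angles give 2*pi*f/fs.
    m = len(x) // 2
    H = np.lib.stride_tricks.sliding_window_view(x, m).T   # columns = windows
    U = np.linalg.svd(H, full_matrices=False)[0][:, :rank]
    Phi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]    # shift operator
    return np.mean(np.angle(np.linalg.eigvals(Phi))) * fs / (2 * np.pi)

fs = 1e6
t = np.arange(2000) / fs
f0 = 433_900.0                                 # hypothetical SAW resonance (Hz)
x = (np.exp(2j * np.pi * f0 * t)               # undamped ring component
     + np.exp(2j * np.pi * f0 * t - 2e3 * t))  # damped component, same frequency
x += 0.02 * np.random.default_rng(7).standard_normal(t.size)
print(svd_freq(x, fs))                         # close to f0, finer than the FFT bin
```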

  15. An improved algorithm for retrieving chlorophyll-a from the Yellow River Estuary using MODIS imagery.

    PubMed

    Chen, Jun; Quan, Wenting

    2013-03-01

    In this study, an improved Moderate-Resolution Imaging Spectroradiometer (MODIS) ocean chlorophyll-a (chla) 3 model (IOC3M) algorithm was developed as a substitute for the MODIS global chla concentration estimation algorithm, OC3M, to estimate chla concentrations in waters with high suspended sediment concentrations, such as the Yellow River Estuary, China. The IOC3M algorithm uses [Formula: see text] in place of the switching two-band ratio max[Rrs(443 nm), Rrs(488 nm)]/Rrs(551 nm) of the OC3M algorithm. In the IOC3M algorithm, the absorption coefficient of chla can be isolated as long as reasonable bands are selected. The performance of IOC3M and OC3M was calibrated and validated using a bio-optical data set composed of spectral upwelling radiance measurements and chla concentrations collected during three independent cruises in the Yellow River Estuary in September 2009. It was found that the optimal bands of the IOC3M algorithm were λ1 = 443 nm, λ2 = 748 nm, λ3 = 551 nm, and λ4 = 870 nm. By comparison, the IOC3M algorithm produces superior performance to the OC3M algorithm. Using the IOC3M algorithm to estimate chla concentrations in the Yellow River Estuary reduces uncertainty by 1.03 mg/m³ relative to the OC3M algorithm. Additionally, the chla concentration estimated from MODIS data reveals that more than 90% of the water in the Yellow River Estuary has a chla concentration lower than 5.0 mg/m³. The average chla concentration is close to the in situ measurements. Although the case study presented herein is unique, the modeling procedures employed by the IOC3M algorithm can be useful in remote sensing for estimating the chla concentrations of similar aquatic environments. PMID:22707149

  16. An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.

    2000-01-01

    A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum-water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target, where diffusion out of the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.

  17. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile ad hoc networks play an increasingly important part in disaster relief, military battlefields and scientific exploration. However, routing in such networks remains difficult because of their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It designs in detail the optimal-route calculation used by the protocol and the transmission mechanism for communication packets. By adding QoS constraints to the route calculation performed by the cuckoo search (CS) algorithm, the routes found conform to the specified bandwidth and delay requirements, and a balance is achieved among computational cost, bandwidth and delay. NS2 simulations of the protocol in three scenarios validate the feasibility and effectiveness of CSAODV. The results show that the CSAODV routing protocol adapts to changes in network topology better than AODV, effectively improving the packet delivery fraction, reducing the transmission delay of the network, reducing the extra control-message burden on the network, and improving routing efficiency.
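
    For background, here is a minimal cuckoo-search sketch in the standard Yang-Deb form, minimizing a generic cost function; the QoS-constrained route encoding of CSAODV is not reproduced, and every constant below is an illustrative choice.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    # Mantegna's algorithm for Levy-flight step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, pa=0.25, iters=300, lo=-5.0, hi=5.0):
    rng = np.random.default_rng(0)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([cost(x) for x in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):                      # Levy flight around the best nest
            new = np.clip(nests[i] + 0.01 * levy_step(dim, rng) * (nests[i] - best),
                          lo, hi)
            j = rng.integers(n_nests)                 # replace a random nest if better
            if cost(new) < fit[j]:
                nests[j], fit[j] = new, cost(new)
        for i in np.flatnonzero(rng.random(n_nests) < pa):
            nests[i] = rng.uniform(lo, hi, dim)       # abandon a fraction pa of nests
            fit[i] = cost(nests[i])
    return nests[fit.argmin()], fit.min()

# usage: a QoS-style cost would penalize bandwidth and delay violations here
best, val = cuckoo_search(lambda x: np.sum(x ** 2), dim=4)
```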

  18. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    PubMed

    Sethi, Gaurav; Saini, B S

    2015-12-01

    This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring, which leads to the least-fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the search for the optimal solution. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring; a sketch of this pairing scheme is given below. Accordingly, every least-fit parent chromosome is combined with a high-fitness parent to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases. PMID:26499377
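
    A minimal sketch of that pairing scheme, assuming binary chromosomes stored as 0/1 lists and an even population size; the selection, crossover and mutation operators here are illustrative, not the paper's exact ones.

```python
import random

def evolve(pop, fitness, cx_rate=0.9, mut_rate=0.05):
    """One generation: cross each low-fitness parent with a high-fitness one."""
    ranked = sorted(pop, key=fitness, reverse=True)        # best first
    n = len(ranked)
    children = []
    for i in range(n // 2):
        strong, weak = ranked[i], ranked[n - 1 - i]        # pair rank i with rank n-1-i
        if random.random() < cx_rate:
            cut = random.randrange(1, len(strong))         # one-point crossover
            children += [strong[:cut] + weak[cut:], weak[:cut] + strong[cut:]]
        else:
            children += [strong[:], weak[:]]
    children = [[g ^ 1 if random.random() < mut_rate else g for g in c]
                for c in children]                         # bit-flip mutation
    children[0] = max(pop, key=fitness)[:]                 # modified elitism: keep best
    return children
```

    Sorting once and pairing rank i with rank n-1-i guarantees that every weak chromosome is crossed with a strong one, which is the stated mechanism for preventing weak genes from pooling at the bottom of the population.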

  19. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology for processing C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of the current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated against a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time. PMID:19587415
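
    To make the R-Z step concrete, the sketch below inverts a power-law R-Z relation Z = a·R^b to convert corrected reflectivity to rain rate; the Marshall-Palmer defaults a = 200, b = 1.6 stand in for the event-specific coefficients derived in the paper.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from corrected reflectivity in dBZ via Z = a * R**b."""
    z = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> linear Z (mm^6/m^3)
    return (z / a) ** (1.0 / b)            # invert the power law
```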

  20. PXD101 significantly improves nuclear reprogramming and the in vitro developmental competence of porcine SCNT embryos

    SciTech Connect

    Jin, Jun-Xue; Kang, Jin-Dan; Li, Suo; Jin, Long; Zhu, Hai-Ying; Guo, Qing; Gao, Qing-Shan; Yan, Chang-Guo; Yin, Xi-Jun

    2015-01-02

    Highlights: • First exploration of the effects of PXD101 on the in vitro development of SCNT embryos. • Treatment with 0.5 μM PXD101 for 24 h improved the development of porcine SCNT embryos. • The level of AcH3K9 was significantly higher than in the control group at early stages. - Abstract: In this study, we investigated the effects of the histone deacetylase inhibitor PXD101 (belinostat) on the preimplantation development of porcine somatic cell nuclear transfer (SCNT) embryos and their expression of the epigenetic marker histone H3 acetylated at lysine 9 (AcH3K9). We compared the in vitro developmental competence of SCNT embryos treated with various concentrations of PXD101 for 24 h. Treatment with 0.5 μM PXD101 significantly increased the proportion of SCNT embryos that reached the blastocyst stage, in comparison to the control group (23.3% vs. 11.5%, P < 0.05). We tested the in vitro developmental competence of SCNT embryos treated with 0.5 μM PXD101 for various amounts of time following activation. Treatment for 24 h significantly improved the development of porcine SCNT embryos, with a significantly higher proportion of embryos reaching the blastocyst stage in comparison to the control group (25.7% vs. 10.6%, P < 0.05). PXD101-treated SCNT embryos were transferred into two surrogate sows, one of which became pregnant and four fetuses developed. PXD101 treatment significantly increased the fluorescence intensity of immunostaining for AcH3K9 in embryos at the pseudo-pronuclear and 2-cell stages. At these stages, the fluorescence intensities of immunostaining for AcH3K9 were significantly higher in PXD101-treated embryos than in control untreated embryos. In conclusion, this study demonstrates that PXD101 can significantly improve the in vitro and in vivo developmental competence of porcine SCNT embryos and can enhance their nuclear reprogramming.

  1. Active Hemovigilance Significantly Improves Reporting of Acute Non-infectious Adverse Reactions to Blood Transfusion.

    PubMed

    Agnihotri, Naveen; Agnihotri, Ajju

    2016-09-01

    One of the key purposes of a hemovigilance program is to improve reporting of transfusion related adverse events and subsequent data-driven improvement in blood transfusion (BT) practices. We conducted a study over 3 years to assess the impact of healthcare worker training and an active feedback programme on reporting of adverse reactions to BTs. All hospitalized patients who required a BT were included in the study. Healthcare workers involved in BT to patients were sensitized and trained in adverse reaction reporting by conducting training sessions and meetings. All the transfused patients were 'actively' monitored for any acute adverse reaction by using a uniquely coded blood issue form. A total of 18,914 blood components transfused to 5785 different patients resulted in 61 adverse reaction episodes. This incidence of 0.32 % in our study was found to be significantly higher (p < 0.005) than that reported from the same region in the past. Red blood cell units were the most frequently transfused component and thus most commonly involved in an adverse reaction (42.6 %), however apheresis platelets had the highest chance of reaction per unit transfused (0.66 %). There was no mortality associated with the BT during the study period. An active surveillance program significantly improves reporting and management of adverse reactions to BTs. PMID:27429527

  2. Storage of human pancreatic digest in University of Wisconsin solution significantly improves subsequent islet purification.

    PubMed

    Robertson, G S; Chadwick, D; Contractor, H; Rose, S; Chamberlain, R; Clayton, H; Bell, P R; James, R F; London, N J

    1992-09-01

    Density-gradient purification of human pancreatic islets from the collagenase-digested pancreas relies on the exocrine tissue being denser than the islets. Cold storage of the pancreas before and after digestion causes cell swelling, which can decrease the density of pancreatic exocrine tissue and adversely affect subsequent purification. Using 14 human pancreata (seven perfused in situ with hyperosmolar citrate (HOC) and seven with University of Wisconsin solution (UW)), it is shown that storage of the pancreatic digest in UW significantly increases the density of pancreatic exocrine tissue compared with storage in minimal essential medium (MEM) (P = 0.009). This results in an improvement in islet purity (P = 0.036) for HOC- but not UW-perfused pancreata. Storage in UW for 1 h not only prevented the deterioration that occurred in MEM, but resulted in an improvement in islet purity for five of the seven HOC-perfused pancreata. Most pancreata in the UK are perfused with HOC, but storage of the digest in UW results in significantly better islet purity and, when islets cannot be purified immediately, a period of storage will often improve separation and allow islets to be purified. PMID:1422750

  3. Improved particle swarm optimization algorithm for android medical care IOT using modified parameters.

    PubMed

    Sung, Wen-Tsai; Chiang, Yen-Chun

    2012-12-01

    This study examines a wireless sensor network with real-time remote identification using the Android healthcare Internet of Things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the measurement precision of physiological multi-sensor data fusion in the Internet of Things (IOT) system. The improved PSO (IPSO) includes an inertia weight factor design and a shrinkage (constriction) factor adjustment that together improve the data-fusion performance of the PSO algorithm. The Android platform is employed to build multi-physiological-signal processing and timely medical-care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to receive timely medical-care network services. PMID:22492176
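
    Below is a minimal PSO sketch showing the two ingredients the abstract names, an inertia weight w and constriction-scaled acceleration constants; the values are the common Clerc-Kennedy textbook choices, not necessarily those of the IPSO.

```python
import numpy as np

def pso(cost, dim, n=30, iters=200, w=0.729, c1=1.494, c2=1.494, lo=-5.0, hi=5.0):
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([cost(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # w is the inertia weight; c1, c2 are already scaled by the
        # Clerc-Kennedy constriction factor (0.729 * 2.05 ~= 1.494)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# usage: fuse noisy sensor readings by minimizing squared disagreement (toy cost)
best, err = pso(lambda p: np.sum((p - np.array([1.0, 2.0, 3.0])) ** 2), dim=3)
```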

  4. An improved image matching algorithm based on SURF and Delaunay TIN

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-ming; Cheng, Peng-gen; Chen, Xiao-yong; Zheng, Shou-zhu

    2015-12-01

    Image matching is one of the key technologies in image processing. In order to increase its efficiency and precision, a new image-matching method based on improved SURF and a Delaunay TIN is proposed in this paper. On top of the original SURF algorithm, several constraints are added: a color invariant model, a Delaunay TIN with a triangle similarity function, and a photographic invariant. With the proposed algorithm, the image color information is effectively retained and the rate of erroneous feature matches is largely reduced. The experimental results show that the proposed method achieves higher matching speed, a more uniform distribution of the feature points to be matched, and a higher correct-matching rate than the original algorithm.

  5. An Improved Clustering Algorithm of Tunnel Monitoring Data for Cloud Computing

    PubMed Central

    Zhong, Luo; Tang, KunHao; Li, Lin; Yang, Guang; Ye, JingJing

    2014-01-01

    With the rapid development of urban construction, the number of urban tunnels is increasing and the data they produce are becoming more and more complex. As a result, traditional clustering algorithms cannot handle the mass of tunnel monitoring data. To solve this problem, an improved parallel clustering algorithm based on k-means is proposed. It is a clustering algorithm that processes the data with MapReduce within cloud computing. It not only handles mass data but is also more efficient. Moreover, it is able to compute the average dissimilarity degree of each cluster in order to clean out abnormal data. PMID:24982971
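
    The sketch below runs the map/reduce split of one k-means pass in-process, purely to illustrate the parallelization idea; a real deployment would distribute the map and reduce stages across a cloud MapReduce framework, and the outlier cleaning is only hinted at.

```python
import numpy as np

def kmeans_step(points, centroids):
    # map: each point emits (nearest-centroid id, point)
    keyed = [(int(np.argmin(np.linalg.norm(centroids - p, axis=1))), p)
             for p in points]
    new_centroids = centroids.copy()
    avg_dissim = np.zeros(len(centroids))
    # reduce: per cluster id, average the member points and their distances
    for k in range(len(centroids)):
        members = np.array([p for key, p in keyed if key == k])
        if len(members):
            new_centroids[k] = members.mean(axis=0)
            avg_dissim[k] = np.linalg.norm(members - new_centroids[k], axis=1).mean()
    return new_centroids, avg_dissim
```

    Iterating kmeans_step until the centroids stop moving reproduces ordinary k-means; avg_dissim is the per-cluster average dissimilarity the abstract mentions for cleaning abnormal data.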

  6. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms.

    PubMed

    Wang, C L

    2016-05-01

    Three high-resolution positioning methods based on the FluoroBancroft linear-algebraic method [S. B. Andersson, Opt. Express 16, 18714 (2008)] are proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function, the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. After taking the super-Poissonian photon noise into account, the proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than the 0.29 pixels of a traditional maximum-photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. These improvements will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis. PMID:27250410

  7. Retrieval of particle size distribution from aerosol optical thickness using an improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mao, Jiandong; Li, Jinxuan

    2015-10-01

    Particle size distribution is essential for describing the direct and indirect radiative effects of aerosols. Because the relationship between the aerosol size distribution and aerosol optical thickness (AOT) is an ill-posed Fredholm integral equation of the first kind, the traditional techniques for determining such size distributions, such as the Phillips-Twomey regularization method, are often ambiguous. Here, we use an approach based on an improved particle swarm optimization algorithm (IPSO) to retrieve the aerosol size distribution. Using AOT data measured by a CE318 sun photometer in Yinchuan, we compared the aerosol size distributions retrieved using a simple genetic algorithm, a basic particle swarm optimization algorithm and the IPSO. Aerosol size distributions for different weather conditions were analyzed, including sunny, dusty and hazy conditions. Our results show that the IPSO-based inversion method retrieved aerosol size distributions under all weather conditions, showing great potential for similar size distribution inversions.

  8. Experimental verification of an interpolation algorithm for improved estimates of animal position.

    PubMed

    Schell, Chad; Jaffe, Jules S

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration. PMID:15295985
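
    As a toy illustration of the estimator described here, the sketch below scans candidate target angles and keeps the one whose modeled per-beam response best fits the measured beam amplitudes in a least-squares sense. The Gaussian beam-pattern model and all parameters are invented for the example, not taken from the paper.

```python
import numpy as np

def estimate_position(meas, beam_angles, cand_angles, beamwidth=2.0):
    """meas: measured amplitude per beam (one value per entry of beam_angles);
    returns the candidate angle minimizing the least-squares residual."""
    best, best_res = None, np.inf
    for theta in cand_angles:
        model = np.exp(-0.5 * ((beam_angles - theta) / beamwidth) ** 2)
        a = (model @ meas) / (model @ model)   # implied target-strength scale
        res = np.sum((meas - a * model) ** 2)  # least-squares residual
        if res < best_res:
            best, best_res = theta, res
    return best
```

    Fitting the scale a jointly with the position is what lets the method refine target strength as well, instead of simply taking the angle of the loudest beam.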

  9. Experimental verification of an interpolation algorithm for improved estimates of animal position

    NASA Astrophysics Data System (ADS)

    Schell, Chad; Jaffe, Jules S.

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied ``ex post facto'' to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.

  10. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
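
    For background, a generic single-atom matching pursuit loop with an attenuation-style stopping rule is sketched below; the composite and modulation dictionaries of CD-SaMP are not reproduced, and the stopping threshold is an illustrative stand-in for the paper's attenuation-coefficient condition.

```python
import numpy as np

def matching_pursuit(signal, dictionary, atten=0.05, max_iter=100):
    """dictionary: (n_atoms, n_samples) array with unit-L2-norm rows."""
    residual = np.asarray(signal, dtype=float).copy()
    recon = np.zeros_like(residual)
    last = np.linalg.norm(residual)
    for _ in range(max_iter):
        corr = dictionary @ residual               # pick the single best atom
        k = np.argmax(np.abs(corr))
        recon += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
        now = np.linalg.norm(residual)
        # stop when the residual decay attenuates below the threshold
        if last <= 1e-12 or (last - now) / last < atten:
            break
        last = now
    return recon, residual
```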

  11. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  12. Application of Innovative Hemocytometric Parameters and Algorithms for Improvement of Microcytic Anemia Discrimination.

    PubMed

    Schoorl, Margreet; Schoorl, Marianne; van Pelt, Johannes; Bartels, Piet C M

    2015-06-01

    Hemocytometric parameters like red blood cell (RBC) count, mean red blood cell volume (MCV), reticulocyte count, red blood cell distribution width (RDW-SD) and zinc protoporphyrin (ZPP) are frequently established for discrimination between iron-deficiency anemia and thalassemia in subjects with microcytic erythropoiesis. However, no single marker or combination of tests is optimal for discrimination between iron-deficiency anemia and thalassemia. This is the reason why many algorithms have been introduced. However, application of conventional algorithms only resulted in appropriate classification of 30-40% of subjects. In this mini-review the efficacy of innovative hematological parameters for detection of alterations in RBCs is considered. It refers to parameters concerning the hemoglobinization of RBCs and reticulocytes and the percentages of microcytic and hypochromic RBCs, for discrimination between subjects with iron-deficiency anemia (IDA), thalassemia, or a combination of both. A new discriminating tool including the above-mentioned parameters was developed, based on two precondition steps and discriminating algorithms. The percentage of microcytic RBCs is considered in the first precondition step. MCV, RDW-SD and RBC count are applied in the second precondition step. Subsequently, new algorithms, including conventional as well as innovative hematological parameters, were assessed for subgroups with microcytic erythropoiesis. The new algorithms for IDA discrimination yielded a sensitivity of 79%, specificity of 97%, and positive and negative predictive values of 74% and 98%, respectively. The algorithms for β-thalassemia discrimination revealed similar results (74%, 98%, 75% and 99%, respectively). We advocate that innovative algorithms, including parameters reflecting the hemoglobinization of RBCs and reticulocytes, be integrated into an easily accessible software program linked to the hematology equipment to improve the discrimination between IDA and β-thalassemia.

  13. Application of Innovative Hemocytometric Parameters and Algorithms for Improvement of Microcytic Anemia Discrimination

    PubMed Central

    Schoorl, Margreet; Schoorl, Marianne; van Pelt, Johannes; Bartels, Piet C.M.

    2015-01-01

    Hemocytometric parameters like red blood cell (RBC) count, mean red blood cell volume (MCV), reticulocyte count, red blood cell distribution width (RDW-SD) and zinc protoporphyrin (ZPP) are frequently established for discrimination between iron-deficiency anemia and thalassemia in subjects with microcytic erythropoiesis. However, no single marker or combination of tests is optimal for discrimination between iron-deficiency anemia and thalassemia. This is the reason why many algorithms have been introduced. However, application of conventional algorithms only resulted in appropriate classification of 30-40% of subjects. In this mini-review the efficacy of innovative hematological parameters for detection of alterations in RBCs is considered. It refers to parameters concerning the hemoglobinization of RBCs and reticulocytes and the percentages of microcytic and hypochromic RBCs, for discrimination between subjects with iron-deficiency anemia (IDA), thalassemia, or a combination of both. A new discriminating tool including the above-mentioned parameters was developed, based on two precondition steps and discriminating algorithms. The percentage of microcytic RBCs is considered in the first precondition step. MCV, RDW-SD and RBC count are applied in the second precondition step. Subsequently, new algorithms, including conventional as well as innovative hematological parameters, were assessed for subgroups with microcytic erythropoiesis. The new algorithms for IDA discrimination yielded a sensitivity of 79%, specificity of 97%, and positive and negative predictive values of 74% and 98%, respectively. The algorithms for β-thalassemia discrimination revealed similar results (74%, 98%, 75% and 99%, respectively). We advocate that innovative algorithms, including parameters reflecting the hemoglobinization of RBCs and reticulocytes, be integrated into an easily accessible software program linked to the hematology equipment to improve the discrimination between IDA and β-thalassemia.

  14. Improving Significant Wave Height detection for Coastal Satellite Altimetry: validation in the German Bight.

    NASA Astrophysics Data System (ADS)

    Passaro, Marcello; Benveniste, Jérôme; Cipollini, Paolo; Fenoglio-Marc, Luciana

    For more than two decades, it has been possible to map the Significant Wave Height (SWH) globally through satellite altimetry. SWH estimation is possible because the shape of an altimetric waveform, which usually presents a sharp leading edge and a slowly decaying trailing edge, depends on the sea state: in particular, the higher the sea state, the longer the rise time of the leading edge. The algorithm for SWH also depends on the width of the point target response (PTR) function, which is usually approximated by a constant value that contributes to the rise time. Particularly challenging for SWH detection are coastal data and low sea states. The first are usually flagged as unreliable due to land and calm-water interference in the altimeter footprint; the second are characterized by an extremely sharp leading edge that is consequently poorly sampled in the digitized waveform. ALES, a new algorithm for reprocessing altimetric waveforms, has recently been validated for sea surface height estimation (Passaro et al. 2014). The aim of this work is to check its validity also for SWH estimation in a particularly challenging area. The German Bight region presents both low sea states and coastal issues, and is particularly suitable for validation thanks to the extended network of buoys of the Bundesamt für Seeschifffahrt und Hydrographie (BSH). In-situ data include open-sea, offshore and coastal conditions, at the Helgoland, lighthouse Alte Weser and Westerland locations respectively. Reprocessed data from Envisat, Jason-1 and Jason-2 tracks are validated against these three buoys. The in-situ validation is applied both at the nearest point and at points along-track. The skill metric is based on bias, standard deviation, slope of the regression line, scatter index, and number of cycles with correlation larger than 90%. The same metric is applied to the altimeter data obtained by standard processing and the validation results are compared. Data are evaluated at high

  15. Surgically-Induced Weight Loss Significantly Improves Nonalcoholic Fatty Liver Disease and the Metabolic Syndrome

    PubMed Central

    Mattar, Samer G.; Velcu, Laura M.; Rabinovitz, Mordechai; Demetris, A J.; Krasinskas, A M.; Barinas-Mitchell, Emma; Eid, George M.; Ramanathan, Ramesh; Taylor, Debra S.; Schauer, Philip R.

    2005-01-01

    Objective: To evaluate the effects of surgical weight loss on fatty liver disease in severely obese patients. Summary Background Data: Nonalcoholic fatty liver disease (NAFLD), a spectrum that extends to liver fibrosis and cirrhosis, is rising at an alarming rate. This increase is occurring in conjunction with the rise of severe obesity and is probably mediated in part by metabolic syndrome (MS). Surgical weight loss operations, probably by reversing MS, have been shown to result in improvement in liver histology. Methods: Patients who underwent laparoscopic surgical weight loss operations from March 1999 through August 2004, and who agreed to have an intraoperative liver biopsy followed by at least one postoperative liver biopsy, were included. Results: There were 70 patients who were eligible. All patients underwent laparoscopic operations, the majority being laparoscopic Roux-en-Y gastric bypass. The mean excess body weight loss at time of second biopsy was 59% ± 22% and the time interval between biopsies was 15 ± 9 months. There was a reduction in prevalence of metabolic syndrome, from 70% to 14% (P < 0.001), and a marked improvement in liver steatosis (from 88% to 8%), inflammation (from 23% to 2%), and fibrosis (from 31% to 13%; all P < 0.001). Inflammation and fibrosis resolved in 37% and 20% of patients, respectively, corresponding to improvement of 82% (P < 0.001) in grade and 39% (P < 0.001) in stage of liver disease. Conclusion: Surgical weight loss results in significant improvement of liver morphology in severely obese patients. These beneficial changes may be associated with a significant reduction in the prevalence of the metabolic syndrome. PMID:16192822

  16. An improved algorithm for femoropopliteal artery centerline restoration using prior knowledge of shapes and image space data.

    PubMed

    Rakshe, Tejas; Fleischmann, Dominik; Rosenberg, Jarrett; Roos, Justus E; Straka, Matus; Napel, Sandy

    2008-07-01

    longer than 80 mm (N = 20) were then processed with the IPD algorithm, provided calcifications were found (N = 14). We used the maximum point-wise distance of an interpolated curve from the reference standard as our error metric. The IPD algorithm significantly reduced the average error of the initial PVSP from 2.76 to 1.86 mm (p < 0.01). The error was less than the clinically desirable 3 mm (smallest radius of the femoropopliteal artery) in 13 of 14 occlusions. The IPD algorithm achieved results within the range of the human readers in 11 of 14 cases. We conclude that the additional use of sparse but specific image space information, such as calcified atherosclerotic plaque, can be used to substantially improve the performance of a previously described knowledge-based method to restore the centerlines of femoropopliteal arterial occlusions. PMID:18697561

  17. Improvements to a five-phase ABS algorithm for experimental validation

    NASA Astrophysics Data System (ADS)

    Gerard, Mathieu; Pasillas-Lépine, William; de Vries, Edwin; Verhaegen, Michel

    2012-10-01

    The anti-lock braking system (ABS) is the most important active safety system for passenger cars. Unfortunately, the literature is not really precise about its description, stability and performance. This research improves a five-phase hybrid ABS control algorithm based on wheel deceleration [W. Pasillas-Lépine, Hybrid modeling and limit cycle analysis for a class of five-phase anti-lock brake algorithms, Veh. Syst. Dyn. 44 (2006), pp. 173-188] and validates it on a tyre-in-the-loop laboratory facility. Five relevant effects are modelled so that the simulation matches the reality: oscillations in measurements, wheel acceleration reconstruction, brake pressure dynamics, brake efficiency changes and tyre relaxation. The time delays in measurement and actuation have been identified as the main difficulty for the initial algorithm to work in practice. Three methods are proposed in order to deal with these delays. It is verified that the ABS limit cycles encircle the optimal braking point, without assuming any tyre parameter being a priori known. The ABS algorithm is compared with the commercial algorithm developed by Bosch.

  18. Intestinal-borne dermatoses significantly improved by oral application of Escherichia coli Nissle 1917

    PubMed Central

    Manzhalii, Elina; Hornuss, Daniel; Stremmel, Wolfgang

    2016-01-01

    AIM: To evaluate the effect of oral Escherichia coli (E. coli) Nissle application on the outcome of intestinal-borne dermatoses. METHODS: In a randomized, controlled, non-blinded prospective clinical trial 82 patients with intestinal-borne facial dermatoses characterized by an erythematous papular-pustular rash were screened. At the initiation visit 37 patients entered the experimental arm and 20 patients constituted the control arm. All 57 patients were treated with a vegetarian diet and conventional topical therapy of the dermatoses with ointments containing tetracycline, steroids and retinoids. In the experimental arm patients received a one month therapy with oral E. coli Nissle at a maintenance dose of 2 capsules daily. The experimental group was compared to a non-treatment group only receiving the diet and topical therapy. The primary outcome parameter was improvement of the dermatoses, secondary parameters included life quality and adverse events. In addition the immunological reaction profile (IgA, interleucin-8 and interferon-α) was determined. Furthermore the changes of stool consistency and the microbiota composition over the time of intervention were recorded. RESULTS: Eighty-nine percent of the patients with acne, papular-pustular rosacea and seborrhoic dermatitis responded to E. coli Nissle therapy with significant amelioration or complete recovery in contrast to 56% in the control arm (P < 0.01). Accordingly, in the E. coli Nissle treated patients life quality improved significantly (P < 0.01), and adverse events were not recorded. The clinical improvement was associated with a significant increase of IgA levels to normal values in serum as well as suppression of the proinflammatory cytokine IL-8 (P < 0.01 for both parameters). In the E. coli Nissle treated group a shift towards a protective microbiota with predominance of bifidobacteria and lactobacteria (> 107 CFU/g stool) was observed in 79% and 63% of the patients, respectively (P < 0

  19. A novel algorithm for blind deconvolution applied to the improvement of radiographic images

    NASA Astrophysics Data System (ADS)

    de Almeida, Gevaldo L.; Silvani, Maria Ines

    2013-05-01

    A novel algorithm for blind deconvolution is proposed in this work, which does not require any prior information about the image to be unfolded, but solely an assumed shape for the PSF. This algorithm, incorporating a Richardson-Lucy unfolding procedure, assesses the overall contrast of the image unfolded with an increasing PSF width w, seeking the highest value. The basic idea behind this concept is that when the spatial resolution of the image is improved, the contrast is improved too, because the pixel overlapping diminishes. Trials with several different images acquired with neutron and gamma-ray transmission radiography were carried out in order to evaluate the correctness of the proposed algorithm. It was found that for a steadily increasing w, the overall contrast increases, reaches a maximum and then decreases. The w-value yielding the highest contrast can be found after 1 to 3 iterations, and further iterations do not affect it. Images deconvolved with this value, but with a higher number of iterations, exhibit better quality than their companions deconvolved with neighboring values, thus corroborating the best w-value. Synthetic images with known resolutions return the same w-values used to degrade them, thus showing the soundness of the proposed algorithm.
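
    A minimal sketch of the scheme as described: Richardson-Lucy deconvolution under a Gaussian PSF of trial width w, keeping the w that maximizes a contrast proxy. The standard deviation is used here as a stand-in for the paper's overall-contrast measure, and all sizes and ranges are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(w, size=15):
    ax = np.arange(size) - size // 2
    g = np.exp(-0.5 * (ax / w) ** 2)
    psf = np.outer(g, g)
    return psf / psf.sum()

def richardson_lucy(img, psf, n_iter=20):
    est = np.full(img.shape, img.mean())
    mirror = psf[::-1, ::-1]                   # flipped PSF for the correction step
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same')
        est = est * fftconvolve(img / np.maximum(conv, 1e-12), mirror, mode='same')
    return est

def best_psf_width(img, widths=np.arange(0.5, 4.0, 0.25)):
    # the trial width giving the highest overall contrast (std as a proxy) wins
    scores = {w: richardson_lucy(img, gaussian_psf(w)).std() for w in widths}
    return max(scores, key=scores.get)
```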

  20. Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas

    NASA Technical Reports Server (NTRS)

    Smith, Barbara M.; Bennett, Sean

    1992-01-01

    A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.

  1. Validation and Improvement of CERES Surface Radiation Budget Algorithms: Extension of Dusty and Cloudy Scenes

    NASA Technical Reports Server (NTRS)

    Ramanathan, V.; Inamdar, Anand K.

    2005-01-01

    Our main task was to validate and improve the generation of surface long wave fluxes from the CERES TOA window channel flux measurements. We completed this task successfully for the clear sky fluxes in the presence of aerosols including dust during the first year of the project. The algorithm we developed for CERES was remarkably successful for clear sky fluxes and we have no further tasks that need to be performed past the requested termination date of December 31, 2004. We found that the information contained in the TOA fluxes was not sufficient to improve upon the current CERES algorithm for cloudy sky fluxes. Given this development and given our success in clear sky fluxes, we do not see any reason to continue our validation work beyond what we have completed. Specific details are given.

  2. Multiple R&D Projects Scheduling Optimization with Improved Particle Swarm Algorithm

    PubMed Central

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step toward winning the initiative in fierce market competition is to improve their R&D ability so as to meet the various demands of customers more promptly and at lower cost. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the improved algorithm to the resource-constrained multi-project scheduling model in a simulation experiment. The experiment demonstrates the feasibility of the model and the validity of the algorithm. PMID:25032232

  3. Asymmetric optical image encryption based on an improved amplitude-phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Quan, C.; Tay, C. J.

    2016-03-01

    We propose a new asymmetric optical image encryption scheme based on an improved amplitude-phase retrieval algorithm. Using two random phase masks that serve as public encryption keys, an iterative amplitude and phase retrieval process is employed to encode a primary image into a real-valued ciphertext. The private keys generated in the encryption process are used to perform one-way phase modulations. The decryption process is implemented optically using conventional double random phase encoding architecture. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed system. The results illustrate that the computing efficiency of the proposed method is improved and the number of iterations required is much less than that of the cryptosystem based on the Yang-Gu algorithm.
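
    For context only, the sketch below shows the classic Gerchberg-Saxton-style inner loop common to iterative amplitude-phase retrieval: alternately enforce the known amplitude in one plane and keep only the phase in the other. The paper's asymmetric scheme, its public and private keys, and the improvements over the Yang-Gu algorithm are not reproduced.

```python
import numpy as np

def amplitude_phase_retrieval(target_amp, n_iter=100):
    """Find a phase-only field whose Fourier amplitude approximates target_amp."""
    rng = np.random.default_rng(0)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))        # impose target amplitude
        field = np.exp(1j * np.angle(np.fft.ifft2(F)))   # keep phase only
    return np.angle(field)
```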

  4. An improved bundle adjustment model and algorithm with novel block matrix partition method

    NASA Astrophysics Data System (ADS)

    Xia, Zemin; Li, Zhongwei; Zhong, Kai

    2014-11-01

    Sparse bundle adjustment is widely applied in computer vision and photogrammetry. However, the existing implementation is based on a model of n 3D points projecting onto m different camera imaging planes at m positions, which cannot be applied to the common monocular, binocular or trinocular imaging systems. A novel design and implementation of a bundle adjustment algorithm is proposed in this paper, based on n 3D points projecting onto the same camera imaging plane at m positions. To improve the performance of the algorithm, a novel sparse block matrix partition method is proposed. Experiments show that the improved bundle adjustment is effective and robust, and has better tolerance to pixel-coordinate errors.

  5. An improved algorithm for McDowell's analytical model of residual stress

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoxu; Li, Bin; Xiong, Liangshan

    2014-06-01

    The analytical model for two-dimensional elastoplastic rolling/sliding contact proposed by McDowell is an important tool for predicting residual stress in rolling/sliding processes. In applying the model, a problem of low prediction precision near the surface layer of the component is found. Based on the volume constancy of plastic deformation, an improved algorithm for McDowell's model is proposed in order to improve its prediction accuracy for the surface residual stress. In the algorithm, a relationship between the three mutually perpendicular normal stresses at any point within the component is derived, and this relationship is applied to McDowell's model. Meanwhile, an unnecessary hypothesis proposed by McDowell can be eliminated, making the model more reasonable. The simulation results show that the surface residual stress predicted by the modified method is much closer to the FEM results than that predicted by McDowell's model under the same simulation conditions.
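
    For reference, the volume-constancy condition invoked here is usually written as the vanishing trace of the plastic strain increment (a standard small-strain statement, not the paper's full derivation):

```latex
% Plastic incompressibility: the three plastic normal-strain increments sum to
% zero, which is what links the three mutually perpendicular normal stresses
% through the flow rule.
\[
  \mathrm{d}\varepsilon_{xx}^{p} + \mathrm{d}\varepsilon_{yy}^{p} + \mathrm{d}\varepsilon_{zz}^{p} = 0
\]
```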

  6. An improved real-time endovascular guidewire position simulation using shortest path algorithm.

    PubMed

    Qiu, Jianpeng; Qu, Zhiyi; Qiu, Haiquan; Zhang, Xiaomin

    2016-09-01

    In this study, we propose a new graph-theoretical method to simulate guidewire paths inside the carotid artery. The minimum-energy guidewire path can be obtained by applying a shortest-path algorithm for graphs, such as Dijkstra's algorithm, based on the principle of minimal total energy. Experiments on three phantoms were validated against previous results, revealing that for the first and second phantoms the simulated and real guidewires overlap completely. In addition, 95 % of the third phantom overlaps completely, and the remaining 5 % closely coincides. The results demonstrate that our method achieves 87 and 80 % improvements for the first and third phantoms under the same conditions, respectively. Furthermore, a 91 % improvement was obtained for the second phantom under the condition of reduced graph-construction complexity. PMID:26467345
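
    The shortest-path core of such a method is standard; below is a minimal Dijkstra sketch over an adjacency-list graph, where the edge weights would encode the discretized path energy. The vessel-graph construction and the energy terms themselves are not reproduced here.

```python
import heapq

def dijkstra(adj, src, dst):
    """adj: dict node -> list of (neighbor, weight); returns (cost, path)."""
    pq, done = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, w in adj.get(node, []):
            if nxt not in done:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float('inf'), []

# usage: weights stand in for a discretized path-energy term
adj = {'a': [('b', 1.0), ('c', 2.5)], 'b': [('c', 1.0)], 'c': []}
print(dijkstra(adj, 'a', 'c'))   # (2.0, ['a', 'b', 'c'])
```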

  7. [Orally administered polaprezinc significantly improves taste disorders in ovarian cancer patient undergoing chemotherapy].

    PubMed

    Nishijima, Shota; Yanase, Toru; Hata, Yuki; Tamura, Ryo; Tsuneki, Ikunosuke; Tamura, Masaki; Kurabayashi, Takumi

    2011-04-01

    The subject was a 75-year-old female who was receiving paclitaxel and carboplatin (TC) chemotherapy every other week after surgery for ovarian cancer. She complained strongly of taste disorders after four cycles (of every-other-week administration) of TC chemotherapy. To assess objectively how the taste disorder was caused by chemotherapy, taste examinations were conducted for the patient in our department. These examinations were conducted after receiving informed consent from the patient. The authors conducted taste examinations for the patient using serum zinc measurement, tongue cell culture, electrogustometry, and filter paper disc tests (before and after starting chemotherapy), and found that her serum zinc level fell significantly after four cycles of chemotherapy. Orally disintegrating tablets of polaprezinc were then administered to the patient, after which the subjective symptom of taste disorder improved. Her serum zinc level increased, and the electrogustometric threshold rapidly fell (an improvement). The filter paper disc test showed some improvement, particularly in the glossopharyngeal nerve and the greater petrosal nerve field. PMID:21499007

  8. Significant Improvements in the Practice Patterns of Adult Related Donor Care in US Transplantation Centers.

    PubMed

    Anthias, Chloe; Shaw, Bronwen E; Kiefer, Deidre M; Liesveld, Jane L; Yared, Jean; Kamble, Rammurti T; D'Souza, Anita; Hematti, Peiman; Seftel, Matthew D; Norkin, Maxim; DeFilipp, Zachariah; Kasow, Kimberly A; Abidi, Muneer H; Savani, Bipin N; Shah, Nirali N; Anderlini, Paolo; Diaz, Miguel A; Malone, Adriana K; Halter, Joerg P; Lazarus, Hillard M; Logan, Brent R; Switzer, Galen E; Pulsipher, Michael A; Confer, Dennis L; O'Donnell, Paul V

    2016-03-01

    Recent investigations have found a higher incidence of adverse events associated with hematopoietic cell donation in related donors (RDs) who have morbidities that if present in an unrelated donor (UD) would preclude donation. In the UD setting, regulatory standards ensure independent assessment of donors, one of several crucial measures to safeguard donor health and safety. A survey conducted by the Center for International Blood and Marrow Transplant Research (CIBMTR) Donor Health and Safety Working Committee in 2007 reported a potential conflict of interest in >70% of US centers, where physicians had simultaneous responsibility for RDs and their recipients. Consequently, several international organizations have endeavored to improve practice through regulations and consensus recommendations. We hypothesized that the changes in the 2012 Foundation for the Accreditation of Cellular Therapy and the Joint Accreditation Committee-International Society for Cellular Therapy and European Society for Blood and Marrow Transplantation standards resulting from the CIBMTR study would have significantly impacted practice. Accordingly, we conducted a follow-up survey of US transplantation centers to assess practice changes since 2007, and to investigate additional areas where RD care was predicted to differ from UD care. A total of 73 centers (53%), performing 79% of RD transplantations in the United States, responded. Significant improvements were observed since the earlier survey; 62% centers now ensure separation of RD and recipient care (P < .0001). This study identifies several areas where RD management does not meet international donor care standards, however. Particular concerns include counseling and assessment of donors before HLA typing, with 61% centers first disclosing donor HLA results to an individual other than the donor, the use of unlicensed mobilization agents, and the absence of long-term donor follow-up. Recommendations for improvement are made. PMID

  9. An improved atmospheric correction algorithm for applying MERIS data to very turbid inland waters

    NASA Astrophysics Data System (ADS)

    Jaelani, Lalu Muhamad; Matsushita, Bunkei; Yang, Wei; Fukushima, Takehiko

    2015-07-01

    Atmospheric correction (AC) is a necessary process when quantitatively monitoring water quality parameters from satellite data. However, it remains a major challenge to carry out AC for turbid coastal and inland waters. In this study, we propose an improved AC algorithm named N-GWI (new standard Gordon and Wang's algorithm with an iterative process and a bio-optical model) for applying MERIS data to very turbid inland waters (i.e., waters with a water-leaving reflectance at 864.8 nm between 0.001 and 0.01). The N-GWI algorithm incorporates three improvements to avoid certain invalid assumptions that limit the applicability of the existing algorithms in very turbid inland waters. First, the N-GWI uses a fixed aerosol type (coastal aerosol) but permits the aerosol concentration to vary at each pixel; this improvement removes a complicated requirement for aerosol model selection based only on satellite data. Second, it shifts the reference band from 670 nm to 754 nm so that the assumption that the total absorption coefficient at the reference band can be replaced by that of pure water remains valid, thus avoiding incorrect estimation of the total absorption coefficient at the reference band in very turbid waters. Third, the N-GWI generates a semi-analytical relationship instead of an empirical one for estimating the spectral slope of particle backscattering. Our analysis showed that the N-GWI improved the accuracy of atmospheric correction in two very turbid Asian lakes (Lake Kasumigaura, Japan, and Lake Dianchi, China), with a normalized mean absolute error (NMAE) of less than 22% for wavelengths longer than 620 nm. However, the N-GWI exhibited poor performance in moderately turbid waters (the NMAE values were larger than 83.6% in the four American coastal waters). The applicability of the N-GWI, including both its advantages and limitations, is discussed.

  10. Improvement and Refinement of the GPS/MET Data Analysis Algorithm

    NASA Technical Reports Server (NTRS)

    Herman, Benjamin M.

    2003-01-01

    The GPS/MET project was a satellite-to-satellite active microwave atmospheric limb sounder using the Global Positioning System transmitters as signal sources. Despite its remarkable success, GPS/MET could not independently sense atmospheric water vapor and ozone. Additionally, the GPS/MET data retrieval algorithm needed to be further improved and refined to enhance the retrieval accuracy in the lower troposphere and the upper stratosphere. The objectives of this proposal were to address these three problem areas.

  11. A Peak Alignment Algorithm with Novel Improvements In Application to Electropherogram Analysis

    PubMed Central

    Karabiber, Fethullah

    2013-01-01

    Alignment of peaks in electropherograms or chromatograms obtained from experimental techniques such as capillary electrophoresis remains a significant challenge. Accurate alignment is critical for the accurate interpretation of various classes of nucleic acid analysis technologies, including conventional DNA sequencing and new RNA structure probing technologies. We have developed an automated alignment algorithm based on dynamic programming to align multiple-peak time-series data both globally and locally. This algorithm relies on a new peak similarity measure and other features such as time penalties, global constraints, and minimum-similarity scores, and results in rapid, highly accurate comparisons of complex time-series datasets. As a demonstrative case study, the developed algorithm was applied to the analysis of capillary electrophoresis data from a Selective 2′-Hydroxyl Acylation analyzed by Primer Extension (SHAPE) evaluation of RNA secondary structure. The algorithm yielded robust analysis of challenging SHAPE probing data. Experimental results show that the peak alignment algorithm efficiently corrects the retention time variation due to the presence of fluorescent tags on fragments and differences between capillaries. The tools can be readily adapted for the analysis of other biological datasets in which peak retention times vary. PMID:24131055
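
    The sketch below shows a generic dynamic-programming alignment of two peak-time lists in the Needleman-Wunsch style, with a Gaussian time-similarity score standing in for the paper's peak-similarity measure and a flat gap penalty standing in for its time penalties; global constraints and minimum-similarity thresholds are omitted.

```python
import numpy as np

def align_peaks(t1, t2, sigma=1.0, gap=-0.3):
    """Return matched (index1, index2) pairs between two sorted peak-time lists."""
    n, m = len(t1), len(t2)
    S = np.zeros((n + 1, m + 1))
    S[1:, 0] = gap * np.arange(1, n + 1)
    S[0, 1:] = gap * np.arange(1, m + 1)
    sim = lambda i, j: np.exp(-0.5 * ((t1[i] - t2[j]) / sigma) ** 2)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            S[i, j] = max(S[i-1, j-1] + sim(i-1, j-1),  # match the two peaks
                          S[i-1, j] + gap,              # skip a peak in t1
                          S[i, j-1] + gap)              # skip a peak in t2
    pairs, i, j = [], n, m
    while i > 0 and j > 0:                              # traceback
        if np.isclose(S[i, j], S[i-1, j-1] + sim(i-1, j-1)):
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif np.isclose(S[i, j], S[i-1, j] + gap):
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```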

  12. Carfilzomib significantly improves the progression-free survival of high-risk patients in multiple myeloma

    PubMed Central

    Fonseca, Rafael; Siegel, David; Dimopoulos, Meletios A.; Špička, Ivan; Masszi, Tamás; Hájek, Roman; Rosiñol, Laura; Goranova-Marinova, Vesselina; Mihaylov, Georgi; Maisnar, Vladimír; Mateos, Maria-Victoria; Wang, Michael; Niesvizky, Ruben; Oriol, Albert; Jakubowiak, Andrzej; Minarik, Jiri; Palumbo, Antonio; Bensinger, William; Kukreti, Vishal; Ben-Yehuda, Dina; Stewart, A. Keith; Obreja, Mihaela; Moreau, Philippe

    2016-01-01

    The presence of certain high-risk cytogenetic abnormalities, such as translocations (4;14) and (14;16) and deletion (17p), are known to have a negative impact on survival in multiple myeloma (MM). The phase 3 study ASPIRE (N = 792) demonstrated that progression-free survival (PFS) was significantly improved with carfilzomib, lenalidomide, and dexamethasone (KRd), compared with lenalidomide and dexamethasone (Rd) in relapsed MM. This preplanned subgroup analysis of ASPIRE was conducted to evaluate KRd vs Rd by baseline cytogenetics according to fluorescence in situ hybridization. Of 417 patients with known cytogenetic risk status, 100 patients (24%) were categorized with high-risk cytogenetics (KRd, n = 48; Rd, n = 52) and 317 (76%) were categorized with standard-risk cytogenetics (KRd, n = 147; Rd, n = 170). For patients with high-risk cytogenetics, treatment with KRd resulted in a median PFS of 23.1 months, a 9-month improvement relative to treatment with Rd. For patients with standard-risk cytogenetics, treatment with KRd led to a 10-month improvement in median PFS vs Rd. The overall response rates for KRd vs Rd were 79.2% vs 59.6% (high-risk cytogenetics) and 91.2% vs 73.5% (standard-risk cytogenetics); approximately fivefold as many patients with high- or standard-risk cytogenetics achieved a complete response or better with KRd vs Rd (29.2% vs 5.8% and 38.1% vs 6.5%, respectively). KRd improved but did not abrogate the poor prognosis associated with high-risk cytogenetics. This regimen had a favorable benefit-risk profile in patients with relapsed MM, irrespective of cytogenetic risk status, and should be considered a standard of care in these patients. This trial was registered at www.clinicaltrials.gov as #NCT01080391. PMID:27439911

  13. Three-dimensional medical image reconstruction based on improved live wire segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yanfang; Jiang, Zhengang; He, Wei; Zhang, Yongsheng; Yang, Huamin

    2008-03-01

    Three-dimensional image reconstruction by volume rendering has two problems: it is time-consuming and of low precision. During diagnosis, doctors are interested in particular organ tissues, so the two-dimensional images are pre-processed before three-dimensional reconstruction, including disturbance removal and precise segmentation, to obtain the Region of Interest (ROI) on which the reconstruction is based; this reduces time and space complexity. To this end, the Live Wire segmentation model for medical images is improved to obtain exact edge coordinates, and segmentation that preserves interior details is achieved with an improved filling algorithm. Segmented images containing only the object details are used as input to realize volume rendering by a ray-casting algorithm. Because needless organs have been filtered out, distractions from the objects of interest to doctors are reduced. Moreover, the remaining organs generally occupy a small proportion of the images, which reduces the data volume for volume rendering and improves the speed of three-dimensional reconstruction.

  14. Intensity-Modulated Radiation Therapy Significantly Improves Acute Gastrointestinal Toxicity in Pancreatic and Ampullary Cancers

    SciTech Connect

    Yovino, Susannah; Poppe, Matthew; Jabbour, Salma; David, Vera; Garofalo, Michael; Pandya, Naimesh; Alexander, Richard; Hanna, Nader; Regine, William F.

    2011-01-01

    Purpose: Among patients with upper abdominal malignancies, intensity-modulated radiation therapy (IMRT) can improve dose distributions to critical dose-limiting structures near the target. Whether these improved dose distributions are associated with decreased toxicity when compared with conventional three-dimensional treatment remains a subject of investigation. Methods and Materials: 46 patients with pancreatic/ampullary cancer were treated with concurrent chemoradiation (CRT) using inverse-planned IMRT. All patients received CRT based on 5-fluorouracil in a schema similar to Radiation Therapy Oncology Group (RTOG) 97-04. Rates of acute gastrointestinal (GI) toxicity for this series of IMRT-treated patients were compared with those from RTOG 97-04, in which all patients were treated with three-dimensional conformal techniques. Chi-square analysis was used to determine whether there was a statistically significant difference in the incidence of acute GI toxicity between these two groups of patients. Results: The overall incidence of Grade 3-4 acute GI toxicity was low in patients receiving IMRT-based CRT. When compared with patients who had three-dimensional treatment planning (RTOG 97-04), IMRT significantly reduced the incidence of Grade 3-4 nausea and vomiting (0% vs. 11%, p = 0.024) and diarrhea (3% vs. 18%, p = 0.017). There was no significant difference in the incidence of Grade 3-4 weight loss between the two groups of patients. Conclusions: IMRT is associated with a statistically significant decrease in acute upper and lower GI toxicity among patients treated with CRT for pancreatic/ampullary cancers. Future clinical trials plan to incorporate the use of IMRT, given that it remains a subject of active investigation.
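
    For readers unfamiliar with the statistical comparison used here, the sketch below runs a chi-square test on a 2 × 2 toxicity table with scipy. The counts are hypothetical stand-ins (the abstract gives rates, not full arm sizes), chosen only to illustrate the mechanics.

```python
from scipy.stats import chi2_contingency

# Rows: IMRT cohort, 3D-CRT cohort; columns: Grade 3-4 toxicity yes / no.
# Counts below are hypothetical, for illustration only.
table = [[1, 45],     # e.g., 1 of 46 IMRT patients with the event
         [38, 175]]   # hypothetical 3D-CRT counts (~18% incidence)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```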

  15. GRISOTTO: A greedy approach to improve combinatorial algorithms for motif discovery with prior knowledge

    PubMed Central

    2011-01-01

    Background Position-specific priors (PSPs) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. However, prior information has not yet been used in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505

  16. Improve the ranking algorithm of the GEO Discovery and Access Broker through resource accessibility assessment

    NASA Astrophysics Data System (ADS)

    Santoro, M.; Sorichetta, A.; Roglia, E.; Quaglia, A.; Craglia, M.; Nativi, S.

    2013-12-01

    The vision of the Global Earth Observation System of Systems (GEOSS) is the achievement of societal benefits through the voluntary contribution and sharing of resources to better understand the relationships between society and the environment in which we live. Addressing complex issues in the geosciences requires a combined effort from many disciplines, ranging from the physical to the social sciences and including the humanities. The introduction of the Discovery and Access Broker (DAB) in the GEOSS Common Infrastructure (GCI) significantly lowered the entry barriers for data users and producers, increasing the number of discoverable resources in the GCI from hundreds of thousands to millions. This is a major step forward, but from discovery to access the road is still long. Missing accessibility information in the metadata and broken links are the major issues preventing the real exploitation of GCI resources, and a remarkable problem for users attempting to exploit services and datasets obtained through a DAB query. The issue can be minimized by providing the user with a ranked list of results that takes into account the real availability and accessibility of resources. In this work we present a methodology that overcomes the problem described above by improving the ranking algorithm currently applied to the result set of a query to the DAB. The proposed methodology is based on the following steps: 1) verify whether information related to the accessibility of resources is described in the metadata provided by GEOSS contributors; 2) if accessibility information is provided, identify the type of resources (e.g. services, datasets) and produce modified and standardized accessibility information in a consistent manner; 3) use the standardized information to test the accessibility and availability of resources using a probing approach; 4) use the results returned in the ranking algorithm to assign the correct weight to
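
    A minimal sketch of the probing idea in steps 3-4: issue a lightweight HTTP HEAD request per resource link and fold the outcome into the ranking weight. The endpoint handling and the 0.1 down-weighting rule are illustrative assumptions, not the DAB's actual scheme; the demo URLs are placeholders.

```python
import urllib.request

def is_accessible(url, timeout=5.0):
    """Probe a resource link with a HEAD request; any failure counts as inaccessible."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False

def rerank(results):
    """results: list of dicts with a 'url' and a base relevance 'score'."""
    for r in results:
        # Penalize resources that fail the accessibility probe.
        r["weight"] = r["score"] * (1.0 if is_accessible(r["url"]) else 0.1)
    return sorted(results, key=lambda r: r["weight"], reverse=True)

demo = [{"url": "https://example.com/dataset", "score": 0.90},
        {"url": "https://example.invalid/broken", "score": 0.95}]
print([r["url"] for r in rerank(demo)])
```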

  17. Improved CICA algorithm used for single channel compound fault diagnosis of rolling bearings

    NASA Astrophysics Data System (ADS)

    Chen, Guohua; Qie, Longfei; Zhang, Aijun; Han, Jin

    2016-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals using conventional methods such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition alone. In order to realize single-channel compound fault diagnosis of bearings and improve diagnosis accuracy, an improved CICA algorithm named constrained independent component analysis based on the energy method (E-CICA) is proposed. With this approach, the single-channel vibration signal is first decomposed into several wavelet coefficients by the discrete wavelet transform (DWT) in order to obtain multichannel signals. The envelope signals of the reconstructed wavelet coefficients are then selected as the input of the E-CICA algorithm, which fulfills the requirement that the number of sensors be greater than or equal to the number of source signals and makes the data more suitable for processing by the CICA strategy. The ratio of the frequency energy (ER) of each wavelet-reconstructed signal to the total energy of the given synchronous signal is calculated, and the synchronous signal with the maximum ER value is set as the reference signal. In this way, the reference signal contains a priori knowledge of the fault source signal, and the influence of the initial phase angle and duty ratio of the reference signal on fault-signal extraction accuracy in the traditional CICA algorithm is avoided. Experimental results show that the E-CICA algorithm can effectively separate the outer-race defect and the roller defect from the single-channel compound fault and fulfill the needs of compound fault diagnosis of rolling bearings; its running time is 0.12% of that of the traditional CICA algorithm and its extraction accuracy is 1.4 times that of CICA. The proposed research provides a new method to separate single-channel compound fault signals.
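
    The sketch below illustrates the channel-construction and reference-selection steps described above: per-level DWT reconstruction of a single-channel signal, Hilbert envelopes, and the energy-ratio rule for picking the reference. It assumes the third-party PyWavelets package and is a sketch of these steps only, not the full E-CICA separation.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def multichannel_from_single(x, wavelet='db4', level=4):
    """Build pseudo-multichannel data: one reconstruction per DWT detail level."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    channels = []
    for k in range(1, len(coeffs)):   # skip the approximation, keep cD_level..cD1
        kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        channels.append(pywt.waverec(kept, wavelet)[:len(x)])
    return np.array(channels)

def pick_reference(channels):
    """Energy-ratio rule: choose the channel holding the largest energy share."""
    energies = np.sum(channels**2, axis=1)
    er = energies / energies.sum()
    ref = channels[np.argmax(er)]
    return np.abs(hilbert(ref)), er   # envelope of the selected reference channel

x = np.random.randn(1024)            # stand-in for a measured vibration signal
envelope, er = pick_reference(multichannel_from_single(x))
print(er.round(3))
```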

  18. An Improved PID Algorithm Based on Insulin-on-Board Estimate for Blood Glucose Control with Type 1 Diabetes

    PubMed Central

    Hu, Ruiqiang; Li, Chengwei

    2015-01-01

    Automated closed-loop insulin infusion therapy has been studied for many years. In a closed-loop system, the control algorithm is the key technique for precise insulin infusion, and it must be designed and validated. In this paper, an improved PID algorithm based on an insulin-on-board estimate is proposed, and computer simulations are performed using a combinational mathematical model of the dynamics of blood glucose-insulin regulation in the blood system. The simulation results demonstrate that the improved PID algorithm performs well under different carbohydrate-ingestion and insulin-sensitivity conditions. Compared with the traditional PID algorithm, the control performance is markedly improved and hypoglycemia can be avoided. To verify the effectiveness of the proposed control algorithm, in silico testing is performed using the UVa/Padova virtual patient software. PMID:26550021
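
    A minimal sketch of the concept: a PID insulin controller whose commanded dose is capped by a crude insulin-on-board (IOB) estimate, so stacked doses cannot drive glucose into hypoglycemia. The gains, the single-compartment IOB decay, and the limit are illustrative assumptions, not the paper's values.

```python
class IOBConstrainedPID:
    def __init__(self, kp, ki, kd, target, iob_limit, iob_decay=0.98):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target
        self.iob_limit, self.iob_decay = iob_limit, iob_decay
        self.integral = 0.0
        self.prev_error = None
        self.iob = 0.0                       # crude insulin-on-board estimate

    def step(self, glucose, dt=1.0):
        error = glucose - self.target
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        dose = max(0.0, self.kp*error + self.ki*self.integral + self.kd*deriv)
        # Cap the dose by the insulin still active from earlier boluses.
        dose = max(0.0, min(dose, self.iob_limit - self.iob))
        self.iob = self.iob * self.iob_decay + dose
        return dose

pid = IOBConstrainedPID(kp=0.02, ki=0.0005, kd=0.1, target=110, iob_limit=3.0)
for g in [180, 175, 168, 160, 150]:          # hypothetical glucose readings, mg/dL
    print(round(pid.step(g), 3))
```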

  19. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
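
    The two classical baselines named above are easy to state concretely. The sketch below estimates an FBG peak wavelength from a simulated reflection spectrum with (a) a thresholded centroid and (b) a Gaussian least-squares fit; the simulated spectrum and threshold are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

wl = np.linspace(1549.0, 1551.0, 201)                  # wavelength grid, nm
true_peak = 1550.03
spectrum = np.exp(-((wl - true_peak) / 0.08)**2) + 0.01*np.random.randn(wl.size)

# (a) Centroid: intensity-weighted mean wavelength over a threshold window.
mask = spectrum > 0.2 * spectrum.max()
centroid = np.sum(wl[mask] * spectrum[mask]) / np.sum(spectrum[mask])

# (b) Gaussian fit: refine the peak position with nonlinear least squares.
gauss = lambda x, a, mu, sig: a * np.exp(-((x - mu) / sig)**2)
p0 = [spectrum.max(), wl[np.argmax(spectrum)], 0.1]    # initial guess
(a, mu, sig), _ = curve_fit(gauss, wl, spectrum, p0=p0)

print(f"centroid: {centroid:.4f} nm, Gaussian fit: {mu:.4f} nm")
```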

  20. Improved Algorithms for Accurate Retrieval of UV-Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance, Kd(λ). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443, and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training dataset of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent in situ datasets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) at UV wavelengths for almost all marine waters. This

  1. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas' estimator and time-shift estimators like normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas' estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas' estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas' estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas' estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas' estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7-350% and the spatial accuracy by 1.2-23.0% (p < 0.001). An ex vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction. PMID:25585398
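
    A conceptual sketch of the switching rule the abstract describes: prefer the time-shift estimator (NXcorr) when the expected displacement is large and the SNR is high, and the phase-shift (Loupas) estimator otherwise. The thresholds follow the abstract; the regime test is a simplification of the paper's design, and the two estimator functions are assumed to be provided elsewhere.

```python
def adaptive_displacement(rf_pre, rf_post, disp_guess, snr_db, wavelength,
                          loupas_estimate, nxcorr_estimate):
    """Pick the estimator suited to the current displacement/SNR regime."""
    large_disp = abs(disp_guess) > wavelength / 8
    high_snr = snr_db > 25.5
    if large_disp and high_snr:
        return nxcorr_estimate(rf_pre, rf_post)   # less biased in this regime
    return loupas_estimate(rf_pre, rf_post)       # lower variance otherwise

# Tiny demo with stand-in estimators (placeholders for real implementations).
est = adaptive_displacement(None, None, disp_guess=0.3, snr_db=30, wavelength=1.0,
                            loupas_estimate=lambda a, b: "loupas",
                            nxcorr_estimate=lambda a, b: "nxcorr")
print(est)  # -> nxcorr
```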

  2. An improved electromagnetism-like mechanism algorithm and its application to the prediction of diabetes mellitus.

    PubMed

    Wang, Kung-Jeng; Adrian, Angelia Melani; Chen, Kun-Huang; Wang, Kung-Min

    2015-04-01

    Recently, the use of artificial intelligence-based data mining techniques for massive medical data classification and diagnosis has gained popularity, but the effectiveness and efficiency of feature selection merit further investigation. In this paper, we present a novel method for feature selection that uses an opposite sign test (OST) as a local search for the electromagnetism-like mechanism (EM) algorithm, denoted the improved electromagnetism-like mechanism (IEM) algorithm. A nearest-neighbor algorithm serves as the classifier for the wrapper method. The proposed IEM algorithm is compared with nine popular feature selection and classification methods. Forty-six datasets from the UCI repository and eight gene expression microarray datasets are collected for comprehensive evaluation. Non-parametric statistical tests are conducted to justify the performance of the methods in terms of classification accuracy and the Kappa index. The results confirm that the proposed IEM method is superior to the common state-of-the-art methods. Furthermore, we apply IEM to predict the occurrence of Type 2 diabetes mellitus (DM) after a gestational DM. Our research helps identify the risk factors for this disease; accordingly, accurate diagnosis and prognosis can be achieved to reduce the morbidity and mortality caused by DM. PMID:25677947

  3. An algorithm to improve speech recognition in noise for hearing-impaired listeners.

    PubMed

    Healy, Eric W; Yoho, Sarah E; Wang, Yuxuan; Wang, DeLiang

    2013-10-01

    Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438

  4. An algorithm to improve speech recognition in noise for hearing-impaired listeners

    PubMed Central

    Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang

    2013-01-01

    Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
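
    A minimal sketch of binary masking on a time-frequency grid. For brevity it builds an *ideal* mask from the known premixed signals, just to illustrate the masking operation itself; the study's contribution is *estimating* such a mask with a trained classifier when the premixed signals are unknown. The toy signals and 0 dB threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2*np.pi*440*t) * (0.5 + 0.5*np.sin(2*np.pi*3*t))  # toy "speech"
noise = 0.8 * np.random.randn(fs)
mix = speech + noise

f, tt, S = stft(speech, fs, nperseg=512)
_, _, N = stft(noise, fs, nperseg=512)
_, _, M = stft(mix, fs, nperseg=512)

# Binary mask: keep time-frequency units where speech dominates the noise.
local_snr = 20*np.log10(np.abs(S) / (np.abs(N) + 1e-12))
mask = (local_snr > 0).astype(float)
_, recovered = istft(mask * M, fs, nperseg=512)
print(recovered.shape)
```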

  5. Significantly improving electromagnetic performance of nanopaper and its shape-memory nanocomposite by aligned carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Lu, Haibao; Gou, Jan

    2012-04-01

    A new nanopaper that exhibits excellent electrical and electromagnetic performance is fabricated by incorporating magnetically aligned carbon nanotubes (CNTs) into carbon nanofibers (CNFs). Electromagnetic CNTs were blended into and aligned within the nanopaper using a magnetic field to significantly improve the electrical and electromagnetic performance of the nanopaper and of the shape-memory polymer (SMP) composite it enables. The morphology and structure of the aligned CNT arrays in the nanopaper were characterized with scanning electron microscopy (SEM). A continuous and compact network of CNFs and aligned CNTs indicated that the nanopaper could be highly conductive. Furthermore, the electromagnetic interference (EMI) shielding efficiency of SMP composites with different weight contents of aligned CNT arrays was characterized. Finally, the aligned CNT arrays in the nanopapers were employed to achieve electrical actuation and accelerate the recovery speed of the SMP composites.

  6. Novel analogues of the therapeutic complement inhibitor compstatin with significantly improved affinity and potency

    PubMed Central

    Qu, Hongchang; Magotti, Paola; Ricklin, Daniel; Wu, Emilia L.; Kourtzelis, Ioannis; Wu, You-Qiang; Kaznessis, Yiannis N.; Lambris, John D.

    2010-01-01

    Compstatin is a 13-residue disulfide-bridged peptide that inhibits a key step in the activation of the human complement system. Compstatin and its derivatives have shown great promise for the treatment of many clinical disorders associated with unbalanced complement activity. To obtain more potent compstatin analogues, we have now performed an N-methylation scan of the peptide backbone and amino acid substitutions at position 13. One analogue (Ac-I[CVW(Me)QDW-Sar-AHRC](NMe)I-NH2) displayed a 1,000-fold increase in both potency (IC50=62 nM) and binding affinity for C3b (KD=2.3 nM) over that of the original compstatin. Biophysical analysis using surface plasmon resonance and isothermal titration calorimetry suggests that the improved binding originates from more favorable free conformation and stronger hydrophobic interactions. This study provides a series of significantly improved drug leads for therapeutic applications in complement-related diseases, and offers new insights into the structure-activity relationships of compstatin analogues. PMID:21067811

  7. Significantly improved cyclability of lithium manganese oxide under elevated temperature by an easily oxidized electrolyte additive

    NASA Astrophysics Data System (ADS)

    Zhu, Yunmin; Rong, Haibo; Mai, Shaowei; Luo, Xueyi; Li, Xiaoping; Li, Weishan

    2015-12-01

    Spinel lithium manganese oxide, LiMn2O4, is a promising cathode for lithium-ion batteries in large-scale applications because it possesses many advantages over the currently used layered lithium cobalt oxide (LiCoO2) and olivine phosphate (LiFePO4), including abundant natural resources, environmental friendliness, and a high and long working-potential plateau. Its poor cyclability at high temperature, however, limits its application. In this work, we report a significant cyclability improvement of LiMn2O4 at elevated temperature achieved by using dimethyl phenylphonite (DMPP) as an electrolyte additive. Charge/discharge tests demonstrate that the application of 0.5 wt.% DMPP improves the capacity retention of LiMn2O4 from 16% to 82% after 200 cycles at 55 °C and 1 C (1 C = 148 mAh g-1) between 3 and 4.5 V. Electrochemical and physical characterizations indicate that DMPP is electrochemically oxidized at a potential lower than that of lithium extraction, forming a protective cathode interphase on LiMn2O4 that suppresses electrolyte decomposition and protects the LiMn2O4 crystal structure from destruction.

  8. Recent processing string and fusion algorithm improvements for automated sea mine classification in shallow water

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernandez, Manuel F.; Dobeck, Gerald J.

    2003-09-01

    A novel sea mine computer-aided-detection/computer-aided-classification (CAD/CAC) processing string has been developed. The overall CAD/CAC processing string consists of pre-processing, adaptive clutter filtering (ACF), normalization, detection, feature extraction, feature orthogonalization, optimal subset feature selection, classification, and fusion processing blocks. The range-dimension ACF is matched to both average highlight and shadow information, while also adaptively suppressing background clutter. For each detected object, features are extracted and processed through an orthogonalization transformation, enabling an efficient application of the optimal log-likelihood-ratio-test (LLRT) classification rule in the orthogonal feature space domain. The classified objects of 4 distinct processing strings are fused using the classification confidence values as features and logic-based, "M-out-of-N", or LLRT-based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new shallow-water high-resolution sonar imagery data. The processing string detection and classification parameters were tuned and the string classification performance was optimized by appropriately selecting a subset of the original feature set. A significant improvement was made to the CAD/CAC processing string by utilizing a repeated application of the subset feature selection/LLRT classification blocks. It was shown that LLRT-based fusion algorithms outperform the logic-based and "M-out-of-N" ones. The LLRT-based fusion of the CAD/CAC processing strings resulted in up to a nine-fold false-alarm-rate reduction, compared with the best single CAD/CAC processing string results, while maintaining a constant correct mine classification probability.
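
    A minimal sketch of LLRT-based fusion of per-string classification confidences. Under an independence assumption, the fused statistic is the sum of per-string log-likelihood ratios; the Gaussian class-conditional models and their parameters below are illustrative assumptions, not the paper's fitted densities.

```python
import numpy as np
from scipy.stats import norm

def llrt_fuse(confidences, mine_params, clutter_params, threshold=0.0):
    """confidences: one classification confidence per processing string."""
    llr = 0.0
    for c, (mu1, s1), (mu0, s0) in zip(confidences, mine_params, clutter_params):
        # Independence assumption: sum the per-string log-likelihood ratios.
        llr += norm.logpdf(c, mu1, s1) - norm.logpdf(c, mu0, s0)
    return llr > threshold, llr

mine_params = [(0.8, 0.10)] * 4      # hypothetical per-string stats (mine class)
clutter_params = [(0.3, 0.15)] * 4   # hypothetical per-string stats (clutter)
decision, llr = llrt_fuse([0.75, 0.90, 0.60, 0.82], mine_params, clutter_params)
print(decision, round(llr, 2))
```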

  9. Use of genetic algorithms to improve the solid waste collection service in an urban area.

    PubMed

    Buenrostro-Delgado, Otoniel; Ortega-Rodriguez, Juan Manuel; Clemitshaw, Kevin C; González-Razo, Carlos; Hernández-Paniagua, Iván Y

    2015-07-01

    Increasing generation of Urban Solid Waste (USW) has become a significant issue in developing countries due to unprecedented population growth and high rates of urbanisation, exceeding the current plans and programs of local governments to manage and dispose of USW. In this study, a Genetic Algorithm for Rule-set Production (GARP) integrated into a Geographic Information System (GIS) was used to find areas with socio-economic conditions that are representative of the generation of USW constituents in such areas. Socio-economic data for selected variables categorised by Basic Geostatistical Areas (BGAs) were taken from the 2000 National Population Census (NPC). USW and additional socio-economic data were collected during two survey campaigns in 1998 and 2004. Areas for sampling of USW were stratified into lower, middle, and upper economic strata according to income. Data on USW constituents were analysed using descriptive statistics and multivariate analysis. ArcView 3.2 was used to convert the USW data and socio-economic variables to spatial data. Desktop GARP software was run to generate a spatial model identifying areas with socio-economic conditions similar to those sampled. Results showed that socio-economic variables such as monthly income and education are positively correlated with the waste constituents generated. The GARP used in this study revealed BGAs with socio-economic conditions similar to those sampled, where a similar composition of generated waste constituents is expected. Our results may be useful for decreasing USW management costs by improving collection services. PMID:25869842

  10. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    PubMed

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    This paper presents a new approach to medical image segmentation. Exudates are a visible sign of diabetic retinopathy, a major cause of vision loss in patients with diabetes; if the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to image segmentation after being optimized by the Pillar algorithm, in which initial centroids are positioned the way pillars are placed to withstand load. The improved Pillar algorithm can optimize K-means clustering for image segmentation with respect to precision and computation time. The proposed approach is evaluated by comparison with K-means and Fuzzy C-means on medical images. Using this method, identification of dark spots in the retina becomes easier; the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest better drugs for preventing further retinal damage. PMID:24516323

  11. A diabetic retinopathy detection method using an improved pillar K-means algorithm

    PubMed Central

    Gogula, Susmitha valli; Divakar, CH; Satyanarayana, CH; Rao, Allam Appa

    2014-01-01

    This paper presents a new approach to medical image segmentation. Exudates are a visible sign of diabetic retinopathy, a major cause of vision loss in patients with diabetes; if the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to image segmentation after being optimized by the Pillar algorithm, in which initial centroids are positioned the way pillars are placed to withstand load. The improved Pillar algorithm can optimize K-means clustering for image segmentation with respect to precision and computation time. The proposed approach is evaluated by comparison with K-means and Fuzzy C-means on medical images. Using this method, identification of dark spots in the retina becomes easier; the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest better drugs for preventing further retinal damage. PMID:24516323
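
    The sketch below illustrates the pillar intuition with farthest-point seeding: spread the initial centroids far apart (like pillars placed to bear load evenly) before running K-means. This is an approximation of the Pillar initialization idea, not the published procedure, and it assumes scikit-learn for the clustering step.

```python
import numpy as np
from sklearn.cluster import KMeans

def pillar_like_seeds(X, k):
    """Pick k well-separated seeds: first far from the mean, then farthest-point."""
    seeds = [X[np.argmax(np.linalg.norm(X - X.mean(0), axis=1))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - s, axis=1) for s in seeds], axis=0)
        seeds.append(X[np.argmax(d)])     # next seed: farthest from all seeds
    return np.array(seeds)

# Three synthetic clusters standing in for pixel feature vectors.
X = np.vstack([np.random.randn(100, 3) + c
               for c in ([0, 0, 0], [5, 5, 5], [0, 8, 0])])
init = pillar_like_seeds(X, k=3)
labels = KMeans(n_clusters=3, init=init, n_init=1).fit_predict(X)
print(np.bincount(labels))
```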

  12. Improving lesion detectability in PET imaging with a penalized likelihood reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Wangerin, Kristen A.; Ahn, Sangtae; Ross, Steven G.; Kinahan, Paul E.; Manjeshwar, Ravindra M.

    2015-03-01

    Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM, and importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality as previously shown but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both non-prewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.
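
    For concreteness, the relative difference penalty used by BSREM-type reconstruction can be written as psi(f_j, f_k) = (f_j - f_k)^2 / (f_j + f_k + gamma*|f_j - f_k|) over neighboring voxel pairs, with gamma controlling edge preservation. The sketch below evaluates this penalty on a 2-D image; it shows the penalty alone, not the full BSREM iteration.

```python
import numpy as np

def relative_difference_penalty(img, gamma=2.0, eps=1e-9):
    """Sum the relative difference penalty over right/down neighbor pairs."""
    total = 0.0
    for a, b in [(img[:, :-1], img[:, 1:]), (img[:-1, :], img[1:, :])]:
        d = a - b
        total += np.sum(d**2 / (a + b + gamma*np.abs(d) + eps))
    return total

flat = np.full((8, 8), 10.0)
edge = flat.copy(); edge[:, 4:] = 20.0     # image with one sharp edge
print(relative_difference_penalty(flat), relative_difference_penalty(edge))
```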

  13. Structural optimization of Pt-Pd alloy nanoparticles using an improved discrete particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Shao, Gui-Fang; Wang, Ting-Na; Liu, Tun-Dong; Chen, Jun-Ren; Zheng, Ji-Wen; Wen, Yu-Hua

    2015-01-01

    Pt-Pd alloy nanoparticles, as potential catalyst candidates for new-energy applications such as fuel cells and lithium-ion batteries owing to their excellent reactivity and selectivity, have attracted growing attention in recent years. Since structure determines the physical and chemical properties of nanoparticles, developing a reliable method for searching for the stable structures of Pt-Pd alloy nanoparticles has become increasingly important to exploring the origin of their properties. In this article, we employ the particle swarm optimization algorithm to investigate the stable structures of alloy nanoparticles with fixed shape and atomic proportion. An improved discrete particle swarm optimization algorithm is proposed and the corresponding scheme presented. The swap operator and swap sequence are applied to reduce the probability of premature convergence to local optima. Furthermore, the exchange probability and the 'particle' size are also considered in this article. Finally, tetrahexahedral Pt-Pd alloy nanoparticles have been used to test the effectiveness of the proposed method. The calculated results verify that the improved particle swarm optimization algorithm has superior convergence and stability compared with the traditional one.
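
    The swap operator and swap sequence mentioned above are standard devices in discrete PSO: a particle's "velocity" is a sequence of index swaps, and applying part of the sequence that would turn the particle into the personal or global best moves it toward that best. The sketch below is a generic swap-sequence illustration (here on atom-site permutations), not the paper's full scheme; the exchange probability is an illustrative parameter.

```python
import random

def swaps_to(source, target):
    """Swap sequence transforming permutation `source` into `target`."""
    s, seq = list(source), []
    for i, want in enumerate(target):
        j = s.index(want)
        if j != i:
            s[i], s[j] = s[j], s[i]
            seq.append((i, j))
    return seq

def apply_swaps(perm, seq, prob):
    """Apply each swap with the given exchange probability."""
    p = list(perm)
    for i, j in seq:
        if random.random() < prob:
            p[i], p[j] = p[j], p[i]
    return p

particle = [3, 1, 0, 2, 4]          # e.g., element assignment over lattice sites
gbest = [0, 1, 2, 3, 4]             # best configuration found so far
seq = swaps_to(particle, gbest)
print(seq, apply_swaps(particle, seq, prob=0.7))
```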

  14. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience-oriented, convergence-improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the search process. In this way, the best trajectories found are retained and the search starts from them, which allows the algorithm to avoid local optima. The agents can also move faster in the search space to obtain better exploration during the first stage of the search process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904

  15. Improved Progressive Polynomial Algorithm for Self-Adjustment and Optimal Response in Intelligent Sensors

    PubMed Central

    Rivera, José; Herrera, Gilberto; Chacón, Mario; Acosta, Pedro; Carrillo, Mariano

    2008-01-01

    The development of intelligent sensors involves the design of reconfigurable systems capable of working with different input sensor signals. Reconfigurable systems should spend the least possible amount of time readjusting. A self-adjustment algorithm for intelligent sensors should be able to fix major problems such as offset, gain variation, and lack of linearity with good accuracy. This paper shows the performance of a progressive polynomial algorithm for different grades of relative nonlinearity of an output sensor signal. It also presents an improvement to this algorithm that obtains an optimal response with minimum nonlinearity error, based on the number and selection sequence of the readjustment points. To verify the potential of the proposed criterion, a temperature measurement system was designed based on a thermistor, which exhibits one of the worst nonlinearity behaviors. Applying the proposed improved method to this system showed that an adequate sequence of adjustment points yields the minimum nonlinearity error. In realistic applications, knowing the grade of relative nonlinearity of a sensor, the number of readjustment points can be determined with the proposed method to obtain the desired nonlinearity error. This will impact readjustment methodologies and associated factors such as time and cost.
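
    One way to see the progressive character of such calibration is the Newton divided-difference construction: each new calibration point adds one correction term while leaving the fit through the earlier points intact. The sketch below uses this generic construction as a stand-in for the paper's recurrence, which is an assumption on our part; the raw/ideal pairs are hypothetical.

```python
import numpy as np

def newton_coeffs(x, y):
    """Divided-difference coefficients; point k only adds the k-th term."""
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        c[k:] = (c[k:] - c[k-1:-1]) / (np.array(x[k:]) - np.array(x[:-k]))
    return c

def newton_eval(c, x_nodes, x):
    """Evaluate the Newton-form polynomial at x (Horner-like recursion)."""
    out = c[-1]
    for k in range(len(c) - 2, -1, -1):
        out = out * (x - x_nodes[k]) + c[k]
    return out

raw = [0.10, 0.45, 0.78, 1.02]    # hypothetical raw sensor readings
ideal = [0.0, 25.0, 50.0, 75.0]   # corresponding ideal outputs
c = newton_coeffs(raw, ideal)
print(round(float(newton_eval(c, raw, 0.60)), 2))   # corrected reading
```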

  16. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience-oriented, convergence-improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the search process. In this way, the best trajectories found are retained and the search starts from them, which allows the algorithm to avoid local optima. The agents can also move faster in the search space to obtain better exploration during the first stage of the search process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904

  17. Aerodynamic Improvements of an Empty Timber Truck can Have the Potential of Significantly Reducing Fuel Consumption

    NASA Astrophysics Data System (ADS)

    Andersson, Magnus; Marashi, Seyedeh Sepideh; Karlsson, Matts

    2012-11-01

    In the present study, aerodynamic drag (AD) has been estimated for an empty and a fully loaded conceptual timber truck (TT) using Computational Fluid Dynamics (CFD). Increasing fuel prices have challenged heavy-duty vehicle (HDV) manufacturers to strive for better fuel economy, e.g., by utilizing drag-reducing external devices. Despite this knowledge, TT fleets seem to have been left in the dark. As in HDV aerodynamics, a large low-pressure wake forms behind the tractor (unloaded) and downstream of the trailer (fully loaded), generating AD. As TTs travel half the time without any cargo, a focus on drag reduction is important. The full-scale TTs were simulated using the realizable k-epsilon model with grid-adaptation techniques for mesh independence. Our results indicate that a loaded TT reduces the AD significantly, as both wake size and turbulence kinetic energy are lowered. In contrast to HDVs, unloaded TTs have a much larger design space available for possible drag-reducing devices, e.g., plastic wrapping and/or flaps. This conceptual CFD study has given an indication of the large AD difference between the unloaded and fully loaded TT, showing the potential for significant AD improvements.

  18. Significant improvement in one-dimensional cursor control using Laplacian electroencephalography over electroencephalography

    NASA Astrophysics Data System (ADS)

    Boudria, Yacine; Feltane, Amal; Besio, Walter

    2014-06-01

    Objective. Brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but acquiring high levels of control requires extensive user training. Furthermore, EEG has a low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy of two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements. Approach. Eight human subjects were asked to imagine either 'left' or 'right' hand movement during one recording session to control the computer cursor using TCRE and disc electrodes. Main results. The obtained results show a significant improvement in accuracy using TCREs (44%-100%) compared to disc electrodes (30%-86%). Significance. This study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training.

  19. SPOT HRVIR: A significant improvement of high resolution visible SPOT camera to i.r. wavelengths

    NASA Astrophysics Data System (ADS)

    Jouan, J.; Reulet, J. F.; Costes, G.

    Since March 1986, the SPOT-HRV cameras have been imaging the Earth in a 10 m resolution panchromatic mode and three 20 m resolution spectral modes, all in the visible and near-infrared wavelengths. Interest in medium-i.r. imaging for observing vegetation has grown, and the implementation of a 1.6 μm band was decided by CNES for SPOT 4. A new 3000-element detector matrix had to be developed; this work is now nearly complete through a contract placed with Thomson by CNES. In parallel, the focal plane of the previous HRV cameras has been completely redesigned, and the video electronics has also been significantly improved, taking into account both problems identified on SPOT 1 in orbit and the implementation of the MIR band. A significant change in integration and testing is introduced by the requirement that the operating temperature of the MIR detectors be accurately controlled at 5°C, and a calibration of the whole instrument in vacuum is foreseen. Following a presentation of the main HRV-SPOT 1 flight results, the technical and programmatic aspects of the SPOT 4 HRVIR situation are presented in the paper.

  20. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-01

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed, which combines the advantages of the three existing NIR ocean reflectance correction algorithms [Opt. Express 18, 7521 (2010); Appl. Opt. 39, 897 (2000); Opt. Express 20, 741 (2012)] and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms, in several case studies. The scenes selected for the case studies come from two different satellite ocean color sensors, the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions of the world. The new approach has been shown to produce nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validation against in situ measurements shows that in less turbid water regions the new approach produces reasonable results comparable to the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the new NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system. PMID:25321543

  1. Improved algorithm for processing grating-based phase contrast interferometry image sets

    SciTech Connect

    Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W.; Butler, Leslie G.

    2014-01-15

    Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters used to calculate transmission, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets, as demonstrated by processing a 16-step grating data set with partial use of the original data chosen without any restriction. We have also calculated the reduced chi-square of the fit, noted the effect of grating support structural elements on the differential phase contrast image, and explored expanded basis-set representations to mitigate the impact.
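
    A minimal sketch of the vectorized idea: fit the per-pixel phase-stepping model I_k = a0 + a1*cos(phi_k) + b1*sin(phi_k) for all pixels at once with one least-squares solve, then derive the three image types. The synthetic stack and the mapping of (a0, a1, b1) to the three contrasts follow the standard phase-stepping model, not this paper's specific basis sets; dark field would additionally be normalized by a reference-scan visibility.

```python
import numpy as np

steps, ny, nx = 16, 64, 64
phi = 2*np.pi*np.arange(steps)/steps
data = (100 + 30*np.cos(phi[:, None, None] + 0.7)     # synthetic stepping stack
        + np.random.randn(steps, ny, nx))

# One design matrix shared by every pixel; one vectorized solve for all pixels.
A = np.column_stack([np.ones(steps), np.cos(phi), np.sin(phi)])
coef, *_ = np.linalg.lstsq(A, data.reshape(steps, -1), rcond=None)
a0, a1, b1 = (c.reshape(ny, nx) for c in coef)

transmission = a0                       # mean intensity
phase = np.arctan2(-b1, a1)             # differential phase contrast
visibility = np.hypot(a1, b1) / a0      # dark field = visibility / ref visibility
print(phase.mean().round(3), visibility.mean().round(3))
```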

  2. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work, we show that by applying frequency analysis to determine motion intensity and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, providing physicians with reliable objective data they can use in their daily practice. PMID:26337122
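
    A minimal sketch of the idea on a scalar signal: estimate motion intensity from the high-frequency share of the sensor spectrum, then scale the (normally fixed) process-noise variance of a simple 1-D Kalman filter accordingly. The 2 Hz split and the scaling rule are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def motion_intensity(window, fs, split_hz=2.0):
    """Fraction of spectral energy above split_hz: a crude intensity measure."""
    spec = np.abs(np.fft.rfft(window))**2
    freqs = np.fft.rfftfreq(window.size, 1/fs)
    return spec[freqs >= split_hz].sum() / (spec.sum() + 1e-12)

def kalman_1d(z, q_base, r, intensity):
    q = q_base * (1 + 50*intensity)   # more motion -> trust the static model less
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p += q                         # predict (static state model)
        k = p / (p + r)                # update with measurement zk
        x += k * (zk - x)
        p *= (1 - k)
        out.append(x)
    return np.array(out)

fs = 100.0
z = np.sin(2*np.pi*0.5*np.arange(200)/fs) + 0.1*np.random.randn(200)
i = motion_intensity(z, fs)
print(round(i, 3), kalman_1d(z, q_base=1e-4, r=0.01, intensity=i)[-1].round(3))
```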

  3. Branch-pipe-routing approach for ships using improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sui, Haiteng; Niu, Wentie

    2016-05-01

    Branch-pipe routing plays a fundamental and critical role in ship pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem that is difficult to solve by relying only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve it. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as combinations of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined and the genetic operators are devised, in particular a complete crossover strategy that greatly accelerates convergence. Finally, simulation tests demonstrate the performance of the proposed method.

  4. Use of a genetic algorithm to improve the rail profile on Stockholm underground

    NASA Astrophysics Data System (ADS)

    Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon

    2010-12-01

    In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
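
    The roulette-wheel step mentioned above is easy to state concretely: with the inverted penalty index as fitness, profiles with lower penalties occupy larger slices of the wheel and are more likely to parent the next generation. A minimal sketch, with hypothetical penalty values:

```python
import numpy as np

def roulette_select(fitness, n_parents, rng=np.random.default_rng(0)):
    probs = np.asarray(fitness, dtype=float)
    probs /= probs.sum()                   # slice sizes proportional to fitness
    return rng.choice(len(fitness), size=n_parents, p=probs)

penalties = np.array([4.2, 1.1, 2.5, 0.9, 3.3])   # hypothetical profile penalties
fitness = 1.0 / penalties                          # inverted penalty index
print(roulette_select(fitness, n_parents=4))       # indices of selected parents
```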

  5. Improved algorithm for processing grating-based phase contrast interferometry image sets.

    PubMed

    Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W; Butler, Leslie G

    2014-01-01

    Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters used to calculate transmission, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets, as demonstrated by processing a 16-step grating data set with partial use of the original data chosen without any restriction. We have also calculated the reduced chi-square of the fit, noted the effect of grating support structural elements on the differential phase contrast image, and explored expanded basis-set representations to mitigate the impact. PMID:24517772

  6. Improvement of relief algorithm to prevent inpatient's downfall accident with night-vision CCD camera

    NASA Astrophysics Data System (ADS)

    Matsuda, Noriyuki; Yamamoto, Takeshi; Miwa, Masafumi; Nukumi, Shinobu; Mori, Kumiko; Kuinose, Yuko; Maeda, Etuko; Miura, Hirokazu; Taki, Hirokazu; Hori, Satoshi; Abe, Norihiro

    2005-12-01

    "ROSAI" hospital, Wakayama City in Japan, reported that inpatient's bed-downfall is one of the most serious accidents in hospital at night. Many inpatients have been having serious damages from downfall accidents from a bed. To prevent accidents, the hospital tested several sensors in a sickroom to send warning-signal of inpatient's downfall accidents to a nurse. However, it sent too much inadequate wrong warning about inpatients' sleeping situation. To send a nurse useful information, precise automatic detection for an inpatient's sleeping situation is necessary. In this paper, we focus on a clustering-algorithm which evaluates inpatient's situation from multiple angles by several kinds of sensor including night-vision CCD camera. This paper indicates new relief algorithm to improve the weakness about exceptional cases.

  7. Implementation and optimization of an improved morphological filtering algorithm for speckle removal based on DSPs

    NASA Astrophysics Data System (ADS)

    Liu, Qitao; Li, Yingchun; Sun, Huayan; Zhao, Yanzhong

    2008-03-01

    Laser active imaging systems, which offer high resolution, jamming resistance, and three-dimensional (3-D) imaging, have been used widely. Their imagery, however, is usually affected by speckle noise, which makes the gray levels of pixels fluctuate violently, hides subtle details, and greatly degrades imaging resolution. Removing speckle noise is one of the most difficult problems encountered in such systems because of the poor statistical properties of speckle. Based on an analysis of the statistical characteristics of speckle and of morphological filtering, an improved multistage morphological filtering algorithm is studied and implemented on a TMS320C6416 DSP. The algorithm applies morphological open-close and close-open transformations using two different linear structuring elements and then takes a weighted average of the results, with the weighting coefficients determined by the statistical characteristics of the speckle. The algorithm was implemented on the TMS320C6416 DSP after simulation on a computer, and the software design procedure is fully presented. The methods used to realize and optimize the algorithm are illustrated through a study of the structural characteristics of the TMS320C6416 DSP and the features of the algorithm. To fully benefit from such devices and increase the performance of the whole system, a series of steps to optimize the DSP programs is necessary. This paper introduces several effective methods for TMS320C6x C-language optimization, including refining code structure, eliminating memory dependence, and optimizing assembly code via linear assembly, and then offers the results of their application in a real-time implementation. The results of processing images blurred by speckle noise show that the algorithm can not only effectively suppress speckle noise but also preserve the geometrical features of images. The results of the optimized code running on the DSP platform
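
    A minimal sketch of the filtering stage described above: open-close and close-open with two different linear structuring elements, averaged with weights. The fixed 50/50 weights are an assumption standing in for the speckle-statistics-derived coefficients, and the element lengths are illustrative.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def oc_co(img, footprint):
    """Average of open-close and close-open with one structuring element."""
    oc = grey_closing(grey_opening(img, footprint=footprint), footprint=footprint)
    co = grey_opening(grey_closing(img, footprint=footprint), footprint=footprint)
    return 0.5 * (oc + co)

def despeckle(img, w=0.5):
    horiz = np.ones((1, 5), bool)   # two linear structuring elements
    vert = np.ones((5, 1), bool)
    return w * oc_co(img, horiz) + (1 - w) * oc_co(img, vert)

img = np.random.rand(64, 64) * 255          # stand-in for a speckled image
print(despeckle(img).std() < img.std())     # filtered image fluctuates less
```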

  8. Using an Improved SIFT Algorithm and Fuzzy Closed-Loop Control Strategy for Object Recognition in Cluttered Scenes

    PubMed Central

    Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo

    2015-01-01

    Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause performance degradation in object recognition systems. Therefore, this paper presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale-invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed by classifying SIFT features into several clusters based on attributes computed from the sub-orientation histogram (SOH); in the feature-matching phase, only features that share nearly the same corresponding attributes are compared. Second, feature matching is performed in a prioritized order based on the scale factor calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of object recognition, which is essential for the autonomous object manipulation process. Compared with the original SIFT algorithm for object recognition, the proposed method significantly increases the number of SIFT features extracted from an object, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirm that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094

  9. Significantly Improving Regional Seismic Amplitude Tomography at Higher Frequencies by Determining S-Wave Bandwidth

    DOE PAGES Beta

    Fisk, Mark D.; Pasyanos, Michael E.

    2016-05-03

    Characterizing regional seismic signals continues to be a difficult problem due to their variability. Calibration of these signals is very important to many aspects of monitoring underground nuclear explosions, including detecting seismic signals, discriminating explosions from earthquakes, and reliably estimating magnitude and yield. Amplitude tomography, which simultaneously inverts for source, propagation, and site effects, is a leading method of calibrating these signals. A major issue in amplitude tomography is the quality of the input amplitude measurements. Pre-event and pre-phase signal-to-noise ratio (SNR) tests are typically used but can frequently include bad signals and exclude good ones. The deficiencies of SNR criteria, which are demonstrated here, lead to large calibration errors. To ameliorate these issues, we introduce a semi-automated approach to assess the bandwidth over which a spectrum behaves physically. We determine the maximum frequency (denoted Fmax) at which it deviates from this behavior due to inflections where noise or spurious signals start to bias the spectra away from the expected decay. We compare two amplitude tomography runs using the SNR and new Fmax criteria and show significant improvements to the stability and accuracy of the tomography output for frequency bands higher than 2 Hz when using our assessments of valid S-wave bandwidth. We compare Q estimates, P/S residuals, and some detailed results to explain the improvements. Lastly, for frequency bands higher than 4 Hz, needed for effective P/S discrimination of explosions from earthquakes, the new bandwidth criteria sufficiently fix the instabilities and errors so that the residuals and calibration terms are useful for application.
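    The Fmax idea can be sketched as follows: fit the expected log-linear spectral decay over a band known to be signal-dominated, then flag the first frequency at which the observed spectrum inflects away from that trend. The fit band and tolerance below are illustrative values, not the authors' calibrated criteria:

        import numpy as np

        def estimate_fmax(freqs, amps, fit_band=(1.0, 2.0), tol_log10=0.3):
            # Fit the expected decay in log-log space over the trusted band.
            mask = (freqs >= fit_band[0]) & (freqs <= fit_band[1])
            slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(amps[mask]), 1)
            predicted = slope * np.log10(freqs) + intercept
            deviation = np.log10(amps) - predicted
            # First frequency above the fit band where the spectrum departs from the trend.
            bad = (freqs > fit_band[1]) & (np.abs(deviation) > tol_log10)
            return freqs[bad].min() if bad.any() else freqs.max()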

  10. Improving the Response of a Rollover Sensor Placed in a Car under Performance Tests by Using a RLS Lattice Algorithm

    PubMed Central

    Hernandez, Wilmar

    2005-01-01

    In this paper, a sensor to measure the rollover angle of a car under performance tests is presented. Basically, the sensor consists of a dual-axis accelerometer, analog-electronic instrumentation stages, a data acquisition system and an adaptive filter based on a recursive least-squares (RLS) lattice algorithm. In short, the adaptive filter is used to improve the performance of the rollover sensor by carrying out an optimal prediction of the relevant signal coming from the sensor, which is buried in a broad-band noise background where we have little knowledge of the noise characteristics. The experimental results are satisfactory and show a significant improvement in the signal-to-noise ratio at the system output.
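    The prediction step can be illustrated with a conventional transversal RLS one-step predictor (the paper uses the numerically efficient lattice form; the filter order and forgetting factor below are illustrative choices):

        import numpy as np

        def rls_predict(signal, order=8, lam=0.99, delta=100.0):
            signal = np.asarray(signal, dtype=float)
            w = np.zeros(order)                  # filter weights
            P = np.eye(order) * delta            # inverse correlation matrix
            pred = np.zeros_like(signal)
            for n in range(order, len(signal)):
                x = signal[n - order:n][::-1]    # most recent samples first
                pred[n] = w @ x                  # one-step-ahead prediction
                e = signal[n] - pred[n]          # a priori error
                k = P @ x / (lam + x @ P @ x)    # gain vector
                w = w + k * e
                P = (P - np.outer(k, x @ P)) / lam
            return pred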

  11. Algorithms for the reconstruction of the singular wave front of laser radiation: analysis and improvement of accuracy

    SciTech Connect

    Aksenov, V P; Kanev, F Yu; Izmailov, I V; Starikov, F A

    2008-07-31

    The possibility of reconstructing a singular wave front of laser beams by the local tilts of the wave front measured with a Hartmann sensor is considered. The accuracy of the reconstruction algorithm described by Fried is estimated and its modification is proposed, which allows one to improve the reliability of the phase reconstruction. Based on the Fried algorithm and its modification, a combined algorithm is constructed whose advantages are demonstrated in numerical experiments. (control of laser radiation parameters)

  12. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    In 1977 and 1978, general-purpose centrally space-differenced implicit finite difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since the introduction of the ARC2D and ARC3D codes, overall computational efficiency has been improved through a number of algorithmic changes. These changes are related to the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, the implementation of various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained which is competitive with specialized codes.

  13. Improved bowel preparation increases polyp detection and unmasks significant polyp miss rate

    PubMed Central

    Papanikolaou, Ioannis S; Sioulas, Athanasios D; Magdalinos, Nektarios; Beintaris, Iosif; Lazaridis, Lazaros-Dimitrios; Polymeros, Dimitrios; Malli, Chrysoula; Dimitriadis, George D; Triantafyllou, Konstantinos

    2015-01-01

    AIM: To retrospectively compare previous-day vs split-dose preparation in terms of bowel cleanliness and polyp detection in patients referred for polypectomy. METHODS: Fifty patients underwent two colonoscopies: one diagnostic in a private clinic and a second for polypectomy in a University Hospital. The latter procedures were performed within 12 wk of the index ones. Examinations were performed by two experienced endoscopists, a different one in each facility. Twenty-seven patients underwent screening/surveillance colonoscopy, while the rest were symptomatic. Previous-day bowel preparation was utilized initially and split-dose for polypectomy. Colon cleansing was evaluated using the Aronchick scale. We measured the number of detected polyps and the per-polyp miss rates. RESULTS: Excellent/good preparation was reported in 38 cases with previous-day preparation (76%) vs 46 with split-dose (92%), respectively (P = 0.03). One hundred and twenty-six polyps were detected initially and 169 subsequently (P < 0.0001); 88 vs 126 polyps were diminutive (P < 0.0001), 25 vs 29 small (P = 0.048) and 13 vs 14 equal to or larger than 10 mm. The miss rates for total, diminutive, small and large polyps were 25.4%, 30.1%, 13.7% and 6.6%, respectively. Multivariate analysis revealed that split-dose preparation was significantly associated (OR, P) with an increased number of polyps detected overall (0.869, P < 0.001), in the right (0.418, P = 0.008) and in the left colon (0.452, P = 0.02). CONCLUSION: Split-dose preparation improved colon cleansing, enhanced polyp detection and unmasked significant polyp miss rates. PMID:26488024

  14. Flavonol-rich dark cocoa significantly decreases plasma endothelin-1 and improves cognition in urban children

    PubMed Central

    Calderón-Garcidueñas, Lilian; Mora-Tiscareño, Antonieta; Franco-Lira, Maricela; Cross, Janet V.; Engle, Randall; Aragón-Flores, Mariana; Gómez-Garza, Gilberto; Jewells, Valerie; Weili, Lin; Medina-Cortina, Humberto; Solorio, Edelmira; Chao, Chih-kai; Zhu, Hongtu; Mukherjee, Partha S.; Ferreira-Azevedo, Lara; Torres-Jardón, Ricardo; D'Angiulli, Amedeo

    2013-01-01

    Air pollution exposures are linked to systemic inflammation, cardiovascular and respiratory morbidity and mortality, neuroinflammation and neuropathology in young urbanites. In particular, most Mexico City Metropolitan Area (MCMA) children exhibit subtle cognitive deficits, and neuropathology studies show 40% of them exhibiting frontal tau hyperphosphorylation and 51% amyloid-β diffuse plaques (compared to 0% in low pollution control children). We assessed whether a short cocoa intervention can be effective in decreasing plasma endothelin 1 (ET-1) and/or inflammatory mediators in MCMA children. Thirty grams of dark cocoa with 680 mg of total flavonols were given daily for 10.11 ± 3.4 days (range 9–24 days) to 18 children (10.55 years, SD = 1.45; 11F/7M). Key metabolite ratios in frontal white matter and in hippocampus before and during the cocoa intervention were quantified by magnetic resonance spectroscopy. ET-1 significantly decreased after cocoa treatment (p = 0.0002). Fifteen children (83%) showed a marginally significant individual improvement in one or both of the applied simple short memory tasks. Endothelial dysfunction is a key feature of exposure to particulate matter (PM), and decreased endothelin-1 bioavailability is likely useful for brain function in the context of air pollution. Our findings suggest that cocoa interventions may be critical for early implementation of neuroprotection in highly exposed urban children. Multi-domain nutraceutical interventions could limit the risk for endothelial dysfunction, cerebral hypoperfusion, neuroinflammation, cognitive deficits, structural volumetric detrimental brain effects, and the early development of the neuropathological hallmarks of Alzheimer's and Parkinson's diseases. PMID:23986703

  15. Flavonol-rich dark cocoa significantly decreases plasma endothelin-1 and improves cognition in urban children.

    PubMed

    Calderón-Garcidueñas, Lilian; Mora-Tiscareño, Antonieta; Franco-Lira, Maricela; Cross, Janet V; Engle, Randall; Aragón-Flores, Mariana; Gómez-Garza, Gilberto; Jewells, Valerie; Medina-Cortina, Humberto; Solorio, Edelmira; Chao, Chih-Kai; Zhu, Hongtu; Mukherjee, Partha S; Ferreira-Azevedo, Lara; Torres-Jardón, Ricardo; D'Angiulli, Amedeo

    2013-01-01

    Air pollution exposures are linked to systemic inflammation, cardiovascular and respiratory morbidity and mortality, neuroinflammation and neuropathology in young urbanites. In particular, most Mexico City Metropolitan Area (MCMA) children exhibit subtle cognitive deficits, and neuropathology studies show 40% of them exhibiting frontal tau hyperphosphorylation and 51% amyloid-β diffuse plaques (compared to 0% in low pollution control children). We assessed whether a short cocoa intervention can be effective in decreasing plasma endothelin 1 (ET-1) and/or inflammatory mediators in MCMA children. Thirty grams of dark cocoa with 680 mg of total flavonols were given daily for 10.11 ± 3.4 days (range 9-24 days) to 18 children (10.55 years, SD = 1.45; 11F/7M). Key metabolite ratios in frontal white matter and in hippocampus before and during the cocoa intervention were quantified by magnetic resonance spectroscopy. ET-1 significantly decreased after cocoa treatment (p = 0.0002). Fifteen children (83%) showed a marginally significant individual improvement in one or both of the applied simple short memory tasks. Endothelial dysfunction is a key feature of exposure to particulate matter (PM), and decreased endothelin-1 bioavailability is likely useful for brain function in the context of air pollution. Our findings suggest that cocoa interventions may be critical for early implementation of neuroprotection in highly exposed urban children. Multi-domain nutraceutical interventions could limit the risk for endothelial dysfunction, cerebral hypoperfusion, neuroinflammation, cognitive deficits, structural volumetric detrimental brain effects, and the early development of the neuropathological hallmarks of Alzheimer's and Parkinson's diseases. PMID:23986703

  16. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications

    PubMed Central

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441

  17. Improved Lower Bounds of DNA Tags Based on a Modified Genetic Algorithm

    PubMed Central

    Wang, Bin; Wei, Xiaopeng; Dong, Jing; Zhang, Qiang

    2015-01-01

    The well-known massively parallel sequencing method is efficient, and it can obtain sequence data from multiple individual samples. In order to ensure that sequencing, replication, and oligonucleotide synthesis errors do not result in tags (or barcodes) that are unrecoverable or confused, the tag sequences should be abundant and sufficiently different. Recently, many design methods based on error-correcting codes have been proposed for correcting errors in the data. Because the existing tag sets contain only small numbers of tag sequences, in this study we used a modified genetic algorithm to improve the lower bounds of the tag sets. Compared with previous research, our algorithm is effective for designing sets of DNA tags. Moreover, the GC content determined by existing methods covers an imprecise range, so we improved the GC-content determination method to obtain tag sets that control the GC content within a more precise range. Finally, previous studies have only considered perfect self-complementarity; we therefore considered crossover between different tags and introduced an improved constraint into the design of tag sets. PMID:25693135
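    Two of the constraints discussed above reduce to straightforward checks, sketched below (the minimum Hamming distance and the GC-content window are illustrative values, not the paper's parameters):

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def valid_tag_set(tags, min_dist=3, gc_range=(0.4, 0.6)):
            # GC-content window: every tag must fall inside the range.
            for t in tags:
                gc = (t.count("G") + t.count("C")) / len(t)
                if not gc_range[0] <= gc <= gc_range[1]:
                    return False
            # Pairwise error-correction margin: all tags sufficiently different.
            return all(hamming(a, b) >= min_dist
                       for i, a in enumerate(tags) for b in tags[i + 1:])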

  18. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications.

    PubMed

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441

  19. Development of significantly improved catalysts for coal liquefaction and upgrading of coal extracts

    SciTech Connect

    Sinha, V.T.; Kutzenco, P.D.; Preston, W.J.; Brinen, J.S.; Graham, S.W.; Butensky, M.; Muchnick, T.L.; Hyman, D.

    1982-01-01

    During 1979-80, a new generation of very active, long-lived catalysts for hydrotreating was discovered at the Stamford Research Laboratories of the American Cyanamid Company. The catalysts are based on a unique substrate prepared in bead form from a rehydratable alumina. Their spherical shape, crush strength, and abrasion resistance seem ideally suited for the ebullated bed reactors used in the H-COAL process developed by Hydrocarbon Research, Inc. (HRI). The beads have internal pore structures that are controllable over a wider range than conventional alumina supports, leading to active catalysts that are resistant to poisoning. In September 1981, the Department of Energy granted a 3-year contract to the American Cyanamid Company for the development of significantly improved catalysts for coal liquefaction and for upgrading coal extracts, particularly in reactors used in the H-COAL process. Catalysts will first be screened in a batch reactor to identify promising candidates. The latter will be tested in a continuous aging reactor to evaluate their resistance to deactivation under coal-liquefaction and extract-upgrading conditions. Cold flow ebullation tests of catalysts of different bead size are presented, along with some screening and drying of experimental catalysts. Several methods for examining catalysts are described.

  20. Activation of Big Grain1 significantly improves grain size by regulating auxin transport in rice

    PubMed Central

    Liu, Linchuan; Tong, Hongning; Xiao, Yunhua; Che, Ronghui; Xu, Fan; Hu, Bin; Liang, Chengzhen; Chu, Jinfang; Li, Jiayang; Chu, Chengcai

    2015-01-01

    Grain size is one of the key factors determining grain yield. However, it remains largely unknown how grain size is regulated by developmental signals. Here, we report the identification and characterization of a dominant mutant big grain1 (Bg1-D) that shows an extra-large grain phenotype from our rice T-DNA insertion population. Overexpression of BG1 leads to significantly increased grain size, and the severe lines exhibit obviously perturbed gravitropism. In addition, the mutant has increased sensitivities to both auxin and N-1-naphthylphthalamic acid, an auxin transport inhibitor, whereas knockdown of BG1 results in decreased sensitivities and smaller grains. Moreover, BG1 is specifically induced by auxin treatment, preferentially expresses in the vascular tissue of culms and young panicles, and encodes a novel membrane-localized protein, strongly suggesting its role in regulating auxin transport. Consistent with this finding, the mutant has increased auxin basipetal transport and altered auxin distribution, whereas the knockdown plants have decreased auxin transport. Manipulation of BG1 in both rice and Arabidopsis can enhance plant biomass, seed weight, and yield. Taking these data together, we identify a novel positive regulator of auxin response and transport in a crop plant and demonstrate its role in regulating grain size, thus illuminating a new strategy to improve plant productivity. PMID:26283354

  1. Significant effect of Ca2+ on improving the heat resistance of lactic acid bacteria.

    PubMed

    Huang, Song; Chen, Xiao Dong

    2013-07-01

    The heat resistance of lactic acid bacteria (LAB) has been extensively investigated due to its highly practical significance. Reconstituted skim milk (RSM) has been found to be one of the most effective protectant wall materials for microencapsulating microorganisms during convective drying, such as spray drying. In addition to proteins and carbohydrate, RSM is rich in calcium. It is not clear which component is critical in the RSM protection mechanism. This study investigated the independent effect of calcium. Ca(2+) was added to lactose solution to examine its influence on the heat resistance of Lactobacillus rhamnosus ZY, Lactobacillus casei Zhang, Lactobacillus plantarum P8 and Streptococcus thermophilus ND03. The results showed that certain Ca(2+) concentrations enhanced the heat resistance of the LAB strains to different extents, that is, they produced higher survival and shorter regrowth lag times of the bacterial cells. In some cases, the improvements were dramatic. More scientifically insightful and more intensive instrumental studies of the behavior of Ca(2+) around and in the cells should be carried out in the near future. In the meantime, this work may lead to the development of more cost-effective wall materials with Ca(2+) added as a prime factor. PMID:23617813

  2. Activation of Big Grain1 significantly improves grain size by regulating auxin transport in rice.

    PubMed

    Liu, Linchuan; Tong, Hongning; Xiao, Yunhua; Che, Ronghui; Xu, Fan; Hu, Bin; Liang, Chengzhen; Chu, Jinfang; Li, Jiayang; Chu, Chengcai

    2015-09-01

    Grain size is one of the key factors determining grain yield. However, it remains largely unknown how grain size is regulated by developmental signals. Here, we report the identification and characterization of a dominant mutant big grain1 (Bg1-D) that shows an extra-large grain phenotype from our rice T-DNA insertion population. Overexpression of BG1 leads to significantly increased grain size, and the severe lines exhibit obviously perturbed gravitropism. In addition, the mutant has increased sensitivities to both auxin and N-1-naphthylphthalamic acid, an auxin transport inhibitor, whereas knockdown of BG1 results in decreased sensitivities and smaller grains. Moreover, BG1 is specifically induced by auxin treatment, preferentially expresses in the vascular tissue of culms and young panicles, and encodes a novel membrane-localized protein, strongly suggesting its role in regulating auxin transport. Consistent with this finding, the mutant has increased auxin basipetal transport and altered auxin distribution, whereas the knockdown plants have decreased auxin transport. Manipulation of BG1 in both rice and Arabidopsis can enhance plant biomass, seed weight, and yield. Taking these data together, we identify a novel positive regulator of auxin response and transport in a crop plant and demonstrate its role in regulating grain size, thus illuminating a new strategy to improve plant productivity. PMID:26283354

  3. An extended bioreaction database that significantly improves reconstruction and analysis of genome-scale metabolic networks.

    PubMed

    Stelzer, Michael; Sun, Jibin; Kamphans, Tom; Fekete, Sándor P; Zeng, An-Ping

    2011-11-01

    The bioreaction database established by Ma and Zeng (Bioinformatics, 2003, 19, 270-277) for in silico reconstruction of genome-scale metabolic networks has been widely used. Based on more recent information in the reference databases KEGG LIGAND and Brenda, we upgrade the bioreaction database in this work by almost doubling the number of reactions from 3565 to 6851. Over 70% of the reactions have been manually updated/revised in terms of reversibility, reactant pairs, currency metabolites and error correction. For the first time, 41 spontaneous sugar mutarotation reactions are introduced into the biochemical database. The upgrade significantly improves the reconstruction of genome-scale metabolic networks. Many gaps or missing biochemical links can be recovered, as exemplified with three model organisms: Homo sapiens, Aspergillus niger, and Escherichia coli. The topological parameters of the constructed networks were also substantially affected; however, the overall network structure remains scale-free. Furthermore, we consider the problem of computing biologically feasible shortest paths in reconstructed metabolic networks. We show that these paths are hard to compute and present solutions to find such paths in networks of small and medium size. PMID:21952610
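    The "biologically feasible shortest path" notion can be sketched compactly: build the metabolite graph only from curated main reactant pairs and exclude currency metabolites, so that a path cannot shortcut through ATP or H2O. The sketch below assumes networkx is available; the currency list is an illustrative subset:

        import networkx as nx

        CURRENCY = {"ATP", "ADP", "NADH", "NAD+", "NADPH", "NADP+", "H2O", "CO2", "Pi"}

        def feasible_shortest_path(reactant_pairs, source, target):
            g = nx.Graph()
            for a, b in reactant_pairs:          # curated substrate-product pairs
                if a not in CURRENCY and b not in CURRENCY:
                    g.add_edge(a, b)
            return nx.shortest_path(g, source, target)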

  4. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constrained Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  5. A combined approach to cartographic displacement for buildings based on skeleton and improved elastic beam algorithm.

    PubMed

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constrained Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  6. An improved phase shift reconstruction algorithm of fringe scanning technique for X-ray microscopy

    SciTech Connect

    Lian, S.; Yang, H.; Kudo, H.; Momose, A.; Yashiro, W.

    2015-02-15

    The X-ray phase imaging method has been applied to observe soft biological tissues, which can be imaged by exploiting the so-called “Talbot effect” of an X-ray grating. One type of X-ray phase imaging combines an X-ray imaging microscope equipped with a Fresnel zone plate and a phase grating. Using the fringe scanning technique, a high-precision phase shift image can be obtained by displacing the grating step by step and measuring dozens of sample images. The number of images is selected to reduce the error caused by the non-sinusoidal component of the Talbot self-image at the imaging plane: a larger number suppresses the error more, but increases radiation exposure and requires higher mechanical stability of the equipment. In this paper, we analyze the approximation error of the fringe scanning technique for X-ray microscopy using just one grating and propose an improved algorithm. We compute the approximation error by iteration and substitute it into the phase-shift reconstruction process. This procedure suppresses the error even with few sample images. The results of simulation experiments show that the precision of the phase shift image reconstructed by the proposed algorithm with 4 sample images is almost the same as that reconstructed by the conventional algorithm with 40 sample images. We have also succeeded in an experiment with real data.
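    For context, the conventional N-step fringe-scanning estimator that the improved algorithm builds on extracts the wrapped phase from the first Fourier coefficient of the per-pixel intensity sequence (sign conventions vary between setups):

        import numpy as np

        def fringe_scanning_phase(frames):
            """frames: array of shape (N, H, W), one image per grating step of 1/N pitch."""
            n = frames.shape[0]
            k = np.arange(n).reshape(-1, 1, 1)
            num = np.sum(frames * np.sin(2 * np.pi * k / n), axis=0)
            den = np.sum(frames * np.cos(2 * np.pi * k / n), axis=0)
            return -np.arctan2(num, den)         # wrapped phase in (-pi, pi]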

  7. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping

    2014-10-01

    Migration through resolution cells (MTRC) arises in high-resolution inverse synthetic aperture radar (ISAR) imaging. An MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. First, for a rigid-body target in stable flight, initial values of the target's rotation angle and rotation center are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution under a minimum-entropy criterion. The procedure starts with the estimated initial rotation angle and center, and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches, one over the rotation angle and the other over the rotation center, each realized with the golden-section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied with the optimized rotation angle and center to compensate the MTRC, after which the ISAR image is well focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
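    The two 1-D searches can each be carried out with a standard golden-section minimizer of the image entropy; in the sketch below, image_entropy and polar_format are hypothetical stand-ins for the imaging pipeline's own routines:

        import numpy as np

        def golden_section_min(f, a, b, tol=1e-4):
            g = (np.sqrt(5) - 1) / 2             # inverse golden ratio
            c, d = b - g * (b - a), a + g * (b - a)
            while abs(b - a) > tol:
                if f(c) < f(d):                  # minimum lies in [a, d]
                    b, d = d, c
                    c = b - g * (b - a)
                else:                            # minimum lies in [c, b]
                    a, c = c, d
                    d = a + g * (b - a)
            return (a + b) / 2

        # Usage sketch: fix the center, search the angle (then alternate):
        # best_angle = golden_section_min(
        #     lambda ang: image_entropy(polar_format(echo, ang, center0)), lo, hi)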

  8. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, S. M.; Slaughter, C. D.

    1991-01-01

    The observed morphology of cometary comae is determined by ejection circumstances and the interaction of the ejected material with the local environment. Anisotropic emission can provide useful information on such things as the orientation of the nucleus, the location of active areas on the nucleus, and the formation of ion structure near the nucleus. However, discrete coma features are usually diffuse, of low amplitude, and superimposed on a steep intensity gradient radial to the nucleus. To improve the visibility of these features, a variety of digital enhancement algorithms have been employed with varying degrees of success. They usually produce some degree of spatial filtering and are chosen to optimize the visibility of certain detail. Since information in the image is altered, it is important to understand the effects that parameter selection and processing artifacts can have on subsequent interpretation. Using the criterion that the ideal algorithm must enhance low-contrast features while not introducing misleading artifacts (or features that cannot be seen in the stretched, unprocessed image), the suitability of various algorithms that aid cometary studies was assessed. The strong and weak points of each are identified in the context of maintaining the positional integrity of features at the expense of photometric information.
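    One of the simplest members of this family of enhancements divides out the azimuthally averaged radial brightness profile so that low-amplitude coma features stand out against the steep radial gradient; the sketch below assumes the nucleus position is known and every radial bin is populated:

        import numpy as np

        def remove_radial_gradient(img, xc, yc, nbins=200):
            y, x = np.indices(img.shape)
            r = np.hypot(x - xc, y - yc)
            idx = np.minimum((r / r.max() * nbins).astype(int), nbins - 1)
            flat, idx_flat = img.ravel(), idx.ravel()
            # Median brightness in each radial bin gives the smooth coma model.
            profile = np.array([np.median(flat[idx_flat == i]) for i in range(nbins)])
            model = profile[idx]
            return img / np.maximum(model, np.finfo(float).eps)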

  9. Effective application of improved profit-mining algorithm for the interday trading model.

    PubMed

    Hsieh, Yu-Lung; Yang, Don-Lin; Wu, Jungpin

    2014-01-01

    Many real-world applications of association rule mining from large databases help users make better decisions. However, they do not work well in financial markets at this time. In addition to a high profit, an investor also looks for low-risk trading with a better winning rate. The traditional approach of using minimum confidence and support thresholds needs to be changed. Based on an interday trading model, we propose effective profit-mining algorithms which provide investors with profit rules including information about profit, risk, and winning rate. Since profit-mining in the financial market is still in its infancy, it is important to detail the inner workings of the mining algorithms and illustrate the best way to apply them. In this paper we go into the details of our improved profit-mining algorithm and showcase effective applications with experiments using real-world trading data. The results show that our approach is practical and effective, with good performance for various datasets. PMID:24688442

  10. Effective Application of Improved Profit-Mining Algorithm for the Interday Trading Model

    PubMed Central

    Wu, Jungpin

    2014-01-01

    Many real-world applications of association rule mining from large databases help users make better decisions. However, they do not work well in financial markets at this time. In addition to a high profit, an investor also looks for low-risk trading with a better winning rate. The traditional approach of using minimum confidence and support thresholds needs to be changed. Based on an interday trading model, we propose effective profit-mining algorithms which provide investors with profit rules including information about profit, risk, and winning rate. Since profit-mining in the financial market is still in its infancy, it is important to detail the inner workings of the mining algorithms and illustrate the best way to apply them. In this paper we go into the details of our improved profit-mining algorithm and showcase effective applications with experiments using real-world trading data. The results show that our approach is practical and effective, with good performance for various datasets. PMID:24688442

  11. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    SciTech Connect

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-11-15

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of the breast surface, the duct system and terminal ductal lobular units, Cooper's ligaments, lymphatic and blood vessel systems, the pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, judging the quality of the simulated mammograms obtained with the new algorithm against mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of the fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Parameters calculated on simulated images, such as the β exponent deduced from the power-law spectral analysis and the fractal dimension, are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The
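    The β exponent mentioned in the results is conventionally obtained by fitting the slope of the radially averaged power spectrum in log-log space, P(f) ∝ 1/f^β; a sketch for a square (ideally windowed) region of interest follows:

        import numpy as np

        def beta_exponent(roi):
            ps = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
            n = roi.shape[0]
            y, x = np.indices(ps.shape)
            r = np.hypot(x - n // 2, y - n // 2).astype(int)
            sums = np.bincount(r.ravel(), ps.ravel())        # power summed per radius
            counts = np.bincount(r.ravel())
            radial = sums / np.maximum(counts, 1)            # radially averaged spectrum
            f = np.arange(1, n // 2)                         # skip DC, stay below Nyquist
            slope, _ = np.polyfit(np.log(f), np.log(radial[f]), 1)
            return -slope                                    # P(f) ~ f**(-beta)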

  12. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm.

    PubMed

    Langner, Florian; Jürgens, Tim

    2016-01-01

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users' speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception. PMID:27604785

  13. A new algorithm to improve assessment of cortical bone geometry in pQCT.

    PubMed

    Cervinka, Tomas; Sievänen, Harri; Lala, Deena; Cheung, Angela M; Giangregorio, Lora; Hyttinen, Jari

    2015-12-01

    High-resolution peripheral quantitative computed tomography (HR-pQCT) is now considered the leading imaging modality in bone research. However, access to HR-pQCT is limited and image acquisition is mainly constrained to the distal third of appendicular bones. Hence, conventional pQCT is still commonly used despite inaccurate threshold-based segmentation of cortical bone that can compromise the assessment of whole bone strength. Therefore, this study addressed whether the use of an advanced image processing algorithm, called OBS, can enhance cortical bone analysis in pQCT images and provide information similar to HR-pQCT when the same volumes of interest are analyzed. Using pQCT images of the European Forearm Phantom (EFP), and pQCT and HR-pQCT images of the distal tibia from 15 cadavers, we compared the results from the OBS algorithm with those obtained from common pQCT analyses, HR-pQCT manual analysis (considered the gold standard) and the common HR-pQCT dual-threshold technique. We found that the use of the OBS segmentation method for pQCT image analysis of EFP data did not result in any improvement but reached performance in cortical bone delineation similar to that of the HR-pQCT image analyses. The assessments of cortical cross-sectional bone area and thickness by the OBS algorithm were overestimated by less than 4%, while area moments of inertia were overestimated by ~5–10%, depending on the reference HR-pQCT analysis method. In conclusion, this study showed that the OBS algorithm performed reasonably well and offers a promising practical tool to enhance the assessment of cortical bone geometry in pQCT. PMID:26428659

  14. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage.

    PubMed

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user
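    Both estimation stages rest on the same range-based least-squares core; a bare-bones Gauss-Newton multilateration step is sketched below (real pseudorange processing would also carry clock-bias terms, omitted here for brevity):

        import numpy as np

        def multilaterate(nodes, ranges, x0, iters=10):
            """nodes: (n, 3) known positions; ranges: (n,) measured distances."""
            nodes = np.asarray(nodes, dtype=float)
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                diffs = x - nodes
                dists = np.linalg.norm(diffs, axis=1)
                J = diffs / dists[:, None]        # Jacobian of range w.r.t. position
                r = np.asarray(ranges) - dists    # measurement residuals
                x = x + np.linalg.lstsq(J, r, rcond=None)[0]
            return x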

  15. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, this paper presents an improved algorithm based on the Apriori algorithm. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersection. It takes advantage of the efficiency of the vertical data layout and of intersection, and prunes candidate frequent itemsets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
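    The vertical layout and intersection ideas are easy to illustrate: each item maps to the set of transaction ids that contain it, and the support of a candidate itemset is the size of the intersection of its members' tid-sets, so no database rescan is needed. A sketch for frequent pairs (not the paper's full system) follows:

        from itertools import combinations

        def vertical_frequent_pairs(transactions, minsup):
            tidsets = {}
            for tid, items in enumerate(transactions):
                for it in items:
                    tidsets.setdefault(it, set()).add(tid)
            # Apriori-style pruning: only items frequent on their own can extend.
            frequent = {it for it, t in tidsets.items() if len(t) >= minsup}
            pairs = {}
            for a, b in combinations(sorted(frequent), 2):
                common = tidsets[a] & tidsets[b]   # intersection gives the support
                if len(common) >= minsup:
                    pairs[(a, b)] = len(common)
            return pairs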

  16. Registration of the Cone Beam CT and Blue-Ray Scanned Dental Model Based on the Improved ICP Algorithm.

    PubMed

    Mei, Xue; Li, Zhenhua; Xu, Songsong; Guo, Xiaoyan

    2014-01-01

    Multimodality image registration and fusion provide complementary information for guiding dental implant surgery. To meet the need to register images of different resolutions, we develop an improved Iterative Closest Point (ICP) algorithm that focuses on the registration of Cone Beam Computed Tomography (CT) images with high-resolution blue-light scanner images. The proposed algorithm includes two major phases, coarse and precise registration. First, to reduce matching interference from subjective human factors, we extract feature points based on curvature characteristics and use an improved three-point translational transformation to achieve coarse registration. Then, the feature point set and reference point set obtained by the initial registration transformation are processed in the precise registration step. Even with unsatisfactory initial values, this two-step registration method guarantees global convergence and convergence precision. Experimental results demonstrate that the method successfully registers the Cone Beam CT dental model and the blue-light scanner model with high accuracy, so it can provide a research foundation for developing software for the registration of multimodality medical data. PMID:24511309
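    For reference, the core of one classical point-to-point ICP iteration that the precise-registration phase refines can be written compactly: pair each source point with its nearest target point, then recover the rigid transform in closed form via SVD (Kabsch). The sketch below uses scipy's KD-tree and assumes the coarse registration has already been applied:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(source, target):
            """source, target: (n, 3) and (m, 3) point arrays."""
            _, nn = cKDTree(target).query(source)   # nearest-neighbour correspondences
            matched = target[nn]
            mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
            H = (source - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            return source @ R.T + t, R, t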

  17. Using gas modifiers to significantly improve sensitivity and selectivity in a cylindrical FAIMS device.

    PubMed

    Purves, Randy W; Ozog, Allison R; Ambrose, Stephen J; Prasad, Satendra; Belford, Michael; Dunyach, Jean-Jacques

    2014-07-01

    Recent reports describing enhanced performance when using gas additives in a DMS device (planar electrodes) have indicated that comparable benefits are not attainable using FAIMS (cylindrical electrodes), owing to the non-homogeneous electric fields within the analyzer region. In this study, a FAIMS system (having cylindrical electrodes) was modified to allow for controlled delivery of gas additives. An experiment was carried out that illustrates the important distinction between gas modifiers present as unregulated contaminants and modifiers added in a controlled manner. The effect of contamination was simulated by adjusting the ESI needle position to promote incomplete desolvation, thereby permitting ESI solvent vapor into the FAIMS analyzer region, causing signal instability and irreproducible CV values. However, by actively controlling the delivery of the gas modifier, reproducible CV spectra were obtained. The effects of adding different gas modifiers were examined using 15 positive ions having mass-to-charge (m/z) values between 90 and 734. Significant improvements in peak capacity and increases in ion transmission were readily attained by adding acetonitrile vapor, even at trace levels (≤0.1%). Increases in signal intensity were greatest for the low m/z ions; for the six lowest molecular weight species, signal intensities increased by ∼10- to over 100-fold compared with using nitrogen without gas additives, resulting in equivalent or better signal intensities compared with ESI without FAIMS. These results confirm that the analytical benefits derived from the addition of gas modifiers reported with a uniform electric field (DMS) are also observed using a non-homogeneous electric field (FAIMS) in the analyzer region. PMID:24796261

  18. Significant contribution of realistic vegetation representation to improved simulation and prediction of climate anomalies over land

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Doblas-Reyes, Francisco; van den Hurk, Bart; Miller, Paul

    2015-04-01

    The EC-Earth earth system model has recently been extended to include vegetation dynamics through coupling with the LPJ-Guess model. In its original formulation, the coupling between atmosphere and vegetation variability operates simply through the vegetation Leaf Area Index (LAI), which affects climate only by changing the vegetation's physiological resistance to evapotranspiration. This coupling, with no implied change of the vegetation fractional coverage, has been reported to have a weak effect on the surface climate modeled by EC-Earth (e.g., Weiss et al. 2012). The effective sub-grid vegetation fractional coverage can vary seasonally and at interannual time-scales as a function of leaf-canopy growth, phenology and senescence, and it therefore affects biophysical parameters such as surface roughness, albedo and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation density on the LAI, based on a Lambert-Beer formulation. By comparing historical 20th-century simulations and retrospective forecasts performed with the new effective fractional-coverage parameterization against reference simulations using the original constant vegetation fraction, we show an increased effect of vegetation on the EC-Earth surface climate. The analysis shows considerable sensitivity of the EC-Earth surface climate at seasonal to interannual time-scales to the variability of the vegetation effective fractional coverage. Particularly large effects appear over boreal-winter middle-to-high latitudes, where the cooling effect of the new parameterization corrects the warm biases of the control simulations over land. For boreal winter, the realistic representation of vegetation variability leads to a significant improvement of the skill in predicting surface climate over land at seasonal time-scales. A potential predictability experiment extended to longer time-scales also indicates the
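    A plausible form of the Lambert-Beer dependence described above is the saturating relation below, where the extinction coefficient k is an assumed free parameter rather than a value quoted from the paper:

        % effective vegetation fractional coverage as a function of LAI
        \[
          C_{\mathrm{veg}} = 1 - e^{-k \cdot \mathrm{LAI}}
        \]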

  19. An optimized procedure for plant recovery from somatic embryos significantly facilitates the genetic improvement of Vitis.

    PubMed

    Li, Zhijian T; Kim, Kyung-Hee; Dhekney, Sadanand A; Jasinski, Jonathan R; Creech, Matthew R; Gray, Dennis J

    2014-01-01

    Plant regeneration from grapevine (Vitis spp.) via somatic embryogenesis typically is poor. Recovery of plants from Vitis rotundifolia Michx. (muscadine grape) is particularly problematic due to extremely low efficiency, including extended culture durations required for embryo-plant conversion. Poor plant recovery is an obstacle to the selection of improved genetically modified lines. Somatic embryos (SEs) of V. rotundifolia cultivar Delicious (Del-HS) and Vitis vinifera L. cultivar Thompson Seedless (TS) were used to identify culture media and conditions that promoted embryo differentiation and plant conversion; this resulted in a two-step culture system. In comparative culture experiments, C2D medium containing 6% sucrose was the most effective, among four distinct formulae tested, for inducing precocious SE germination and cell differentiation. This medium, further supplemented with 4 µM 6-benzylaminopurine (C2D4B), was subsequently determined to enhance post-germinative growth of SE. MS medium supplemented with 0.5 µM 1-naphthaleneacetic acid (MSN) was then utilized to stimulate root and shoot growth of germinated SE. An average of 35% and 80% of 'Del-HS' and 'TS' SE, respectively, developed into plants. All plants developed robust root and shoot systems and exhibited excellent survival following transfer to soil. Over 150 plants of 'Del-HS' were regenerated and established within 2.5 months, which is a dramatic reduction from the 6- to 12-month time period previously required. Similarly, 88 'TS' plant lines were obtained within the same time period. Subsequently, seven out of eight Vitis cultivars exhibited significantly increased plant conversion percentages, demonstrating broad application of the two-step culture system to produce the large numbers of independent plant lines needed for selection of desired traits. PMID:26504540

  20. An optimized procedure for plant recovery from somatic embryos significantly facilitates the genetic improvement of Vitis

    PubMed Central

    Li, Zhijian T; Kim, Kyung-Hee; Dhekney, Sadanand A; Jasinski, Jonathan R; Creech, Matthew R; Gray, Dennis J

    2014-01-01

    Plant regeneration from grapevine (Vitis spp.) via somatic embryogenesis typically is poor. Recovery of plants from Vitis rotundifolia Michx. (muscadine grape) is particularly problematic due to extremely low efficiency, including extended culture durations required for embryo–plant conversion. Poor plant recovery is an obstacle to the selection of improved genetically modified lines. Somatic embryos (SEs) of V. rotundifolia cultivar Delicious (Del-HS) and Vitis vinifera L. cultivar Thompson Seedless (TS) were used to identify culture media and conditions that promoted embryo differentiation and plant conversion; this resulted in a two-step culture system. In comparative culture experiments, C2D medium containing 6% sucrose was the most effective, among four distinct formulae tested, for inducing precocious SE germination and cell differentiation. This medium, further supplemented with 4 µM 6-benzylaminopurine (C2D4B), was subsequently determined to enhance post-germinative growth of SE. MS medium supplemented with 0.5 µM 1-naphthaleneacetic acid (MSN) was then utilized to stimulate root and shoot growth of germinated SE. An average of 35% and 80% of ‘Del-HS’ and ‘TS’ SE, respectively, developed into plants. All plants developed robust root and shoot systems and exhibited excellent survival following transfer to soil. Over 150 plants of ‘Del-HS’ were regenerated and established within 2.5 months, which is a dramatic reduction from the 6- to 12-month time period previously required. Similarly, 88 ‘TS’ plant lines were obtained within the same time period. Subsequently, seven out of eight Vitis cultivars exhibited significantly increased plant conversion percentages, demonstrating broad application of the two-step culture system to produce the large numbers of independent plant lines needed for selection of desired traits. PMID:26504540

  1. An improved approximation algorithm for scaffold filling to maximize the common adjacencies.

    PubMed

    Liu, Nan; Jiang, Haitao; Zhu, Daming; Zhu, Binhai

    2013-01-01

    Scaffold filling is a new combinatorial optimization problem in genome sequencing. The one-sided scaffold filling problem can be described as follows: given an incomplete genome I and a complete (reference) genome G, fill the missing genes into I such that the number of common (string) adjacencies between the resulting genome I' and G is maximized. This problem is NP-complete for genomes with duplicated genes, and the best known approximation factor is 1.33, obtained with a greedy strategy. In this paper, we prove a better lower bound on the optimal solution and devise a new algorithm that exploits the maximum matching method and a local improvement technique, improving the approximation factor to 1.25. For genomes with gene repetitions, this is the only known NP-complete problem which admits an approximation with a small constant factor (less than 1.5). PMID:24334385
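    The objective being maximized is simple to state in code: count the multiset of neighbouring-gene pairs shared by the two genomes (pairs are treated here as unordered; orientation conventions vary across the literature):

        from collections import Counter

        def adjacencies(genome):
            # Multiset of unordered neighbour pairs, e.g. "acbd" -> {a,c}, {b,c}, {b,d}.
            return Counter(tuple(sorted(p)) for p in zip(genome, genome[1:]))

        def common_adjacencies(g1, g2):
            a1, a2 = adjacencies(g1), adjacencies(g2)
            return sum((a1 & a2).values())       # size of the multiset intersection

        # e.g. common_adjacencies("acbd", "abcd") == 1 (only the {b, c} pair is shared)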

  2. Improved Temperature Sounding and Quality Control Methodology Using AIRS/AMSU Data: The AIRS Science Team Version 5 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky

    2009-01-01

    This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for the use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R(sub i) for all channels. This new approach allows for the generation of more accurate values of R(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require the use of any microwave data. This new AIRS Only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of AIRS Version 4, Version 5, and Version 5 AO for a single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.

  3. Using trend templates in a neonatal seizure algorithm improves detection of short seizures in a foetal ovine model.

    PubMed

    Zwanenburg, Alex; Andriessen, Peter; Jellema, Reint K; Niemarkt, Hendrik J; Wolfs, Tim G A M; Kramer, Boris W; Delhaas, Tammo

    2015-03-01

    Seizures shorter than one minute in duration are difficult to assess correctly using seizure detection algorithms. We aimed to improve the performance of a neonatal seizure detection algorithm for short seizures through the use of trend templates for seizure onset and end. Bipolar EEG was recorded in a transiently asphyxiated ovine model at 0.7 gestational age, a common experimental model for studying brain development in humans of 30-34 weeks of gestation. Transient asphyxia led to electrographic seizures within 6-8 h. A total of 3159 seizures, 2386 of them shorter than one minute, were annotated in 1976 hour-long EEG recordings from 17 foetal lambs. To capture EEG characteristics, five features, sensitive to seizures, were calculated and used to derive trend information. Feature values and trend information were used as input for support vector machine classification and subsequently post-processed. Performance metrics, calculated after post-processing, were compared between analyses with and without trend information. Detector performance was assessed after five-fold cross-validation conducted ten times with random splits. The use of trend templates for seizure onset and end in a neonatal seizure detection algorithm significantly improves the correct detection of short seizures using two-channel EEG recordings, from 54.3% (52.6-56.1) to 59.5% (58.5-59.9) at an FDR of 2.0 (median (range); p < 0.001, Wilcoxon signed rank test). Using trend templates might therefore aid in the detection of short seizures during EEG monitoring at the NICU. PMID:25651839

  4. Acute Myocardial Infarction Complicated by Cardiogenic Shock: An Algorithm-Based Extracorporeal Membrane Oxygenation Program Can Improve Clinical Outcomes.

    PubMed

    Unai, Shinya; Tanaka, Daizo; Ruggiero, Nicholas; Hirose, Hitoshi; Cavarocchi, Nicholas C

    2016-03-01

    Extracorporeal membrane oxygenation (ECMO) in our institution resulted in near total mortality prior to the establishment of an algorithm-based program in July 2010. We hypothesized that an algorithm-based ECMO program improves the outcome of patients with acute myocardial infarction complicated by cardiogenic shock. Between March 2003 and July 2013, 29 patients underwent emergent catheterization for acute myocardial infarction due to left main or proximal left anterior descending artery occlusion complicated by cardiogenic shock (defined as systolic blood pressure <90 mm Hg despite multiple inotropes, with or without intra-aortic balloon pump, and lactic acidosis). Of 29 patients, 15 patients were treated before July 2010 (Group 1, old program), and 14 patients were treated after July 2010 (Group 2, new program). There were no significant differences in the baseline characteristics, including age, sex, coronary risk factors, and left ventricular ejection fraction between the two groups. Cardiopulmonary resuscitation prior to ECMO was performed in two cases (13%) in Group 1 and four cases (29%) in Group 2. ECMO support was performed in one case (6.7%) in Group 1 and six cases (43%) in Group 2. The 30-day survival of Group 1 versus Group 2 was 40 versus 79% (P = 0.03), and 1-year survival rate was 20 versus 56% (P = 0.01). The survival rate for patients who underwent ECMO was 0% in Group 1 versus 83% in Group 2 (P = 0.09). In Group 2, the mean duration on ECMO was 9.8 ± 5.9 days. Of the six patients who required ECMO in Group 2, 100% were successfully weaned off ECMO or were bridged to ventricular assist device implantation. Initiation of an algorithm-based ECMO program improved the outcomes in patients with acute myocardial infarction complicated by cardiogenic shock. PMID:26148217

  5. Three-Dimensional Path Planning and Guidance of Leg Vascular Based on Improved Ant Colony Algorithm in Augmented Reality.

    PubMed

    Gao, Ming-ke; Chen, Yi-min; Liu, Quan; Huang, Chen; Li, Ze-yu; Zhang, Dian-hua

    2015-11-01

    Preoperative path planning plays a critical role in vascular access surgery, which is exceptionally difficult, requires long training periods, and demands precise operation. Because surgeons differ in skill level, large-diameter vessels are usually chosen for surgery and other potentially optimal paths are not considered. Moreover, both patients and surgeons are exposed to X-ray radiation during the surgical procedure. This study proposes an improved ant colony algorithm to plan an optimal three-dimensional vascular path that jointly considers factors such as catheter diameter, vessel length and diameter, and vessel curvature and torsion. To protect the doctor and patient from long-term X-ray exposure, augmented reality technology is adopted to register the reconstructed vascular model with the physical model, the catheter is located by an electromagnetic tracking system, and a head-mounted display shows the planned path in real time while the catheter advance is monitored. The experiments confirm the reasonableness of the preoperative path planning and the reliability of the algorithm. The augmented reality experiment displays the vascular phantom model, planned path, and catheter trajectory accurately and in real time, demonstrating the feasibility of the method. The proposed scheme, based on the improved ant colony algorithm for three-dimensional vascular path planning in augmented reality, offers practical guidance for preoperative path planning, intraoperative catheter guidance, and surgical training, and provides a theoretical path-planning method for vascular access surgery. It is a safe and reliable path planning approach with practical reference value. PMID:26319273
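
    For readers unfamiliar with the underlying search, here is a minimal ant colony optimization sketch on a toy weighted graph. The graph and cost function are illustrative assumptions and omit the paper's vessel-specific criteria (catheter diameter, curvature, torsion).

```python
# Minimal ant colony optimization sketch: pheromone-guided path search
# on a small weighted graph. Toy problem, not the authors' method.
import random

# adjacency: node -> list of (neighbor, cost)
graph = {0: [(1, 2.0), (2, 1.0)], 1: [(3, 1.0)], 2: [(3, 3.0)], 3: []}
pheromone = {(u, v): 1.0 for u in graph for v, _ in graph[u]}

def run_ant(start=0, goal=3, alpha=1.0, beta=2.0):
    node, path, cost = start, [start], 0.0
    while node != goal:
        edges = graph[node]
        weights = [pheromone[(node, v)] ** alpha * (1.0 / c) ** beta
                   for v, c in edges]
        v, c = random.choices(edges, weights=weights)[0]
        path.append(v); cost += c; node = v
    return path, cost

best = None
for it in range(50):
    # evaporate, then let a few ants deposit pheromone inversely to cost
    for e in pheromone:
        pheromone[e] *= 0.9
    for _ in range(5):
        path, cost = run_ant()
        if best is None or cost < best[1]:
            best = (path, cost)
        for u, v in zip(path, path[1:]):
            pheromone[(u, v)] += 1.0 / cost

print("best path found:", best)
```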

  6. MODIS calibration algorithm improvements developed for Collection 6 Level-1B

    NASA Astrophysics Data System (ADS)

    Wenny, Brian N.; Sun, Junqiang; Xiong, Xiaoxiong; Wu, Aisheng; Chen, Hongda; Angal, Amit; Choi, Taeyoung; Chen, Na; Madhavan, Sriharsha; Geng, Xu; Kuyper, James; Tan, Liqin

    2010-09-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) has been operating on both the Terra and Aqua spacecraft for over 10.5 and 8 years, respectively. Over 40 science products are generated routinely from MODIS Earth images and used extensively by the global science community for a wide variety of land, ocean, and atmosphere applications. Over the mission lifetime, several versions of the MODIS data set have been in use as the calibration and data processing algorithms evolved. Currently Version 5 MODIS data is the baseline Level-1B calibrated science product. The MODIS Characterization Support Team (MCST), with input from the MODIS Science Team, developed and delivered a number of improvements and enhancements to the calibration algorithms, Level-1B processing code and Look-up Tables for the Version 6 Level-1B MODIS data. Version 6 implements a number of changes in the calibration methodology for both the Reflective Solar Bands (RSB) and Thermal Emissive Bands (TEB). This paper describes the improvements introduced in Collection 6 to the RSB and TEB calibration and detector Quality Assurance (QA) handling.

  7. Improvements to the Percolator algorithm for peptide identification from shotgun proteomics data sets

    PubMed Central

    Spivak, Marina; Weston, Jason; Bottou, Léon; Käll, Lukas; Noble, William Stafford

    2009-01-01

    Shotgun proteomics coupled with database search software allows the identification of a large number of peptides in a single experiment. However, some existing search algorithms, such as SEQUEST, use score functions that are designed primarily to identify the best peptide for a given spectrum. Consequently, when comparing identifications across spectra, the SEQUEST score function Xcorr fails to discriminate accurately between correct and incorrect peptide identifications. Several machine learning methods have been proposed to address the resulting classification task of distinguishing between correct and incorrect peptide-spectrum matches (PSMs). A recent example is Percolator, which uses semi-supervised learning and a decoy database search strategy to learn to distinguish between correct and incorrect PSMs identified by a database search algorithm. The current work describes three improvements to Percolator. (1) Percolator’s heuristic optimization is replaced with a clear objective function, with intuitive reasons behind its choice. (2) Tractable nonlinear models are used instead of linear models, leading to improved accuracy over the original Percolator. (3) A method, Q-ranker, for directly optimizing the number of identified spectra at a specified q value is proposed, which achieves further gains. PMID:19385687
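
    As background, the sketch below illustrates the target-decoy q-value computation that Percolator-style rescoring and Q-ranker build on. It is a generic illustration with synthetic scores, not code from Percolator itself.

```python
# Hedged sketch of target-decoy q-value estimation: PSMs are ranked by
# score and each PSM's q value is the minimum FDR at which it would be
# accepted. Synthetic score distributions stand in for real PSM scores.
import numpy as np

def qvalues(scores, is_decoy):
    order = np.argsort(-scores)                 # best score first
    decoys = np.cumsum(is_decoy[order])
    targets = np.cumsum(~is_decoy[order])
    fdr = decoys / np.maximum(targets, 1)       # simple FDR estimate
    q = np.minimum.accumulate(fdr[::-1])[::-1]  # enforce monotonicity
    out = np.empty_like(q)
    out[order] = q
    return out

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(2, 1, 500), rng.normal(0, 1, 500)])
is_decoy = np.arange(1000) >= 500
q = qvalues(scores, is_decoy)
print("targets accepted at q <= 0.01:", int(((q <= 0.01) & ~is_decoy).sum()))
```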

  8. An improved optimization algorithm and Bayes factor termination criterion for sequential projection pursuit

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Jarman, Kristin H.; Harvey, Scott D.; Posse, Christian; Wright, Bob W.

    2005-05-28

    A fundamental problem in analysis of highly multivariate spectral or chromatographic data is reduction of dimensionality. Principal components analysis (PCA), concerned with explaining the variance-covariance structure of the data, is a commonly used approach to dimension reduction. Recently an attractive alternative to PCA, sequential projection pursuit (SPP), has been introduced. Designed to elicit clustering tendencies in the data, SPP may be more appropriate when performing clustering or classification analysis. However, the existing genetic algorithm (GA) implementation of SPP has two shortcomings, computation time and inability to determine the number of factors necessary to explain the majority of the structure in the data. We address both these shortcomings. First, we introduce a new SPP algorithm, a random scan sampling algorithm (RSSA), that significantly reduces computation time. We compare the computational burden of the RSSA and GA implementations for SPP on a dataset containing Raman spectra of twelve organic compounds. Second, we propose a Bayes factor criterion, BFC, as an effective measure for selecting the number of factors needed to explain the majority of the structure in the data. We compare SPP to PCA on two datasets varying in type, size, and difficulty; in both cases SPP achieves a higher accuracy with a lower number of latent variables.

  9. Waste Minimization Improvements Achieved Through Six Sigma Analysis Result In Significant Cost Savings

    SciTech Connect

    Mousseau, Jeffrey, D.; Jansen, John, R.; Janke, David, H.; Plowman, Catherine, M.

    2003-02-26

    Improved waste minimization practices at the Department of Energy's (DOE) Idaho National Engineering and Environmental Laboratory (INEEL) are leading to a 15% reduction in the generation of hazardous and radioactive waste. Bechtel, BWXT Idaho, LLC (BBWI), the prime management and operations contractor at the INEEL, applied the Six Sigma improvement process to the INEEL Waste Minimization Program to review existing processes and define opportunities for improvement. Our Six Sigma analysis team, composed of an executive champion, a process owner, a black belt, a yellow belt, and technical and business team members, used this statistics-based process approach to analyze work processes and produced ten recommendations for improvement. Recommendations ranged from waste generator financial accountability for newly generated waste to enhanced employee recognition programs for waste minimization efforts. These improvements have now been implemented to reduce waste generation rates and are producing positive results.

  10. All-digital demodulation system of interferometric fiber optic sensors using an improved PGC algorithm based on fundamental frequency mixing

    NASA Astrophysics Data System (ADS)

    Zhang, Ai-ling; Wang, Kai-han; Zhang, Shuai; Wang, Yan

    2015-05-01

    We present an all-digital demodulation system for interferometric fiber optic sensors based on an improved arctangent-differential-self-multiplying (arctan-DSM) algorithm. Total harmonic distortion (THD) and light intensity disturbance (LID) are suppressed, as in the traditional arctan-DSM algorithm. Moreover, the lowest usable sampling frequency is reduced by introducing an anti-aliasing filter, so the system's memory occupation is reduced. Simulations show that the improved algorithm correctly demodulates cosine and chirp signals at a lower sampling frequency.
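
    The following sketch shows classic PGC arctangent demodulation, the family of methods the improved algorithm builds on. The carrier frequency, modulation depth, and filter settings are toy assumptions, and the paper's fundamental-frequency-mixing refinement is not reproduced.

```python
# Minimal numerical sketch of classic PGC arctangent demodulation.
# Mixing the interference signal with the carrier and its second
# harmonic, then low-pass filtering, yields terms proportional to
# sin(phi) and cos(phi); their arctangent recovers the phase.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.special import j1, jn

fs, f0, C = 100_000, 12_500, 2.63            # sample rate, carrier, depth
t = np.arange(0, 0.01, 1 / fs)
phi = 0.5 * np.sin(2 * np.pi * 200 * t)      # phase signal of interest
s = np.cos(C * np.cos(2 * np.pi * f0 * t) + phi)  # interferometer output

b, a = butter(4, 2_000 / (fs / 2))           # low-pass keeps baseband terms
p = filtfilt(b, a, s * np.cos(2 * np.pi * f0 * t))      # ~ -J1(C)*sin(phi)
q = filtfilt(b, a, s * np.cos(2 * np.pi * 2 * f0 * t))  # ~ -J2(C)*cos(phi)

demod = np.arctan2(-p / j1(C), -q / jn(2, C))  # Bessel-corrected arctangent
print("correlation with true phase:", np.corrcoef(demod, phi)[0, 1])
```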

  11. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Patra, Rusha; Dutta, Pranab K.

    2015-07-01

    Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, mean square error of 0.638×10⁻³, and object centroid error of 0.001 to 0.22 mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.

  12. Residual Elimination Algorithm Enhancements to Improve Foot Motion Tracking During Forward Dynamic Simulations of Gait.

    PubMed

    Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J

    2015-11-01

    Patient-specific gait optimizations capable of predicting post-treatment changes in joint motions and loads could improve treatment design for gait-related disorders. To maximize potential clinical utility, such optimizations should utilize full-body three-dimensional patient-specific musculoskeletal models, generate dynamically consistent gait motions that reproduce pretreatment marker measurements closely, and achieve accurate foot motion tracking to permit deformable foot-ground contact modeling. This study enhances an existing residual elimination algorithm (REA; Remy, C. D., and Thelen, D. G., 2009, “Optimal Estimation of Dynamically Consistent Kinematics and Kinetics for Forward Dynamic Simulation of Gait,” ASME J. Biomech. Eng., 131(3), p. 031005) to achieve all three requirements within a single gait optimization framework. We investigated four primary enhancements to the original REA: (1) manual modification of tracked marker weights, (2) automatic modification of tracked joint acceleration curves, (3) automatic modification of algorithm feedback gains, and (4) automatic calibration of model joint and inertial parameter values. We evaluated the enhanced REA using a full-body three-dimensional dynamic skeletal model and movement data collected from a subject who performed four distinct gait patterns: walking, marching, running, and bounding. When all four enhancements were implemented together, the enhanced REA achieved dynamic consistency with lower marker tracking errors for all segments, especially the feet (mean root-mean-square (RMS) errors of 3.1 versus 18.4 mm), compared to the original REA. When the enhancements were implemented separately and in combinations, the most important one was automatic modification of tracked joint acceleration curves, while the least important enhancement was automatic modification of algorithm feedback gains. The enhanced REA provides a framework for future gait optimization studies that seek to predict subject

  13. A Review of New and Developing Technology to Significantly Improve Mars Sample-Return Missions

    NASA Technical Reports Server (NTRS)

    Carsey, F.; Brophy, J.; Gilmore, M.; Rodgers, D.; Wilcox, B.

    2000-01-01

    A JPL development activity was initiated in FY 1999 for the purpose of examining and evaluating technologies that could materially improve future (i.e., beyond the 2005 launch) Mars sample return missions. The scope of the technology review was comprehensive and end-to-end; the goal was to improve mass, cost, risk, and scientific return. A specific objective was to assess approaches to sample return with only one Earth launch. While the objective of the study was specifically for sample-return, in-situ missions can also benefit from using many of the technologies examined.

  14. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing with existing methods. PMID:26955656
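
    A minimal sketch of the plain weighted round-robin dispatch idea underlying the improved algorithm is given below. The VM weights and task stream are illustrative assumptions, and the paper's handling of task interdependency is not modeled.

```python
# Toy weighted round-robin dispatch: VMs with higher capacity weights
# receive proportionally more tasks within each cycle.
from itertools import cycle

vms = {"vm1": 5, "vm2": 3, "vm3": 1}   # assumed capacity weights

def wrr_order(weights):
    """Expand weights into one dispatch cycle, interleaved by weight."""
    pools = dict(weights)
    order = []
    while any(pools.values()):
        for vm in weights:
            if pools[vm] > 0:
                order.append(vm)
                pools[vm] -= 1
    return order

dispatch = cycle(wrr_order(vms))
for task in range(12):
    print(f"task {task} -> {next(dispatch)}")
```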

  15. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparing with existing methods. PMID:26955656

  16. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a new robust swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed, via individual hybrid encoding, into a synchronous evolutionary search over a discrete space. Subsequently, the concept of confidence interval (CI) is introduced; a new position-updating rule is designed and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, and the latter effectively prevents the ICS from becoming trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and optimize feasible ones. Experiments with a large number of knapsack problem (KP) instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions. PMID:24527026
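
    The greedy repair-and-optimize step for infeasible knapsack solutions can be sketched as follows. This is a generic illustration of the greedy transform idea, not the authors' ICS implementation.

```python
# Greedy repair for 0-1 knapsack: drop items with the worst value/weight
# ratio until the solution is feasible, then greedily add back the most
# efficient items that still fit.
def greedy_repair(x, values, weights, capacity):
    order = sorted(range(len(x)), key=lambda i: values[i] / weights[i])
    load = sum(weights[i] for i in range(len(x)) if x[i])
    # repair: remove the least efficient items until the solution fits
    for i in order:
        if load <= capacity:
            break
        if x[i]:
            x[i] = 0
            load -= weights[i]
    # optimize: add the most efficient items that still fit
    for i in reversed(order):
        if not x[i] and load + weights[i] <= capacity:
            x[i] = 1
            load += weights[i]
    return x

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
print(greedy_repair([1, 1, 1], values, weights, capacity))  # -> [1, 1, 0]
```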

  17. An improved model of charge transfer inefficiency and correction algorithm for the Hubble Space Telescope

    NASA Astrophysics Data System (ADS)

    Massey, Richard; Schrabback, Tim; Cordes, Oliver; Marggraf, Ole; Israel, Holger; Miller, Lance; Hall, David; Cropper, Mark; Prod'homme, Thibaut; Niemi, Sami-Matias

    2014-03-01

    Charge-coupled device (CCD) detectors, widely used to obtain digital imaging, can be damaged by high energy radiation. Degraded images appear blurred, because of an effect known as Charge Transfer Inefficiency (CTI), which trails bright objects as the image is read out. It is often possible to correct most of the trailing during post-processing, by moving flux back to where it belongs. We compare several popular algorithms for this, quantifying the effect of their physical assumptions and the tradeoffs between speed and accuracy. We combine their best elements to construct a more accurate model of damaged CCDs in the Hubble Space Telescope's Advanced Camera for Surveys/Wide Field Channel, and update it using data up to early 2013. Our algorithm now corrects 98 per cent of CTI trailing in science exposures, a substantial improvement over previous work. Further progress will be fundamentally limited by the presence of read noise. Read noise is added after charge transfer, so it does not get trailed, but it is incorrectly untrailed during post-processing.

  18. [Improved euler algorithm for trend forecast model and its application to oil spectrum analysis].

    PubMed

    Zheng, Chang-song; Ma, Biao

    2009-04-01

    Oil atomic spectrometric analysis is one of the most important methods for fault diagnosis and state monitoring of large machine equipment, and the gray method is well suited to trend forecasting. Using oil atomic spectrometric analysis results and gray forecast theory, the present paper establishes a gray forecast model of the Fe/Cu concentration trend in the power-shift steering transmission. To address a shortcoming of the gray method in trend forecasting, the improved Euler algorithm is put forward for the first time, resolving a problem of the gray model by avoiding the imprecision caused by the old gray model's forecast value depending on the first test value. As shown in the example, the new method makes the forecast value more precise. Combined with the threshold values of oil atomic spectrometric analysis, the new method was applied to Fe/Cu concentration forecasting and premonitory fault information was obtained, so steps can be taken to prevent faults; the algorithm can also be extended to industrial state monitoring. PMID:19626907
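
    For reference, a generic improved Euler (Heun) integrator applied to the GM(1,1) whitening equation dx/dt = -a·x + b might look like the sketch below; the coefficients are illustrative assumptions, not values from the paper.

```python
# Improved Euler (Heun) scheme: average the slope at the start of the
# step and at the Euler-predicted endpoint, giving second-order accuracy.
def improved_euler(f, x0, t0, t1, n):
    h = (t1 - t0) / n
    xs, x, t = [x0], x0, t0
    for _ in range(n):
        k1 = f(t, x)                 # slope at the start of the step
        k2 = f(t + h, x + h * k1)    # slope at the predicted endpoint
        x += h * (k1 + k2) / 2       # trapezoidal (Heun) update
        t += h
        xs.append(x)
    return xs

a, b = 0.12, 1.5                     # assumed GM(1,1) coefficients
f = lambda t, x: -a * x + b
print(improved_euler(f, x0=2.0, t0=0.0, t1=5.0, n=10)[-1])
```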

  19. A registration algorithm of improved correlation coefficient for image of rotation and scaling

    NASA Astrophysics Data System (ADS)

    Wei, Chun-tao; Hu, Tao; Yuan, Kai-min

    2015-12-01

    In stereo vision, image matching is one of the most important steps, and correlation coefficient matching is recognized as a relatively mature and stable matching algorithm. However, the correlation coefficient method is highly sensitive to image rotation, lacks rotation invariance, and carries a large computational burden, so it cannot be widely applied in real-time image matching. To greatly reduce its computational complexity and give it scale and rotation invariance, thereby meeting the requirements of real-time image matching systems, this paper proposes an accurate image registration algorithm that combines the Fourier-Mellin transform and the Radon transform of the image. The Fourier transform and the correlation coefficient method are used to detect the correct rotation factor and scale factor, providing a reliable basis for correlation coefficient image registration that achieves both rotation and scaling invariance. Experiments verify the feasibility of registration with this method and show that the registration accuracy is improved.

  20. A chemically reactive spinning dope for significant improvements in wet spun carbon nanotube fibres.

    PubMed

    González-Domínguez, Jose M; Neri, Wilfrid; Maugey, Maryse; Poulin, Philippe; Ansón-Casaos, Alejandro; Martínez, M Teresa

    2013-05-11

    Single-walled carbon nanotubes can be spun in a polyvinyl alcohol stream to produce nanocomposite fibres. We use a facile ester linking between both elements to create improved fibres which exhibit outstanding enhancements in the absence of post-processing stages, providing a promising alternative based on a chemical method. PMID:23471091

  1. High-performance lossless and progressive image compression based on an improved integer lifting scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving time efficiency by 162%; the decoder is about 12.3 times faster, raising time efficiency by about 148%. Instead of requiring the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is greater than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission in both encoding and decoding.
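
    A minimal Rice coder, the entropy-coding stage such compressors rely on, can be sketched as follows; the parameter k and the bit-string representation are illustrative choices, not the paper's modified variant.

```python
# Minimal Rice coder with parameter k: the quotient n >> k is written in
# unary (ones terminated by a zero), followed by k remainder bits.
def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")                  # unary quotient ends at first 0
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

for n in (0, 5, 19):
    code = rice_encode(n, k=3)
    assert rice_decode(code, k=3) == n
    print(n, "->", code)
```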

  2. Gene selection approach based on improved swarm intelligent optimisation algorithm for tumour classification.

    PubMed

    Jin, Cong; Jin, Shu-Wei

    2016-06-01

    A number of different gene selection approaches based on gene expression profiles (GEP) have been developed for tumour classification. A gene selection approach selects the most informative genes from the whole gene space, which is an important process for tumour classification using GEP. This study presents an improved swarm intelligence optimisation algorithm that selects genes while maintaining the diversity of the population. The most essential characteristic of the proposed approach is that it can automatically determine the number of selected genes. On the basis of the gene selection, the authors construct a variety of tumour classifiers, including ensemble classifiers. Four gene datasets are used to evaluate the performance of the proposed approach. The experimental results confirm that the proposed classifiers for tumour classification are indeed effective. PMID:27187989

  3. A Novel Space Partitioning Algorithm to Improve Current Practices in Facility Placement

    PubMed Central

    Jimenez, Tamara; Mikler, Armin R; Tiwari, Chetan

    2012-01-01

    In the presence of naturally occurring and man-made public health threats, the feasibility of regional bio-emergency contingency plans plays a crucial role in the mitigation of such emergencies. While the analysis of in-place response scenarios provides a measure of quality for a given plan, it takes human judgment to identify improvements in plans that are otherwise likely to fail. Since resource constraints and government mandates limit the availability of service provided in case of an emergency, computational techniques can determine optimal locations for providing emergency response, assuming that the uniform distribution of demand across homogeneous resources will yield an optimal service outcome. This paper presents an algorithm that recursively partitions the geographic space into sub-regions while equally distributing the population across the partitions. For this method, we have proven the existence of an upper bound on the deviation from the optimal population size for sub-regions. PMID:23853502
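
    A hedged sketch of the core idea, recursively splitting a region at the population median so that sub-regions carry near-equal demand, is shown below. The alternating-axis rule and synthetic points are assumptions, not the paper's exact partitioning scheme.

```python
# Recursive population-balanced space partitioning: split each region at
# the median coordinate of its points so both halves hold about half the
# population, alternating between x and y splits.
import numpy as np

def partition(points, depth, axis=0):
    if depth == 0 or len(points) < 2:
        return [points]
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:   # degenerate split, stop here
        return [points]
    nxt = 1 - axis                          # alternate x / y splits
    return partition(left, depth - 1, nxt) + partition(right, depth - 1, nxt)

rng = np.random.default_rng(42)
pop = rng.uniform(0, 100, size=(1000, 2))   # synthetic population points
regions = partition(pop, depth=3)           # 2^3 = 8 sub-regions
print([len(r) for r in regions])            # near-equal population counts
```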

  4. Exponential H ∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm

    PubMed Central

    Hsiao, Feng-Hsiag

    2015-01-01

    This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H ∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to have better performance than that of a traditional GA, a model-based fuzzy controller is then synthesized to stabilize the MTDC systems. A fuzzy controller is synthesized to not only realize the exponential synchronization, but also achieve optimal H ∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is stated by using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach. PMID:26366432

  5. Exponential H ∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm.

    PubMed

    Hsiao, Feng-Hsiag

    2015-01-01

    This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H ∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to have better performance than that of a traditional GA, a model-based fuzzy controller is then synthesized to stabilize the MTDC systems. A fuzzy controller is synthesized to not only realize the exponential synchronization, but also achieve optimal H ∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is stated by using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach. PMID:26366432

  6. Improvement for detection of microcalcifications through clustering algorithms and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego

    2011-12-01

    A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector in a classifier. The classifier is based on an artificial neural network to identify patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
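
    The white top-hat enhancement stage can be sketched in a few lines, as below. The structuring-element size and synthetic ROI are assumptions for illustration.

```python
# White top-hat contrast enhancement for small bright spots such as
# microcalcifications: the image minus its morphological opening, which
# suppresses structures larger than the structuring element.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
roi = rng.normal(0.4, 0.05, (64, 64))        # synthetic tissue ROI
roi[30:33, 30:33] += 0.5                     # small bright "calcification"

opened = ndimage.grey_opening(roi, size=(9, 9))
tophat = roi - opened                        # white top-hat transform
print("peak response at:", np.unravel_index(tophat.argmax(), tophat.shape))
```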

  7. [An improved N-FINDR endmember extraction algorithm based on manifold learning and spatial information].

    PubMed

    Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo

    2013-09-01

    An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is adapted to seek potential intrinsic low-dimensional structures of hyperspectral high-dimensional data and reduce the original data into a low-dimensional space. Secondly, spatial preprocessing is used by enhancing each pixel vector in spatially homogeneous areas, according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by addressing the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperforms the geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and spatial preprocessing N-FINDR (SPPNFINDR) methods. PMID:24369664

  8. Improvements to the OMI Near-uv Aerosol Algorithm Using A-train CALIOP and AIRS Observations

    NASA Technical Reports Server (NTRS)

    Torres, O.; Ahn, C.; Zhong, C.

    2014-01-01

    The height of desert dust and carbonaceous aerosols layers and, to a lesser extent, the difficulty in assessing the predominant size mode of these absorbing aerosol types, are sources of uncertainty in the retrieval of aerosol properties from near UV satellite observations. The availability of independent, near-simultaneous measurements of aerosol layer height, and aerosol-type related parameters derived from observations by other A-train sensors, makes possible the direct use of these parameters as input to the OMI (Ozone Monitoring Instrument) near UV retrieval algorithm. A monthly climatology of aerosol layer height derived from observations by the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) sensor, and real-time AIRS (Atmospheric Infrared Sounder) CO observations are used in an upgraded version of the OMI near UV aerosol algorithm. AIRS CO measurements are used as a reliable tracer of carbonaceous aerosols, which allows the identification of smoke layers in areas and times of the year where the dust-smoke differentiation is difficult in the near-UV. The use of CO measurements also enables the identification of elevated levels of boundary layer pollution undetectable by near UV observations alone. In this paper we discuss the combined use of OMI, CALIOP and AIRS observations for the characterization of aerosol properties, and show a significant improvement in OMI aerosol retrieval capabilities.

  9. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  10. Autonomous Throughput Improvement Scheme Using Machine Learning Algorithms for Heterogeneous Wireless Networks Aggregation

    NASA Astrophysics Data System (ADS)

    Kon, Yohsuke; Hashiguchi, Kazuki; Ito, Masato; Hasegawa, Mikio; Ishizu, Kentaro; Murakami, Homare; Harada, Hiroshi

    It is important to optimize aggregation schemes for heterogeneous wireless networks to maximize communication throughput by utilizing any available radio access networks. In heterogeneous networks, differences in the quality of service (QoS) of the networks, such as throughput, delay, and packet loss rate, make it difficult to maximize the aggregation throughput. In this paper, we first analyze the influence of such QoS differences on the aggregation throughput, and show that it is possible to improve the throughput by adjusting the parameters of an aggregation system. Since manual parameter optimization is difficult and takes much time, we propose an autonomous parameter tuning scheme using a machine learning algorithm for heterogeneous wireless network aggregation. We implement the proposed scheme on a heterogeneous cognitive radio network system. The results on our experimental network with network emulators show that the proposed scheme can improve the aggregation throughput better than the conventional schemes. We also evaluate the performance using public wireless network services, such as HSDPA, WiMAX and W-CDMA, and verify that the proposed scheme can improve the aggregation throughput by iterating the learning cycle even on public wireless networks. Our experimental results show that the proposed scheme achieves twice the aggregation throughput of the conventional schemes.

  11. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    SciTech Connect

    Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-09-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as core damage probability. This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs given a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of the possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.

  12. An Improved Artificial Bee Colony Algorithm Based on Balance-Evolution Strategy for Unmanned Combat Aerial Vehicle Path Planning

    PubMed Central

    Gong, Li-gang; Yang, Wen-lun

    2014-01-01

    Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555

  13. An improved artificial bee colony algorithm based on balance-evolution strategy for unmanned combat aerial vehicle path planning.

    PubMed

    Li, Bai; Gong, Li-gang; Yang, Wen-lun

    2014-01-01

    Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555

  14. Significant improvement in electronic properties of transparent amorphous indium zinc oxide through yttrium doping

    NASA Astrophysics Data System (ADS)

    Sun, Jian; Yu, Zhigen; Huang, Yanhua; Xia, Yijie; Lai, Weng Soon; Gong, Hao

    2014-04-01

    One big challenge in transparent conducting oxides (TCOs) is to achieve high conductivity and mobility at a low processing temperature. Although optimized conductivity has been achieved in indium zinc oxide (IZO) without doping, it is still interesting to find out whether doping can improve the conductivity of IZO further. In this paper, we report the achievement of high conductivity and mobility in IZO at a low processing temperature through yttrium (Y) doping. We found that with different Y doping levels, room-temperature-fabricated amorphous IZO (a-IZO) samples can be controlled to exhibit either metallic or semiconductor characteristics. Y2O3 is demonstrated to be an effective doping source, achieving conductivity 300% higher than the non-doped IZO sample. Anomalously improved mobility of certain Y2O3-doped IZO samples compared with the non-doped IZO sample is found and analyzed. In addition, a low-temperature resistivity anomaly (semiconductor-metal transition) is observed and discussed.

  15. Brief Communication: Upper air relaxation in RACMO2 significantly improves modelled interannual SMB variability in Antarctica

    NASA Astrophysics Data System (ADS)

    van de Berg, W. J.; Medley, B.

    2015-09-01

    The regional climate model RACMO2 has been a powerful tool for improving SMB estimates from GCMs or reanalyses. However, new yearly SMB observations for West Antarctica show that the modelled interannual variability in SMB is poorly simulated by RACMO2, in contrast to ERA-Interim, which resolves this variability well. In an attempt to remedy RACMO2 performance, we included additional upper air relaxation (UAR) in RACMO2. With UAR, the correlation to observations is similar for RACMO2 and ERA-Interim. The spatial SMB patterns and ice sheet integrated SMB modelled using UAR remain very similar to the estimates of RACMO2 without UAR. We only observe an upstream smoothing of precipitation in regions with very steep topography like the Antarctic Peninsula. We conclude that UAR is a useful improvement for RCM simulations, although results in regions with steep topography should be treated with care.

  16. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-01-01

    In order to improve the tracking accuracy, model estimation accuracy and quick response of multiple model maneuvering target tracking, the interacting multiple models five degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) algorithm processes all the models through a Markov Chain to simultaneously enhance the model tracking accuracy of target tracking. Then a five degree cubature Kalman filter (5CKF) evaluates the surface integral by a higher but deterministic odd ordered spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple models cubature Kalman filter (IMMCKF), interacting multiple models unscented Kalman filter (IMMUKF), 5CKF and the optimal mode transition matrix IMM (OMTM-IMM). PMID:27258285
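
    For orientation, the sketch below generates the third-degree spherical-radial cubature point set used by the baseline CKF; the paper's 5CKF uses a higher, fifth-degree rule, which is not reproduced here.

```python
# Third-degree spherical-radial cubature points: 2n equally weighted
# points at +/- sqrt(n) along each axis, mapped through the square root
# of the state covariance.
import numpy as np

def cubature_points(mean, cov):
    n = len(mean)
    S = np.linalg.cholesky(cov)              # matrix square root of P
    units = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)])
    return mean + units @ S.T                # 2n points, weights 1/(2n)

mean = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.2], [0.2, 0.5]])
pts = cubature_points(mean, cov)
print(pts.mean(axis=0))                      # ~ mean, by symmetry
```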

  17. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-01-01

    In order to improve the tracking accuracy, model estimation accuracy and quick response of multiple model maneuvering target tracking, the interacting multiple models five degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) algorithm processes all the models through a Markov Chain to simultaneously enhance the model tracking accuracy of target tracking. Then a five degree cubature Kalman filter (5CKF) evaluates the surface integral by a higher but deterministic odd ordered spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple models cubature Kalman filter (IMMCKF), interacting multiple models unscented Kalman filter (IMMUKF), 5CKF and the optimal mode transition matrix IMM (OMTM-IMM). PMID:27258285

  18. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  19. Significant Advancements in Technology to Improve Instruction for All Students: Including Those with Disabilities

    ERIC Educational Resources Information Center

    Meyen, Edward

    2015-01-01

    Sharing thoughts on what represents significant advancements involving the education of persons for whom typical instruction is not effective seems simple enough. You think about the work you are engaged in and reflect on how you came to do what you are doing. If you have a record of being persistent in your work, then that becomes the context for…

  20. Targeted agri-environment schemes significantly improve the population size of common farmland bumblebee species.

    PubMed

    Wood, Thomas J; Holland, John M; Hughes, William O H; Goulson, Dave

    2015-04-01

    Changes in agricultural practice across Europe and North America have been associated with range contractions and local extinction of bumblebees (Bombus spp.). A number of agri-environment schemes have been implemented to halt and reverse these declines, predominantly revolving around the provision of additional forage plants. Although it has been demonstrated that these schemes can attract substantial numbers of foraging bumblebees, it remains unclear to what extent they actually increase bumblebee populations. We used standardized transect walks and molecular techniques to compare the size of bumblebee populations between Higher Level Stewardship (HLS) farms implementing pollinator-friendly schemes and Entry Level Stewardship (ELS) control farms. Bumblebee abundance on the transect walks was significantly higher on HLS farms than ELS farms. Molecular analysis suggested maximum foraging ranges of 566 m for Bombus hortorum, 714 m for B. lapidarius, 363 m for B. pascuorum and 799 m for B. terrestris. Substantial differences in maximum foraging range were found within bumblebee species between farm types. Accounting for foraging range differences, B. hortorum (47 vs 13 nests/km²) and B. lapidarius (45 vs 22 nests/km²) were found to nest at significantly greater densities on HLS farms than ELS farms. There were no significant differences between farm type for B. terrestris (88 vs 38 nests/km²) and B. pascuorum (32 vs 39 nests/km²). Across all bumblebee species, HLS management had a significantly positive effect on bumblebee nest density. These results show that targeted agri-environment schemes that increase the availability of suitable forage can significantly increase the size of wild bumblebee populations. PMID:25753513

  1. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model) and initial conditions to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g. core damage probability, etc.). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational demands (compared with the legacy codes presently in use, which were developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  2. Improved methodology for surface and atmospheric soundings, error estimates, and quality control procedures: the atmospheric infrared sounder science team version-6 retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Susskind, Joel; Blaisdell, John M.; Iredell, Lena

    2014-01-01

    The atmospheric infrared sounder (AIRS) science team version-6 AIRS/advanced microwave sounding unit (AMSU) retrieval algorithm is now operational at the Goddard Data and Information Services Center (DISC). AIRS version-6 level-2 products are generated near real time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. Some of the significant improvements in retrieval methodology contained in the version-6 retrieval algorithm compared to that previously used in version-5 are described. In particular, the AIRS science team made major improvements with regard to the algorithms used to (1) derive surface skin temperature and surface spectral emissivity; (2) generate the initial state used to start the cloud clearing and retrieval procedures; and (3) derive error estimates and use them for quality control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, version-6 also operates in an AIRS only (AO) mode, which produces results almost as good as those of the full AIRS/AMSU mode. The improvements of some AIRS version-6 and version-6 AO products compared to those obtained using version-5 are also demonstrated.

  3. Improved Methodology for Surface and Atmospheric Soundings, Error Estimates, and Quality Control Procedures: the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2014-01-01

    The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.

  4. High Resolution Direction of Arrival (DOA) Estimation Based on Improved Orthogonal Matching Pursuit (OMP) Algorithm by Iterative Local Searching

    PubMed Central

    Wang, Wenyi; Wu, Renbiao

    2013-01-01

    DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm. PMID:23974150
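
    A generic orthogonal matching pursuit routine, the greedy recovery step that ILS-OMP initializes from, is sketched below. The dictionary, sparsity level, and signal are toy assumptions rather than steering vectors from an actual array.

```python
# Generic OMP: greedily pick the atom most correlated with the residual,
# re-fit the signal on the selected support by least squares, repeat.
import numpy as np

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(64, 128))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -0.8, 0.5]
y = A @ x_true
print(np.flatnonzero(omp(A, y, k=3)))                # -> [ 5 40 90 ]
```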

  6. Significant Improvement of Metabolic Characteristics and Bioactivities of Clopidogrel and Analogs by Selective Deuteration.

    PubMed

    Xu, Xueyu; Zhao, Xue; Yang, Zhichao; Wang, Hao; Meng, Xiangjun; Su, Chong; Liu, Mingyuan; Fawcett, John Paul; Yang, Yan; Gu, Jingkai

    2016-01-01

    In the search for prodrug analogs of clopidogrel with improved metabolic characteristics and antiplatelet bioactivity, a group of clopidogrel and vicagrel analogs selectively deuterated at the benzylic methyl ester group were synthesized, characterized, and evaluated. The compounds included clopidogrel-d₃ (8), 2-oxoclopidogrel-d₃ (9), vicagrel-d₃ (10a), and 12 vicagrel-d₃ analogs (10b-10m) with different alkyl groups in the thiophene ester moiety. The D₃C-O bond length in 10a was shown by X-ray single crystal diffraction to be shorter than the H₃C-O bond length in clopidogrel, consistent with the slower rate of hydrolysis of 8 than of clopidogrel in rat whole blood in vitro. A study of the ability of the compounds to inhibit ADP-induced platelet aggregation in fresh rat whole blood collected 2 h after oral dosing of rats with the compounds (7.8 μmol/kg) showed that deuteration increased the activity of clopidogrel and that increasing the size of the alkyl group in the thiophene ester moiety reduced activity. A preliminary pharmacokinetic study comparing 10a with vicagrel administered simultaneously as single oral doses (72 μmol/kg of each drug) to male Wistar rats showed 10a generated more of its active metabolite than vicagrel. These results suggest that 10a is a potentially superior antiplatelet agent with improved metabolic characteristics and bioactivity, and less dose-related toxicity. PMID:27248988

  7. Significant improvement of mouse cloning technique by treatment with trichostatin A after somatic nuclear transfer

    SciTech Connect

    Kishigami, Satoshi. E-mail: kishigami@cdb.riken.jp; Mizutani, Eiji; Ohta, Hiroshi; Hikichi, Takafusa; Thuan, Nguyen Van; Wakayama, Sayaka; Bui, Hong-Thuy; Wakayama, Teruhiko

    2006-02-03

    The low success rate of animal cloning by somatic cell nuclear transfer (SCNT) is believed to be associated with epigenetic errors including abnormal DNA hypermethylation. Recently, we elucidated by using round spermatids that, after nuclear transfer, treatment of zygotes with trichostatin A (TSA), an inhibitor of histone deacetylase, can remarkably reduce abnormal DNA hypermethylation depending on the origins of the transferred nuclei and their genomic regions [S. Kishigami, N. Van Thuan, T. Hikichi, H. Ohta, S. Wakayama, E. Mizutani, T. Wakayama, Epigenetic abnormalities of the mouse paternal zygotic genome associated with microinsemination of round spermatids, Dev. Biol. (2005) in press]. Here, we found that treatment with 5-50 nM TSA for 10 h following oocyte activation improved the in vitro development of somatic cloned embryos to the blastocyst stage by 2- to 5-fold, depending on the donor cells, which included tail tip cells, spleen cells, neural stem cells, and cumulus cells. This TSA treatment also led to a more than 5-fold increase in the success rate of mouse cloning from cumulus cells, without obvious abnormality, but failed to improve cloning from embryonic stem (ES) cells. Further, we succeeded in establishing nuclear transfer-embryonic stem (NT-ES) cells from TSA-treated cloned blastocysts at a rate three times higher than from untreated cloned blastocysts. Thus, our data indicate that TSA treatment after SCNT in mice can dramatically improve the practical application of current cloning techniques.

  8. An Improved Algorithm for Linear Inequalities in Pattern Recognition and Switching Theory.

    ERIC Educational Resources Information Center

    Geary, Leo C.

    This thesis presents a new iterative algorithm for finding an n by 1 solution vector w, if one exists, to a set of linear inequalities, Aw > 0, which arises in pattern recognition and switching theory. The algorithm is an extension of the Ho-Kashyap algorithm, utilizing the gradient descent procedure to minimize a criterion function…

  9. Significant improvements in the area of stroke timing of motor-operated valves for nuclear plants

    SciTech Connect

    Wohld, P.R.; Newsome, R.C.

    1990-01-01

    This paper reports on valve stroke timing test equipment, developed and tested for use in a nuclear power plant main control room, that can provide significant advantages to the user for the valve surveillance testing required by the Nuclear Regulatory Commission. The equipment is particularly suitable for Motor-Operated Valves (MOVs) because of the accuracy and repeatability necessary to detect the effects of small changes in actuator motor RPM.

  10. An improved method to set significance thresholds for β diversity testing in microbial community comparisons.

    PubMed

    Gülay, Arda; Smets, Barth F

    2015-09-01

    Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. PMID:25534614
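
    The meta-community idea lends itself to a short sketch: pool all samples, repeatedly rarefy pairs of pseudo-samples from the pool, and take a tail quantile of their dissimilarities as the significance threshold. The details below (Bray-Curtis as the β metric, a 95th-percentile cutoff) are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def beta_threshold(otu_counts, depth, n_null=999, q=0.95, seed=None):
    """Null threshold for beta diversity from a pooled meta-community.

    otu_counts: (n_samples, n_otus) integer count table.
    depth: rarefaction depth (reads per pseudo-sample)."""
    rng = np.random.default_rng(seed)
    pool = otu_counts.sum(axis=0)                  # the meta-community
    taxa = np.repeat(np.arange(pool.size), pool)   # one entry per read
    null = []
    for _ in range(n_null):
        # Rarefy two pseudo-samples from the same meta-community
        a = np.bincount(rng.choice(taxa, depth, replace=False),
                        minlength=pool.size)
        b = np.bincount(rng.choice(taxa, depth, replace=False),
                        minlength=pool.size)
        null.append(np.abs(a - b).sum() / (a + b).sum())  # Bray-Curtis
    # Observed beta diversity above this quantile is deemed significant
    return np.quantile(null, q)
```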

  11. Upper gastrointestinal bleeding in Scotland 2000-2010: Improved outcomes but a significant weekend effect

    PubMed Central

    Ahmed, Asma; Armstrong, Matthew; Robertson, Ishbel; Morris, Allan John; Blatchford, Oliver; Stanley, Adrian J

    2015-01-01

    AIM: To assess numbers and case fatality of patients with upper gastrointestinal bleeding (UGIB), effects of deprivation and whether weekend presentation affected outcomes. METHODS: Data were obtained from Information Services Division (ISD) Scotland and National Records of Scotland (NRS) death records for a ten-year period between 2000-2001 and 2009-2010. We obtained data from the ISD Scottish Morbidity Records (SMR01) database, which holds data on inpatient and day-case hospital discharges from non-obstetric and non-psychiatric hospitals in Scotland. The mortality data were obtained from NRS and linked with the ISD SMR01 database to obtain 30-d case fatality. We used 23 ICD-10 (International Classification of Diseases) codes that identify UGIB to interrogate the database. We analysed these data for trends in the number of hospital admissions with UGIB and 30-d mortality over time, and assessed the effects of social deprivation. We compared weekend and weekday admissions for differences in 30-d mortality and length of hospital stay. We determined comorbidities for each admission to establish if comorbidities contributed to patient outcome. RESULTS: A total of 60643 Scottish residents were admitted with UGIB between January 2000 and October 2009. There was no significant change in the annual number of admissions over time, but there was a statistically significant reduction in 30-d case fatality from 10.3% to 8.8% (P < 0.001) over these 10 years. The number of admissions with UGIB was higher for patients from the most deprived category (P < 0.05), although case fatality was higher for patients from the least deprived category (P < 0.05). There was no statistically significant change in this trend between 2000/01-2009/10. Patients admitted with UGIB at weekends had higher 30-d case fatality compared with those admitted on weekdays (P < 0.001). Thirty day mortality remained significantly higher for patients admitted with UGIB at weekends after adjusting for comorbidities. Length of

  12. Lithium deficient mesoporous Li2-xMnSiO4 with significantly improved electrochemical performance

    NASA Astrophysics Data System (ADS)

    Wang, Haiyan; Hou, Tianli; Sun, Dan; Huang, Xiaobing; He, Hanna; Tang, Yougen; Liu, Younian

    2014-02-01

    Li2-xMnSiO4 compounds with a mesoporous structure are proposed for the first time in the present work. Interestingly, the lithium-deficient compounds exhibit much better electrochemical performance than the stoichiometric one. Among these compounds, Li1.8MnSiO4 shows the best electrochemical performance. Mesoporous Li1.8MnSiO4 without carbon coating delivers a maximum discharge capacity of 110.9 mAh g-1 at 15 mA g-1, maintaining 90.8 mAh g-1 after 25 cycles, whereas the stoichiometric compound delivers only 48.0 mAh g-1, with 12.5 mAh g-1 remaining. The superior properties are mainly due to greatly improved electronic conductivity and structural stability, as well as suppressed charge-transfer resistance.

  13. Resolution-adapted recombination of structural features significantly improves sampling in restraint-guided structure calculation

    PubMed Central

    Lange, Oliver F; Baker, David

    2012-01-01

    Recent work has shown that NMR structures can be determined by integrating sparse NMR data with structure prediction methods such as Rosetta. The experimental data serve to guide the search for the lowest energy state towards the deep minimum at the native state which is frequently missed in Rosetta de novo structure calculations. However, as the protein size increases, sampling again becomes limiting; for example, the standard Rosetta protocol involving Monte Carlo fragment insertion starting from an extended chain fails to converge for proteins over 150 amino acids even with guidance from chemical shifts (CS-Rosetta) and other NMR data. The primary limitation of this protocol—that every folding trajectory is completely independent of every other—was recently overcome with the development of a new approach involving resolution-adapted structural recombination (RASREC). Here we describe the RASREC approach in detail and compare it to standard CS-Rosetta. We show that the improved sampling of RASREC is essential in obtaining accurate structures over a benchmark set of 11 proteins in the 15-25 kDa size range using chemical shifts, backbone RDCs and HN-HN NOE data; in a number of cases the improved sampling methodology makes a larger contribution than incorporation of additional experimental data. Experimental data are invaluable for guiding sampling to the vicinity of the global energy minimum, but for larger proteins, the standard Rosetta fold-from-extended-chain protocol does not converge on the native minimum even with experimental data and the more powerful RASREC approach is necessary to converge to accurate solutions. PMID:22423358

  14. Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite

    NASA Astrophysics Data System (ADS)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GA. In the Bee system, each chromosome initially searches for a good solution individually, acting as a global search. When some chromosome is regarded as a superior one, the other chromosomes search around it. However, since the chromosomes for global search are generated randomly, the Bee system lacks global search ability. On the other hand, in Inverse-elitism, an inverse-elite whose gene values are reversed from the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with the Pseudo-simplex method is employed for the global search of the Bee system in order to strengthen global search ability, while also retaining strong local search ability. The proposed method thus has synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.

  15. New algorithm for integration between wireless microwave sensor network and radar for improved rainfall measurement and mapping

    NASA Astrophysics Data System (ADS)

    Liberman, Y.; Samuels, R.; Alpert, P.; Messer, H.

    2014-05-01

    One of the main challenges for meteorological and hydrological modelling is accurate rainfall measurement and mapping across time and space. To date, the most effective methods for large-scale rainfall estimates are radar, satellites, and, more recently, received signal level (RSL) measurements from commercial microwave networks (CMN). While these methods provide improved spatial resolution over traditional rain gauges, they have their limitations as well. For example, wireless CMN, which are comprised of microwave links (ML), are dependent upon existing infrastructure and the MLs' arbitrary distribution in space. Radar, on the other hand, is known to be limited in accurately estimating rainfall in urban regions, clutter areas and distant locations. In this paper the pros and cons of the radar and ML methods are considered in order to develop a new algorithm for improving rainfall measurement and mapping, which is based on data fusion of the different sources. The integration is based on an optimal weighted average of the two data sets, taking into account location, number of links, rainfall intensity and time step. Our results indicate that by using the proposed new method we not only generate more accurate 2-D rainfall reconstructions, compared with actual rain intensities in space, but also extend the reconstructed maps to the maximum coverage area. By inspecting three significant rain events, we show an improvement in rain rate estimation over CMN or radar alone, almost uniformly, both for instantaneous spatial measurements and for total accumulated rainfall. These new improved 2-D rainfall maps, and the accurate rainfall measurements over large areas at sub-hourly time scales, will allow for improved understanding, initialization and calibration of hydrological and meteorological models necessary, mainly, for water resource management and planning.

  16. A patient/family-centered strategic plan can drive significant improvement.

    PubMed

    Brilli, Richard J; Crandall, Wallace V; Berry, Janet C; Stoverock, Linda; Rosen, Kerry; Budin, Lee; Kelleher, Kelly J; Gleeson, Sean P; Davis, J Terrance

    2014-08-01

    The use of a PFCSP, as a road map to operationalize the hospital's vision, has been a compelling paradigm to achieve significant QI results. The framework is simple yet directly aligns with the IOM domains of quality. It has inspired and helped actively engage hospital personnel in the work required to achieve the goals and vision of the hospital system. Five years after initiating this type of plan, activity is flourishing in each of the domains and midterm results are substantial. We think that the nature of this strategic plan has been an important aspect of our success to date. PMID:25037128

  17. Improved Algorithms for Radar-Based Reconstruction of Asteroid Spin States and Shapes

    NASA Astrophysics Data System (ADS)

    Greenberg, Adam; Margot, Jean-Luc

    2015-11-01

    Earth-based radar is a powerful tool for gathering information about bodies in the Solar System. Radar observations can dramatically improve the determination of the physical properties and orbital elements of small bodies (such as asteroids and comets). An important development in the past two decades has been the formulation and implementation of algorithms for asteroid shape reconstruction based on radar data.Because of the nature of radar data, recovery of the spin state depends on knowledge of the shape and vice versa. Even with perfect spin state information, certain peculiarities of radar images (such as the two-to-one or several-to-one mapping between surface elements on the object and pixels within the radar image) make recovery of the physical shape challenging. This is a computationally intensive problem, potentially involving hundreds to thousands of free parameters and millions of data points.The method by which radar-based shape and spin state modelling is currently accomplished, a Sequential Parameter Fit (SPF), is relatively slow, and incapable of determining the spin state of an asteroid from radar images without substantial user intervention.We implemented a global-parameter optimizer and Square Root Information Filter (SRIF) into the asteroid-modelling software shape. This optimizer can find shapes more quickly than the current method and can determine the asteroid’s spin state.We ran our new algorithm, along with the existing SPF, through several tests, composed of both real and simulated data. The simulated data were composed of noisy images of procedurally generated shapes, as well as noisy images of existing shape models. The real data included recent observations of both 2000 ET70 and 1566 Icarus.These tests indicate that SRIF is faster and more accurate than SPF. In addition, SRIF can autonomously determine the spin state of an asteroid from a variety of starting conditions, a considerable advance over the existing algorithm. We will

  18. Active loading into extracellular vesicles significantly improves the cellular uptake and photodynamic effect of porphyrins.

    PubMed

    Fuhrmann, Gregor; Serio, Andrea; Mazo, Manuel; Nair, Rekha; Stevens, Molly M

    2015-05-10

    Extracellular vesicles (EVs) are phospholipid-based particles endogenously produced by cells. Their natural composition and selective cell interactions make them promising drug carriers. However, in order to harness their properties, efficient exogenous drug encapsulation methods need to be investigated. Here, EVs from various cellular origins (endothelial, cancer and stem cells) were produced and characterised for size and composition. Porphyrins of different hydrophobicities were employed as model drugs and encapsulated into EVs using various passive and active methods (electroporation, saponin, extrusion and dialysis). Hydrophobic compounds loaded very efficiently into EVs, and at significantly higher amounts than into standard liposomes composed of phosphocholine and cholesterol, using passive incubation. Moreover, loading into EVs significantly increased the cellular uptake by >60% and the photodynamic effect of hydrophobic porphyrins in vitro compared to free or liposome-encapsulated drug. The active encapsulation techniques, with the saponin-assisted method in particular, allowed up to 11-fold higher drug loading of hydrophilic porphyrins compared to passive methods. EVs loaded with hydrophilic porphyrins induced a stronger phototoxic effect than free drug in a cancer cell model. Our findings create a firm basis for the development of EVs as smart drug carriers based on straightforward and transferable methods. PMID:25483424

  19. The Doylestown Algorithm: A Test to Improve the Performance of AFP in the Detection of Hepatocellular Carcinoma.

    PubMed

    Wang, Mengjun; Devarajan, Karthik; Singal, Amit G; Marrero, Jorge A; Dai, Jianliang; Feng, Ziding; Rinaudo, Jo Ann S; Srivastava, Sudhir; Evans, Alison; Hann, Hie-Won; Lai, Yinzhi; Yang, Hushan; Block, Timothy M; Mehta, Anand

    2016-02-01

    Biomarkers for the early diagnosis of hepatocellular carcinoma (HCC) are needed to decrease mortality from this cancer. However, as new biomarkers have been slow to be brought to clinical practice, we have developed a diagnostic algorithm that utilizes commonly used clinical measurements in those at risk of developing HCC. Briefly, as α-fetoprotein (AFP) is routinely used, an algorithm that incorporated AFP values along with four other clinical factors was developed. Discovery analysis was performed on electronic data from patients who had liver disease (cirrhosis) alone or HCC in the background of cirrhosis. The discovery set consisted of 360 patients from two independent locations. A logistic regression algorithm was developed that incorporated log-transformed AFP values with age, gender, alkaline phosphatase, and alanine aminotransferase levels. We define this as the Doylestown algorithm. In the discovery set, the Doylestown algorithm improved the overall performance of AFP by 10%. In subsequent external validation in over 2,700 patients from three independent sites, the Doylestown algorithm improved detection of HCC as compared with AFP alone by 4% to 20%. In addition, at a fixed specificity of 95%, the Doylestown algorithm improved the detection of HCC as compared with AFP alone by 2% to 20%. In conclusion, the Doylestown algorithm consolidates clinical laboratory values, with age and gender, which are each individually associated with HCC risk, into a single value that can be used for HCC risk assessment. As such, it should be applicable and useful to the medical community that manages those at risk for developing HCC. PMID:26712941
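
    The abstract describes the algorithm's ingredients but not its published coefficients, so the sketch below only illustrates the model family: a logistic regression on log-transformed AFP plus the four clinical covariates. The feature layout and fitting call are assumptions; the actual Doylestown coefficients and cutoffs are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_doylestown_like(afp, age, male, alp, alt, has_hcc):
    """Fit a Doylestown-style risk model on a cohort.

    afp: ng/mL (log-transformed below); age: years; male: 1/0;
    alp: alkaline phosphatase (U/L); alt: alanine aminotransferase (U/L);
    has_hcc: 1/0 outcome labels. Hypothetical feature layout."""
    X = np.column_stack([np.log10(afp), age, male, alp, alt])
    model = LogisticRegression(max_iter=1000).fit(X, has_hcc)
    return model

# model.predict_proba(X)[:, 1] then yields a single HCC risk score per
# patient, which is the consolidation step the abstract describes.
```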

  20. Improved perception of music with a harmonic based algorithm for cochlear implants.

    PubMed

    Li, Xing; Nie, Kaibao; Imennov, Nikita S; Rubinstein, Jay T; Atlas, Les E

    2013-07-01

    The lack of fine structure information in conventional cochlear implant (CI) encoding strategies presumably contributes to the generally poor music perception with CIs. To improve CI users' music perception, a harmonic-single-sideband-encoder (HSSE) strategy was developed, which explicitly tracks the harmonics of a single musical source and transforms them into modulators conveying both amplitude and temporal fine structure cues to electrodes. To investigate its effectiveness, vocoder simulations of HSSE and the conventional continuous-interleaved-sampling (CIS) strategy were implemented. Using these vocoders, five normal-hearing subjects' melody and timbre recognition performance was evaluated: a significant benefit of HSSE to both melody (p < 0.002) and timbre (p < 0.026) recognition was found. Additionally, HSSE was acutely tested in eight CI subjects. On timbre recognition, a significant advantage of HSSE over the subjects' clinical strategy was demonstrated: the largest improvement was 35% and the mean 17% (p < 0.013). On melody recognition, two subjects showed 20% improvement with HSSE; however, the mean improvement of 7% across subjects was not significant (p > 0.090). To quantify the temporal cues delivered to the auditory nerve, the neural spike patterns evoked by HSSE and CIS for one melody stimulus were simulated using an auditory nerve model. Quantitative analysis demonstrated that HSSE can convey temporal pitch cues better than CIS. The results suggest that HSSE is a promising strategy to enhance music perception with CIs. PMID:23613083
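
    A rough sketch of the modulator idea follows: band-pass a tracked harmonic, then split it into an amplitude envelope and temporal fine structure with the Hilbert transform. This is an illustrative reading of the abstract, not the HSSE implementation; the filter order, bandwidth and the F0 tracker (assumed given) are placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def harmonic_modulator(x, fs, f0, k, bw=50.0):
    """Modulator for the k-th harmonic of a tracked fundamental f0 (Hz):
    band-pass around k*f0, then recover amplitude and fine structure."""
    sos = butter(4, [k * f0 - bw, k * f0 + bw], btype="bandpass",
                 fs=fs, output="sos")
    analytic = hilbert(sosfiltfilt(sos, x))
    envelope = np.abs(analytic)          # amplitude cue
    fine = np.cos(np.angle(analytic))    # temporal fine structure cue
    return envelope * fine               # drives one electrode channel
```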

  1. Possible breakthrough: Significant improvement of signal to noise ratio by stochastic resonance

    SciTech Connect

    Kiss, L.B.

    1996-06-01

    The simplest stochastic resonator, a level crossing detector (LCD), is used to investigate key properties of stochastic resonance (SR). It is pointed out that successful signal processing and biological applications of SR require working in the large signal limit (nonlinear transfer limit), which demands a completely new approach: a wide band input signal and a new, generalised definition of output noise. The new approach is illustrated by a new arrangement employing a special LCD, white input noise and a special, large, subthreshold wide band signal. For the first time in the history of SR (for a wide band input noise), the signal to noise ratio becomes much higher at the output of a stochastic resonator than at its input. In this way, SR is shown to have the potential to improve signal transfer. Note that the new arrangement appears to resemble neurone models; therefore, it also has potential for biological applications. © 1996 American Institute of Physics.
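
    For readers who want to reproduce the flavour of the experiment, here is a toy level crossing detector in numpy. It uses a subthreshold sinusoid rather than Kiss's wide-band signal, so it only illustrates the mechanism (noise-assisted threshold crossings), not the reported SNR gain.

```python
import numpy as np

def lcd_output(signal, noise_rms, threshold, seed=None):
    """Level crossing detector: emits a unit pulse whenever the noisy
    input crosses the threshold upwards. A toy sketch, not the paper's
    exact arrangement."""
    rng = np.random.default_rng(seed)
    x = signal + noise_rms * rng.standard_normal(signal.size)
    crossings = (x[1:] >= threshold) & (x[:-1] < threshold)
    return np.concatenate([[0.0], crossings.astype(float)])

# Subthreshold input: the signal alone never reaches the threshold, so
# any structure at the signal frequency in the output is noise-assisted.
t = np.arange(0, 1, 1e-4)
sig = 0.4 * np.sin(2 * np.pi * 50 * t)   # peak 0.4 < threshold 1.0
out = lcd_output(sig, noise_rms=0.5, threshold=1.0)
```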

  2. Three-phase textile nanocomposites: significant improvements in strength, toughness and ductility.

    PubMed

    Srivastava, Iti; Proper, Andrew; Rafiee, Mohammad A; Koratkar, Nikhil

    2010-02-01

    It is well established that in-plane tensile properties of unidirectional microfiber-reinforced composites are not significantly influenced by addition of carbon nanotubes to the matrix. This is because the principal effect of the nanotubes is to enhance the matrix dominated (out-of-plane) properties. Here we report that the above situation changes when nanotubes are incorporated into woven-fabric (textile) composites. We report up to 200% increase in strain-to-break and 180% increase in toughness under in-plane tensile load with approximately 0.05% weight of nanotube additives. We attribute this effect to the geometrical arrangement of the micro-fibers and the critical role of the pure-matrix-block in textile composites. PMID:20352752

  3. Global regulator engineering significantly improved Escherichia coli tolerances toward inhibitors of lignocellulosic hydrolysates.

    PubMed

    Wang, Jianqing; Zhang, Yan; Chen, Yilu; Lin, Min; Lin, Zhanglin

    2012-12-01

    Lignocellulosic biomass is regarded as the most viable source of feedstock for industrial biorefinery, but the harmful inhibitors generated by the indispensable pretreatments prior to fermentation remain a daunting technical hurdle. Using an exogenous regulator, irrE, from the radiation-resistant Deinococcus radiodurans, we previously showed that a novel global regulator engineering (GRE) approach significantly enhanced the tolerance of Escherichia coli to alcohol and acetate stresses. In this work, an irrE library was subjected to selection under various stresses of furfural, a typical hydrolysate inhibitor. Three furfural-tolerant irrE mutants, including F1-37 and F2-1, were successfully obtained. In growth assays under 0.2% (v/v) furfural stress, cells containing these mutants reached OD(600) levels 4- to 16-fold those of the pMD18T cells. The cells containing irrE F1-37 and F2-1 also showed considerably reduced intracellular reactive oxygen species (ROS) levels under furfural stress. Moreover, these two irrE mutants were subsequently found to confer significant cross-tolerance to two other common inhibitors, 5-hydroxymethyl-2-furaldehyde (HMF) and vanillin, as well as to real lignocellulosic hydrolysates. When evaluated in Luria-Bertani (LB) medium supplemented with corn stover cellulosic hydrolysate (prepared at a solid loading of 30%), the cells containing the mutants exhibited lag phases markedly shortened by 24-44 h in comparison with the control cells. This work thus presents a promising step towards resolving the inhibitor problem for E. coli. From the view of synthetic biology, irrE can be considered an evolvable "part" for various stresses. Furthermore, this GRE approach can be extended to exploit other exogenous global regulators from extremophiles, and the native counterparts in E. coli, for eliciting industrially useful phenotypes. PMID:22684885

  4. Oxamflatin Significantly Improves Nuclear Reprogramming, Blastocyst Quality, and In Vitro Development of Bovine SCNT Embryos

    PubMed Central

    Li, Yanyan; Li, Ruizhe; Li, Qian; Wu, Yongyan; Quan, Fusheng; Liu, Jun; Guo, Zekun; Zhang, Yong

    2011-01-01

    Aberrant epigenetic nuclear reprogramming results in low somatic cloning efficiency. Altering epigenetic status by applying histone deacetylase inhibitors (HDACi) enhances developmental potential of somatic cell nuclear transfer (SCNT) embryos. The present study was carried out to examine the effects of Oxamflatin, a novel HDACi, on the nuclear reprogramming and development of bovine SCNT embryos in vitro. We found that Oxamflatin modified the acetylation status on H3K9 and H3K18, increased total and inner cell mass (ICM) cell numbers and the ratio of ICM∶trophectoderm (TE) cells, reduced the rate of apoptosis in SCNT blastocysts, and significantly enhanced the development of bovine SCNT embryos in vitro. Furthermore, Oxamflatin treatment suppressed expression of the pro-apoptotic gene Bax and stimulated expression of the anti-apoptotic gene Bcl-XL and the pluripotency-related genes OCT4 and SOX2 in SCNT blastocysts. Additionally, the treatment also reduced the DNA methylation level of satellite I in SCNT blastocysts. In conclusion, Oxamflatin modifies epigenetic status and gene expression, increases blastocyst quality, and subsequently enhances the nuclear reprogramming and developmental potential of SCNT embryos. PMID:21912607

  5. A Cell Type Independent Binary Grading System Does Not Significantly Improve Endometrial Biopsy Interpretation.

    PubMed

    Nastic, Denis; Kahlin, Frida; Dahlstrand, Hanna; Carlson, Joseph W

    2016-05-01

    The revised International Federation of Gynecology and Obstetrics (FIGO) grading system is widely accepted as the standard in evaluating endometrial carcinoma on biopsy. Determination of tumor cell type [using the World Health Organization (WHO) diagnostic criteria] and grade (using FIGO) guides the surgical approach. Several studies have highlighted discrepancies between biopsy and hysterectomy diagnoses. Recently, a binary grading system was proposed, yielding a low-risk or high-risk assessment in a cell type independent (CTI) way. No study has assessed its utility in biopsy grading, a situation where this system may be particularly useful. Archived endometrial biopsies from 70 cases of endometrial carcinoma were graded by 3 independent observers using the WHO/FIGO and CTI grading systems. Overall accuracy, interobserver agreement, and ease of use were assessed. This study found comparable, substantial accuracy between the WHO/FIGO and CTI grading systems (κ=0.71 vs. κ=0.69), with similar setbacks in overgrading (20.9% vs. 25.6% of low-risk tumors). The CTI grading system was not superior to the WHO/FIGO grading system in accuracy of subtyping and grading or in interobserver reproducibility. Although determination of cell type is difficult, the proposed CTI system does not appear to confer any significant advantage over existing grading. PMID:26863477

  6. Under-reporting of notifiable infectious disease hospitalizations: significant improvements in the Irish context.

    PubMed

    Brabazon, E D; Sheridan, A; Finnegan, P; Carton, M W; Bedford, D

    2015-04-01

    Notification of infectious disease is essential for prompt public health action and epidemiological analysis. The aim of this study was to compare national hospitalization data to national notification data in order to assess whether there was significant under-reporting of hospitalized notifiable infectious diseases in recent years in Ireland. All in-patient discharges from public hospitals in the Republic of Ireland from 2006 to 2011 with a principal diagnosis of a notifiable disease were compared with national notification data. It was found that only a potential 1·8% of extra notifications could have arisen from these hospitalization events, representing a tenfold reduction on a previous estimate of under-reporting in the Irish context. Viral meningitis, viral encephalitis, bacterial meningitis not otherwise specified and malaria were the most common diseases for which there were more hospitalizations than notifications reported. The results of this study support the conclusion that the reduction in under-reporting can mainly be accounted for by the introduction of laboratories as notifiers in conjunction with the roll-out of the Computerized Infectious Disease Reporting system (CIDR). However, for the diseases highlighted, the notification data underestimate the true burden of disease, and this has implications for understanding the epidemiology of these diseases. PMID:25035904

  7. Initial guess by improved population-based intelligent algorithms for large inter-frame deformation measurement using digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhao, Jia-qing; Zeng, Pan; Lei, Li-ping; Ma, Yuan

    2012-03-01

    Digital image correlation (DIC) has received widespread research attention and application in experimental mechanics. In DIC, the performance of the subpixel registration algorithm (e.g., the Newton-Raphson method or quasi-Newton (qN) method) relies heavily on the initial guess of deformation. In the case of small inter-frame deformation, the initial guess can be found by a simple search scheme, the coarse-fine search for instance. For large inter-frame deformation, however, it is difficult for a simple search scheme to robustly estimate displacement parameters and deformation parameters simultaneously at low computational cost. In this paper, we propose three improvement strategies, i.e. the Q-stage evolutionary strategy (T), the parameter control strategy (C) and the space expanding strategy (E), combine them into three population-based intelligent algorithms (PIAs), i.e. the genetic algorithm (GA), differential evolution (DE) and particle swarm optimization (PSO), and finally derive eighteen different algorithms to calculate the initial guess for qN. The eighteen algorithms were compared in three sets of experiments, including large rigid body translation, finite uniaxial strain and large rigid body rotation, and the results showed the effectiveness of the proposed improvement strategies. Among all compared algorithms, DE-TCE performs best; it is robust, convenient and efficient for large inter-frame deformation measurement.
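
    As a flavour of how a population-based global search can supply the initial guess, the sketch below uses scipy's differential evolution to find a rigid translation that maximizes the zero-normalized cross-correlation (ZNCC) of one subset. The paper's algorithms also evolve deformation parameters and add the T/C/E strategies, which are omitted here; the search bounds and subset size are placeholders.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import differential_evolution

def initial_guess(ref, cur, center, half=15, max_disp=50):
    """Global search for the (u, v) translation of one subset, to be
    refined afterwards by Newton-Raphson / quasi-Newton iteration."""
    cy, cx = center
    ys, xs = np.mgrid[cy - half:cy + half + 1, cx - half:cx + half + 1]
    f = ref[ys, xs].astype(float)
    f0 = (f - f.mean()) / (f.std() + 1e-12)

    def cost(p):
        u, v = p
        # Bilinear sampling of the deformed image at shifted coordinates
        g = map_coordinates(cur.astype(float), [ys + v, xs + u], order=1)
        g0 = (g - g.mean()) / (g.std() + 1e-12)
        return 1.0 - (f0 * g0).mean()   # 1 - ZNCC, minimized

    res = differential_evolution(
        cost, bounds=[(-max_disp, max_disp), (-max_disp, max_disp)])
    return res.x   # (u, v) initial guess for the subpixel stage
```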

  8. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    PubMed

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification. PMID:23798644
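
    The iteration itself is compact. A plausible numpy rendering, with the Gaussian width sigma standing in for the paper's optimized weighting-function width and the convergence tolerance an assumption:

```python
import numpy as np

def iterative_weighted_cog(signals, xy, sigma=2.0, n_iter=20, tol=1e-4):
    """Iterative position-weighted centre of gravity for an event.

    signals: per-channel amplitudes from the SiPM array;
    xy: (N, 2) channel positions in mm; sigma: Gaussian weighting width,
    tuned to the detector's light spread."""
    pos = (signals[:, None] * xy).sum(0) / signals.sum()   # plain CoG start
    for _ in range(n_iter):
        # Re-weight channels by a Gaussian centred at the current estimate
        w = signals * np.exp(-((xy - pos) ** 2).sum(1) / (2 * sigma ** 2))
        new = (w[:, None] * xy).sum(0) / w.sum()
        if np.linalg.norm(new - pos) < tol:
            return new
        pos = new
    return pos
```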

  9. Routine Testing for Anaerobic Bacteria in Cerebrospinal Fluid Cultures Improves Recovery of Clinically Significant Pathogens

    PubMed Central

    Pittman, Meredith E.; Thomas, Benjamin S.; Wallace, Meghan A.; Weber, Carol J.

    2014-01-01

    In North America, the widespread use of vaccines targeting Haemophilus influenzae type b and Streptococcus pneumoniae has dramatically altered the epidemiology of bacterial meningitis, while the methodology for culturing cerebrospinal fluid (CSF) specimens has remained largely unchanged. The aims of this study were 2-fold: to document the current epidemiology of bacterial meningitis at a tertiary care medical center and to assess the clinical utility of routinely querying for anaerobes in CSF cultures. To that end, we assessed CSF cultures submitted over a 2-year period. A brucella blood agar (BBA) plate, incubated anaerobically for 5 days, was included in the culture procedure for all CSF specimens during the second year of evaluation. In the pre- and postimplementation years, 2,353 and 2,302 CSF specimens were cultured, with 49 and 99 patients having positive culture results, respectively. The clinical and laboratory data for patients with positive cultures were reviewed. Anaerobic bacteria were isolated in the CSF samples from 33 patients post-BBA compared to two patients pre-BBA (P = 0.01). The anaerobic isolates included Bacteroides thetaiotaomicron (n = 1), Propionibacterium species (n = 15), and Propionibacterium acnes (n = 19); all of these isolates were recovered on the BBA. Eight of the 35 patients from whom anaerobic organisms were isolated received antimicrobial therapy. Although six of these patients had central nervous system hardware, two patients did not have a history of a neurosurgical procedure and had community-acquired anaerobic bacterial meningitis. This study demonstrates that the simple addition of an anaerobically incubated BBA to the culture of CSF specimens enhances the recovery of clinically significant anaerobic pathogens. PMID:24622102

  10. New algorithm for integration between wireless microwave sensor network and radar for improved rainfall measurement and mapping

    NASA Astrophysics Data System (ADS)

    Liberman, Y.; Samuels, R.; Alpert, P.; Messer, H.

    2014-10-01

    One of the main challenges for meteorological and hydrological modelling is accurate rainfall measurement and mapping across time and space. To date, the most effective methods for large-scale rainfall estimates are radar, satellites, and, more recently, received signal level (RSL) measurements derived from commercial microwave networks (CMNs). While these methods provide improved spatial resolution over traditional rain gauges, they have their limitations as well. For example, wireless CMNs, which are comprised of microwave links (ML), are dependent upon existing infrastructure and the MLs' arbitrary distribution in space. Radar, on the other hand, is known to be limited in accurately estimating rainfall in urban regions, clutter areas and distant locations. In this paper the pros and cons of the radar and ML methods are considered in order to develop a new algorithm for improving rainfall measurement and mapping, which is based on data fusion of the different sources. The integration is based on an optimal weighted average of the two data sets, taking into account location, number of links, rainfall intensity and time step. Our results indicate that, by using the proposed new method, we not only generate more accurate 2-D rainfall reconstructions, compared with actual rain intensities in space, but also extend the reconstructed maps to the maximum coverage area. By inspecting three significant rain events, we show that our method outperforms CMNs or the radar alone in rain rate estimation, almost uniformly, both for instantaneous spatial measurements, as well as in calculating total accumulated rainfall. These new improved 2-D rainfall maps, as well as the accurate rainfall measurements over large areas at sub-hourly timescales, will allow for improved understanding, initialization, and calibration of hydrological and meteorological models mainly necessary for water resource management and planning.
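
    In sketch form, the fusion is a per-pixel convex combination of the two fields. The weighting rule below (link density and radar range only) is a simplified stand-in for the paper's weights, which also depend on rain intensity and time step; the constants a and b are illustrative.

```python
import numpy as np

def fuse_rainfall(r_radar, r_ml, n_links, d_radar_km, a=0.3, b=0.01):
    """Pixel-wise weighted average of radar and microwave-link rain maps.

    r_radar, r_ml: rain rate fields (mm/h), NaN where a source has no data;
    n_links: ML density per pixel; d_radar_km: range from the radar site."""
    # MLs gain weight where link density is high and far from the radar
    w_ml = (1 - np.exp(-a * n_links)) * (1 - np.exp(-b * d_radar_km))
    # Where one source is missing, fall back to the other; this is what
    # extends the fused map to the maximum coverage area.
    fused = np.where(np.isnan(r_radar), r_ml,
             np.where(np.isnan(r_ml), r_radar,
                      w_ml * r_ml + (1 - w_ml) * r_radar))
    return fused
```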

  11. Multi-objective optimization in spatial planning: Improving the effectiveness of multi-objective evolutionary algorithms (non-dominated sorting genetic algorithm II)

    NASA Astrophysics Data System (ADS)

    Karakostas, Spiros

    2015-05-01

    The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
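
    NSGA-II's core building block, fast non-dominated sorting, is short enough to state directly. The sketch below is the textbook procedure (for minimization), independent of the article's crossover operator and initialization heuristic:

```python
def non_dominated_sort(objs):
    """Fast non-dominated sorting as used in NSGA-II (minimization).

    objs: list of objective tuples; returns fronts as lists of indices,
    front 0 being the current Pareto-optimal set."""
    n = len(objs)
    dominated = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if objs[i] != objs[j]:
                if all(a <= b for a, b in zip(objs[i], objs[j])):
                    dominated[i].append(j)       # i dominates j
                elif all(a <= b for a, b in zip(objs[j], objs[i])):
                    counts[i] += 1               # j dominates i
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]
```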

  12. Use of motion estimation algorithms for improved flux measurements using SO2 cameras

    NASA Astrophysics Data System (ADS)

    Peters, Nial; Hoffmann, Alex; Barnie, Talfan; Herzog, Michael; Oppenheimer, Clive

    2015-07-01

    SO2 cameras are rapidly gaining popularity as a tool for monitoring SO2 emissions from volcanoes. Several different SO2 camera systems have been developed, with varying patterns of image acquisition in space, time and wavelength. Despite this diversity, there are two steps common to the workflows of most of these systems: aligning images of different wavelengths to calculate apparent absorbance, and estimating plume transport speeds; both can be achieved using motion estimation algorithms. Here we present two such algorithms, a Dual Tree Complex Wavelet Transform-based algorithm and the Farnebäck Optical Flow algorithm. We assess their accuracy using a synthetic dataset created with the numeric cloud-resolving model ATHAM, and then apply them to real-world data from Villarrica volcano. Both algorithms are found to perform well, and the ATHAM simulations offer useful datasets for benchmarking and validating future algorithms.
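
    For the optical-flow route, OpenCV's Farnebäck implementation already yields a dense displacement field that can serve both alignment and plume-speed estimation. A minimal usage sketch; the frame filenames, pixel scale and parameter values are placeholders, not the paper's settings.

```python
import cv2
import numpy as np

# Two consecutive, co-registered SO2 camera frames (hypothetical files)
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one (dx, dy) displacement vector per pixel
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

# Pixel displacement per frame; multiply by (metres per pixel) / (frame
# interval in seconds) to obtain a plume transport speed in m/s.
speed_px = np.linalg.norm(flow, axis=2)
```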

  13. An improved multi-objective evolutionary memetic algorithm based on multi-population and its application

    NASA Astrophysics Data System (ADS)

    Xiao, Zhongliang

    2012-04-01

    In this paper, we set up a mathematical model to solve the problem of airport ground services. In this model, we set objective function of cost and time, and the purpose is making it minimized. Base on the analysis of scheduling characteristic, we use the multi-population co-evolutionary Memetic algorithm (MAMC) which is with the elitist strategy to realize the model. From the result we can see that our algorithm is better than the genetic algorithm in this problem and we can see that our algorithm is convergence. So we can summarize that it can be a better optimization to airport ground services problem.

  14. Improving chemical mapping algorithm and visualization in full-field hard x-ray spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong

    2013-12-01

    X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS, enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-dimensional (2D) chemical mapping has been successfully applied to study many functional materials, determining the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with a weighted sum of the standard spectra of the individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute-force enumeration method, and (ii) a constrained least squares minimization algorithm proposed by us. Next, since 2D spectra fitting can be conducted pixel by pixel, both methods can in principle be implemented in parallel. To demonstrate the feasibility of parallel computing for the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the raw data.
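
    The constrained fit itself can be illustrated in a few lines, assuming non-negative weights that are normalised to fractions (one plausible reading of the constraints; the paper's exact formulation may differ):

```python
import numpy as np
from scipy.optimize import nnls

def fit_pixel(spectrum, standards):
    """Constrained least squares for one pixel.

    spectrum: (n_energies,) measured attenuation coefficients;
    standards: (n_energies, n_components) reference spectra.
    Returns per-component fractions summing to 1."""
    w, _ = nnls(standards, spectrum)     # non-negative least squares
    return w / w.sum() if w.sum() > 0 else w

# Per-pixel fits are independent, so a multiprocessing.Pool.map over all
# pixel spectra parallelises the whole map without changing fit_pixel,
# which is what makes the cluster implementation straightforward.
```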

  15. Improvement of phase diversity algorithm for non-common path calibration in extreme AO context

    NASA Astrophysics Data System (ADS)

    Robert, Clélia; Fusco, Thierry; Sauvage, Jean-François; Mugnier, Laurent

    2008-07-01

    Exoplanet direct imaging with a ground-based telescope needs a very high performance adaptive optics (AO) system, so-called eXtreme AO (XAO), a coronagraph device, and a smart imaging process. One limitation of an AO system in operation remains the non-common path aberrations (NCPA). To achieve the ultimate XAO performance, these aberrations have to be measured with a dedicated wavefront sensor placed in the imaging camera focal plane, and then pre-compensated using the AO closed-loop process. In any event, the pre-compensation should minimize the aberrations at the coronagraph focal plane mask. An efficient way to measure the NCPA is the phase diversity technique. A pixel-wise approach is well suited to estimating NCPA on large pupils and subsequent projection onto a deformable mirror with Cartesian geometry. However, it calls for careful regularization for optimal results. The weight of the regularization is written in closed form for unsupervised tuning. The accuracy of NCPA pre-compensation is below 8 nm for a wide range of conditions. Point-by-point phase estimation improves the accuracy of the phase diversity method. The algorithm is validated in simulation and experimentally. It will be implemented in SAXO, the XAO system of the second-generation VLT instrument SPHERE.

  16. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization

    PubMed Central

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate. PMID:26064085
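
    For orientation, the QPSO core that EB-QPSO builds on can be sketched as follows. The elitist breeding step (transposon operators acting on the personal and global bests between iterations) is deliberately omitted, and the contraction-expansion schedule is a common default rather than the paper's setting.

```python
import numpy as np

def qpso(f, bounds, n=30, iters=200, seed=None):
    """Minimal quantum-behaved PSO for minimizing f over box bounds.
    bounds: list of (lo, hi) per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n, lo.size))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters            # contraction-expansion
        mbest = pbest.mean(axis=0)              # mean of personal bests
        phi = rng.random((n, lo.size))
        p = phi * pbest + (1 - phi) * g         # per-particle attractor
        u = rng.random((n, lo.size))
        sign = np.where(rng.random((n, lo.size)) < 0.5, -1.0, 1.0)
        # Quantum position update, clipped back into the search box
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u),
                    lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()
```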

  17. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559

  18. IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D

    SciTech Connect

    Cumberland, R.; Mesina, G.

    2009-01-01

    The RELAP5-3D time step method is used to perform thermo-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of a previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps, to improve execution speed and to control error. The new RELAP5-3D time step method being studied makes the time step proportional to the material Courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine the performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
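
    The stated rules translate into a very small controller. A sketch, with the proportionality constant m defaulting to the best value reported (0.9), the failure behaviour as described above, and the minimum step an assumed safeguard:

```python
def next_time_step(dt_prev, mcl, m=0.9, step_failed=False, dt_min=1e-9):
    """Sketch of the modified RELAP5-3D time-step rule: dt tracks m * MCL,
    may at most double between advancements, and is halved whenever an
    advancement fails or mass error is excessive."""
    if step_failed:
        return max(dt_prev / 2.0, dt_min)
    return max(min(m * mcl, 2.0 * dt_prev), dt_min)
```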

  19. Improvement of Image Quality and Diagnostic Performance by an Innovative Motion-Correction Algorithm for Prospectively ECG Triggered Coronary CT Angiography

    PubMed Central

    Lu, Bin; Yan, Hong-Bing; Mu, Chao-Wei; Gao, Yang; Hou, Zhi-Hui; Wang, Zhi-Qiang; Liu, Kun; Parinella, Ashley H.; Leipsic, Jonathon A.

    2015-01-01

    Objective To investigate the effect of a novel motion-correction algorithm (SnapShot Freeze, SSF) on image quality and diagnostic accuracy in patients undergoing prospectively ECG-triggered CCTA without administering rate-lowering medications. Materials and Methods Forty-six consecutive patients suspected of CAD prospectively underwent CCTA using prospective ECG-triggering without rate control, as well as invasive coronary angiography (ICA). Image quality, interpretability, and diagnostic performance of SSF were compared with conventional multisegment reconstruction without SSF, using ICA as the reference standard. Results All subjects (35 men, 57.6 ± 8.9 years) successfully underwent ICA and CCTA. Mean heart rate was 68.8 ± 8.4 beats/min (range: 50–88 beats/min) without rate-controlling medications during CT scanning. The overall median image quality score (graded 1–4) was significantly increased from 3.0 to 4.0 by the new algorithm in comparison to conventional reconstruction. Overall interpretability was significantly improved, with a significant reduction in the number of non-diagnostic segments (690 of 694, 99.4% vs 659 of 694, 94.9%; P<0.001). However, only the right coronary artery (RCA) showed a statistically significant difference (45 of 46, 97.8% vs 35 of 46, 76.1%; P = 0.004) on a per-vessel basis in this regard. Diagnostic accuracy for detecting ≥50% stenosis was improved using the motion-correction algorithm on per-vessel [96.2% (177/184) vs 87.0% (160/184); P = 0.002] and per-segment [96.1% (667/694) vs 86.6% (601/694); P <0.001] levels, but there was not a statistically significant improvement on a per-patient level [97.8 (45/46) vs 89.1 (41/46); P = 0.203]. By artery analysis, diagnostic accuracy was improved only for the RCA [97.8% (45/46) vs 78.3% (36/46); P = 0.007]. Conclusion The intracycle motion correction algorithm significantly improved image quality and diagnostic interpretability in patients undergoing CCTA with prospective ECG triggering and

  20. Interobserver reliability to interpret intrapartum electronic fetal heart rate monitoring: Does a standardized algorithm improve agreement among clinicians?

    PubMed

    Uccella, S; Cromi, A; Colombo, G F; Bogani, G; Casarin, J; Agosti, M; Ghezzi, F

    2015-04-01

    Our aim was to investigate the accuracy in predicting intrapartum fetal acidaemia and the interobserver reproducibility of a mathematical algorithm for the interpretation of electronic fetal heart rate (FHR) monitoring throughout labour. Eight physicians (blinded to the clinical outcomes of the deliveries) evaluated four randomly selected intrapartum FHR tracings by common visual interpretation, trying to predict umbilical artery base excess at birth. They subsequently were asked to re-evaluate the tracings using a mathematical algorithm for FHR tracing interpretation. Common visual interpretation allowed a correct estimation of the umbilical artery base excess in 34.4% of cases, with a poor interobserver reproducibility (Kappa correlation coefficient = 0.24). After implementation of the algorithm, the proportion of correct estimates significantly increased to 90.6% (p < 0.001), with excellent inter-clinician agreement (κ: 0.85). To conclude, incorporation of a standardised algorithm reduces the interobserver variability and allows a better estimation of fetal acidaemia at birth. PMID:25254299

  1. A crack extraction algorithm based on improved median filter and Hessian matrix

    NASA Astrophysics Data System (ADS)

    Zhao, Yafeng; Zhao, Qiancheng; He, Yongbiao; Lu, Guofeng

    2016-01-01

    Existing crack extraction algorithms have difficulty achieving fast and accurate extraction of cracks from images. To address this, a crack detection algorithm based on a median filter and the Hessian matrix is proposed. First, the crack gray-scale image is median-filtered in four directions (horizontal, 45 degrees, vertical, and -45 degrees), which removes noise and yields a rough crack extraction. Then, exploiting the Hessian matrix's suitability for extracting linear image features, the Hessian is formed by convolution with Gaussian derivative operators, and the crack is further extracted from the eigenvalue response while varying the standard deviation of the Gaussian. The validity of the proposed algorithm is verified by comparison with other crack extraction algorithms; the results show a clearly higher accuracy rate in crack extraction.
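
    The eigenvalue-response step can be sketched as follows: the Hessian is obtained by convolving the image with Gaussian derivative kernels, and a crack (a dark, line-like feature) shows up as one large positive eigenvalue. This is a generic ridge-filter sketch, not the authors' exact implementation:

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter

        def crack_response(img, sigma):
            """Ridge response from the eigenvalues of the Gaussian-smoothed
            Hessian (generic ridge filter, not the authors' exact pipeline)."""
            img = np.asarray(img, dtype=float)
            Hxx = gaussian_filter(img, sigma, order=(0, 2))     # d2/dx2
            Hyy = gaussian_filter(img, sigma, order=(2, 0))     # d2/dy2
            Hxy = gaussian_filter(img, sigma, order=(1, 1))     # d2/dxdy
            root = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
            l1 = 0.5 * (Hxx + Hyy + root)                       # larger eigenvalue
            return np.maximum(l1, 0.0)              # dark ridges give l1 >> 0

        # given a raw grayscale image `raw`: rough pre-extraction with a median
        # filter, then a multi-scale eigenvalue response:
        # img = median_filter(raw, size=5)
        # resp = np.max([crack_response(img, s) for s in (1, 2, 4)], axis=0)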

  2. Significant Improvements to LOGIST.

    ERIC Educational Resources Information Center

    Wingersky, Marilyn S.

    The computer program LOGIST (Wingersky, Patrick, and Lord, 1988) estimates the item parameters and examinees' abilities for Birnbaum's three-parameter logistic item response theory model, using Newton's method to solve the joint maximum likelihood equations. In 1989, Martha Stocking discovered a problem with this procedure: when the…
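
    For context, the model whose parameters LOGIST estimates gives the probability of a correct response as a three-parameter logistic function of ability. A minimal sketch (D = 1.7 is the conventional scaling constant):

        import math

        def p_correct(theta, a, b, c, D=1.7):
            """Birnbaum 3PL: correct-response probability for ability theta,
            discrimination a, difficulty b and guessing parameter c."""
            return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

        print(p_correct(theta=0.5, a=1.2, b=0.0, c=0.2))   # about 0.79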

  3. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-Scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function that utilizes constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues in locating earthquake clusters/aftershocks: the massive amount of time and labour required to locate a large number of seismic events manually, and the difficulty of efficiently and correctly identifying the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that individual P and S phases arrive at different stations in different orders, making correct phase picking nearly impossible. Using these very complicated waveforms as the input, ISSA scans the entire model space for possible combinations of time and location at which seismic sources may exist. The scanning results successfully associate the various phases from each event at all stations and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events, and the precise arrival times of P and S phases are difficult to pick.
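
    The heart of SSA is a brightness function that stacks station amplitudes at the arrival times predicted for each trial source. A minimal single-phase sketch, assuming hypothetical inputs waveforms[n_sta, n_samp] and per-station travel times for one trial grid point (the paper's modified function combines multiple phases):

        import numpy as np

        def brightness(waveforms, dt, travel_times, t0):
            """Mean normalized |amplitude| at the predicted arrivals for one
            candidate source at origin time t0 (single-phase sketch; the
            paper's modified brightness combines P and S constraints)."""
            env = np.abs(waveforms)
            env = env / env.max(axis=1, keepdims=True)   # per-station normalization
            idx = np.round((t0 + travel_times) / dt).astype(int)
            idx = np.clip(idx, 0, env.shape[1] - 1)
            return env[np.arange(env.shape[0]), idx].mean()

        # scanning all (t0, grid point) pairs and keeping the brightest spots
        # recovers likely sources without any manual phase picking.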

  4. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA on a particular problem type depends on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of increasing quality. For a given problem, different algorithms are observed to produce a variety of final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter an algorithm's searching behavior, in both the objective and decision spaces, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three attributes: (1) the effectiveness of the search in improving solution quality and entering optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, in which a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics in giving greater insight into how the internal operators affect each algorithm's searching behavior.
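
    As a concrete, purely illustrative example of an exploration measure of the kind attribute (2) calls for, one can track the mean pairwise distance between the ants' decision vectors at each iteration; this is an assumed stand-in, not necessarily one of the paper's metrics:

        import numpy as np

        def mean_pairwise_distance(population):
            """Mean pairwise Euclidean distance between decision vectors
            (illustrative proxy for exploration): it shrinks as the colony
            concentrates on one region of the search space."""
            P = np.asarray(population, dtype=float)
            diff = P[:, None, :] - P[None, :, :]
            dist = np.sqrt((diff ** 2).sum(axis=-1))
            n = len(P)
            return dist.sum() / (n * (n - 1))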

  5. Improved Power System Stability Using Backtracking Search Algorithm for Coordination Design of PSS and TCSC Damping Controller.

    PubMed

    Niamul Islam, Naz; Hannan, M A; Mohamed, Azah; Shareef, Hussain

    2016-01-01

    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over the different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques, and several limitations of those techniques prevent the optimum design of coordinated controllers. In this paper, an alternative technique for robust damping of oscillations is presented using the backtracking search algorithm (BSA). A 5-area, 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted on a linear time-invariant (LTI) model of the power system and includes formulating a multi-objective function from the system eigenvalues. Nonlinear time-domain simulations are then used to compare the damping performance for different local and inter-area modes of power system oscillation. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) in terms of coordinated design efficiency, with damping performance compared by the settling time and overshoot of the oscillations. The results verify that the BSA-based design improves system stability significantly: the stability of the multimachine power system is improved by up to 74.47% for an inter-area mode and 79.93% for a local mode of oscillation. The proposed technique for coordinated design thus has great potential to improve power system stability and maintain secure operation. PMID:26745265
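
    The eigenvalue-based objective can be illustrated with the standard damping-ratio computation for an LTI model x' = Ax; a coordinated design then seeks to raise the smallest ratio (a sketch under that assumption, not the paper's exact multi-objective function):

        import numpy as np

        def damping_ratios(A):
            """Damping ratios of the oscillatory modes of x' = Ax
            (sketch; not the paper's exact objective formulation)."""
            lam = np.linalg.eigvals(A)
            osc = lam[np.abs(lam.imag) > 1e-9]      # keep complex (oscillatory) modes
            return -osc.real / np.abs(osc)          # zeta = -sigma / |lambda|

        # a coordinated PSS/TCSC design can then maximise damping_ratios(A).min()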

  6. Vibration control of a flexible clamped-clamped plate based on an improved FULMS algorithm and laser displacement measurement

    NASA Astrophysics Data System (ADS)

    Xie, Lingbo; Qiu, Zhi-cheng; Zhang, Xian-min

    2016-06-01

    This paper presents a novel active resonant vibration control experiment on a flexible clamped-clamped plate using an improved filtered-U least mean square (FULMS) algorithm and laser displacement measurement. Unlike the widely used PZT sensors or acceleration transducers, the vibration of the flexible clamped-clamped plate is measured by a non-contact laser displacement sensor, which offers higher measurement accuracy and imposes no additional load on the plate. The conventional FULMS algorithm typically uses a fixed step size and needs a reference signal correlated with the external disturbance. However, a fixed step size cannot achieve both fast convergence and a low residual error, so a variable step-size method is investigated. In addition, because it is difficult in practice to extract a reference signal directly related to the vibration source, a reference signal is constructed from the controller parameters and the vibration residual signal. The experimental results demonstrate that the improved FULMS algorithm achieves better vibration suppression than both proportional-derivative (PD) feedback control and the fixed step-size algorithm.
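
    The variable step-size idea can be sketched on a plain LMS update (the full FULMS additionally carries feedback taps and a filtered reference constructed as described above); the step-size schedule below is a Kwong-Johnston style rule, assumed here rather than taken from the paper:

        import numpy as np

        def vss_lms(d, x, L=16, mu_min=1e-4, mu_max=0.05, alpha=0.97, gamma=4e-4):
            """LMS whose step size grows with the squared error and shrinks as
            the filter converges (assumed schedule, not the paper's exact one)."""
            w, mu = np.zeros(L), mu_min
            y = np.zeros(len(d))
            for n in range(L, len(d)):
                xv = x[n - L:n][::-1]               # most recent L reference samples
                y[n] = w @ xv
                e = d[n] - y[n]                     # residual vibration signal
                mu = float(np.clip(alpha * mu + gamma * e * e, mu_min, mu_max))
                w += mu * e * xv
            return y, w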

  7. Natalizumab Significantly Improves Cognitive Impairment over Three Years in MS: Pattern of Disability Progression and Preliminary MRI Findings

    PubMed Central

    Mattioli, Flavia; Stampatori, Chiara; Bellomi, Fabio; Scarpazza, Cristina; Capra, Ruggero

    2015-01-01

    Previous studies have reported that multiple sclerosis (MS) patients treated with natalizumab for one or two years exhibit a significant reduction in relapse rate and in cognitive impairment, but the long-term effects on cognitive performance are unknown. This study aimed to evaluate the effects of natalizumab on cognitive impairment in a cohort of 24 consecutive patients with relapsing-remitting MS treated for 3 years. The neuropsychological tests, as well as relapse number and EDSS, were assessed at baseline and yearly for three years. The impact on cortical atrophy was examined only in a subgroup of patients, and those findings are therefore preliminary. Results showed a significant reduction in the number of impaired neuropsychological tests after three years, a significant decrease in annualized relapse rate at each time point compared to baseline, and a stable EDSS. In the neuropsychological assessment, a significant improvement in memory, attention and executive function test scores was detected. Preliminary MRI data show that, while GM volume did not change at 3 years, significantly greater parahippocampal and prefrontal gray matter density was noticed, the former correlating with neuropsychological improvement on a memory test. This study showed that natalizumab therapy helps improve cognitive performance and is likely to have a protective role on grey matter over a three-year follow-up. PMID:26148120

  8. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope, and we present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics, and is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using the expected observation error; the discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors in the discharge estimates was shown to improve algorithm performance. The error budget shows that the primary source of uncertainty was the a priori uncertainty in the bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics; for example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
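
    The uncertainty figures above refer to discharge computed with Manning's equation, Q = (1/n) A R^(2/3) S^(1/2); a minimal sketch with made-up channel numbers:

        def manning_discharge(n, A, P, S):
            """Manning's equation (SI units): n roughness, A flow area,
            P wetted perimeter, S slope; R = A / P is the hydraulic radius."""
            R = A / P
            return (1.0 / n) * A * R ** (2.0 / 3.0) * S ** 0.5

        # illustrative channel values: n = 0.03, A = 250 m^2, P = 120 m, S = 5e-5
        print(manning_discharge(0.03, 250.0, 120.0, 5e-5))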

  9. An improved distributed routing algorithm for Benes-based optical NoC

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Gu, Huaxi; Yang, Yintang

    2010-08-01

    Integrated optical interconnect is believed to be one of the main technologies to replace electrical wires, and the Optical Network-on-Chip (ONoC) has therefore attracted increasing attention. The Benes topology is a good choice for ONoC because of its rearrangeably non-blocking character, multistage structure and easy scalability. The routing algorithm plays an important role in determining the performance of an ONoC, but traditional routing algorithms for the Benes network are not well suited to ONoC communication. We therefore developed a new distributed routing algorithm for Benes-based ONoC. Our algorithm selects the routing path dynamically according to network conditions and enables more path choices for messages traveling through the network. We used OPNET to evaluate the performance of our routing algorithm and compared it with a well-known bit-controlled routing algorithm. End-to-end (ETE) delay and throughput were examined for different packet lengths and network sizes. Simulation results show that our routing algorithm provides better performance for ONoC.
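
    For reference, the classic bit-controlled scheme used as the baseline routes self-similarly through a 2^n-port Benes network: the last n stages follow the destination address bits, while the first n - 1 switch settings are free (chosen randomly in this sketch; the proposed algorithm instead chooses them from network conditions):

        import random

        def benes_route(dst, n):
            """Per-stage output ports (0 = up, 1 = down) across the 2n - 1
            stages of a 2**n-port Benes network (generic bit-controlled
            baseline, not the paper's distributed algorithm)."""
            free = [random.randint(0, 1) for _ in range(n - 1)]  # load balancing
            tag = [(dst >> (n - 1 - i)) & 1 for i in range(n)]   # MSB first
            return free + tag

        print(benes_route(dst=5, n=3))   # an 8x8 Benes has 5 stages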

  10. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem; it is a typical multi-constraint, multi-objective, network-structured optimization decision-making problem subject to many influencing factors and constraints, and little research has been carried out in this field so far. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of the known symptoms and the causal relationships among components, a multi-objective optimization model is set up with minimizing the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix constrains the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism balance conflicts between the optimization objectives. Finally, a Pareto optimal set is obtained. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, from which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint, multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can reason about and locate fault positions precisely by solving the multi-objective fault diagnosis model, providing a new method for multi-constraint, multi-objective fault diagnosis and reasoning.
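
    The generic pseudo-random-proportional rule that the paper modifies works as follows: with probability q0 an ant greedily exploits the best pheromone-heuristic product, otherwise it samples roulette-wheel fashion. A sketch of the generic rule, assuming the reachability-matrix masking has already filtered the candidate set:

        import random

        def choose_next(candidates, tau, eta, q0=0.9, beta=2.0):
            """Generic ACS-style transition (not the paper's modified rule):
            tau = pheromone level, eta = heuristic value per candidate node."""
            score = {j: tau[j] * eta[j] ** beta for j in candidates}
            if random.random() < q0:
                return max(score, key=score.get)            # exploitation
            r = random.uniform(0.0, sum(score.values()))    # biased exploration
            acc = 0.0
            for j, s in score.items():
                acc += s
                if acc >= r:
                    return j
            return j                                        # numerical fallback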

  11. Improved blood velocity measurements with a hybrid image filtering and iterative Radon transform algorithm

    PubMed Central

    Chhatbar, Pratik Y.; Kara, Prakash

    2013-01-01

    Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
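
    The angle search itself can be written compactly: the Radon projection taken parallel to the streaks maximises the variance across projection bins, and iterating from a coarse to a fine angle grid gives precision without a proportional increase in the number of transforms. A sketch using scikit-image (parameter values are illustrative, not the authors' settings):

        import numpy as np
        from skimage.transform import radon

        def streak_angle(block, coarse=180, fine=21):
            """Coarse-to-fine Radon search for the streak angle in degrees
            (illustrative grid sizes; the paper adds Sobel pre-filtering)."""
            thetas = np.linspace(0.0, 180.0, coarse, endpoint=False)
            for _ in range(2):                  # one coarse pass, one fine pass
                sino = radon(block - block.mean(), theta=thetas, circle=False)
                best = thetas[np.argmax(sino.var(axis=0))]
                thetas = np.linspace(best - 2.0, best + 2.0, fine)
            return best     # velocity follows from the angle and scan geometry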

  12. AsteroidZoo: A New Zooniverse project to detect asteroids and improve asteroid detection algorithms

    NASA Astrophysics Data System (ADS)

    Beasley, M.; Lewicki, C. A.; Smith, A.; Lintott, C.; Christensen, E.

    2013-12-01

    We present a new citizen science project: AsteroidZoo. A collaboration between Planetary Resources, Inc., the Zooniverse team, and the Catalina Sky Survey, it will bring the science of asteroid identification to the citizen scientist. Volunteer astronomers have proved to be a critical asset in the identification and characterization of asteroids, especially potentially hazardous objects, but these contributions have to date required that the volunteer possess a moderate telescope and the ability and willingness to respond to observing requests. Our new project will make data collected by the Catalina Sky Survey (CSS), currently the most productive asteroid survey, usable by anyone with sufficient interest and an internet connection. As previous Zooniverse work has demonstrated, citizen scientists are superb at classifying objects. Even the best automated searches require human intervention to identify new objects, and those searches are optimized to reduce false-positive rates and to prevent a single operator from being overloaded with requests. With access to the large number of people in the Zooniverse, we can avoid that problem and instead work to produce a complete detection list. Each frame from CSS will be searched in detail, generating a large number of new detections, allowing us to evaluate the completeness of the CSS data set and potentially to provide improvements to the automated pipeline. The data corpus produced by AsteroidZoo will also serve as a training environment for future machine-learning challenges. Our goals include a more complete asteroid-detection algorithm and a minimal-computation program that skims the cream of the data, suitable for implementation on small spacecraft. We aim to have the site go live in fall 2013.

  13. Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators

    NASA Astrophysics Data System (ADS)

    Helmich-Paris, Benjamin; Visscher, Lucas

    2016-09-01

    We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum-point search is one of the two key steps in finding the minimax approximation. Strategies for an algorithm robust enough to avoid pre-tabulated initial guesses have not been discussed so far. We compare our non-heuristic approach with a bracketing-and-bisection algorithm and demonstrate that three times fewer function evaluations are required overall when applying it to typical non-relativistic and relativistic quantum chemical systems.
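
    In this setting the quantity being approximated is the orbital energy denominator, replaced by a short exponential sum whose weights and exponents solve a minimax problem; in the standard notation of this literature,

        \frac{1}{x} \;=\; \int_0^\infty e^{-xt}\,dt
                \;\approx\; \sum_{k=1}^{K} \omega_k\, e^{-\alpha_k x},
        \qquad x = \varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j > 0,

        \{\omega_k,\alpha_k\} \;=\; \arg\min_{\omega,\,\alpha}\;
                \max_{x \in [x_{\min},\, x_{\max}]}
                \left| \frac{1}{x} - \sum_{k=1}^{K} \omega_k\, e^{-\alpha_k x} \right| .

    The extremum points of the error function are exactly where the equioscillation property of the minimax solution is enforced, which is why a robust extremum search matters.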

  14. Improved motion contrast and processing efficiency in OCT angiography using complex-correlation algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng

    2016-02-01

    Complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both intensity and phase information. However, due to involuntary bulk tissue motion, complex-valued OCT raw data are conventionally processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs), and extracting flow signals; this complicated procedure imposes a massive computational load. To mitigate this problem, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow-signal extraction and BIS correction, and it does not need GPF compensation. The method provides high processing efficiency and superior motion contrast. The feasibility and performance of the proposed CC algorithm are demonstrated using both flow-phantom and live-animal experiments.
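
    A generic inter-frame complex-correlation estimator (the paper's exact kernel and weighting may differ) can be written directly on two repeated complex B-frames; static tissue gives values near 0 and decorrelating flow gives values near 1:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def flow_contrast(f1, f2, k=3):
            """Generic estimator (not necessarily the paper's exact form):
            1 - |<f1 conj(f2)>| / sqrt(<|f1|^2><|f2|^2>) over a k x k kernel."""
            prod = f1 * np.conj(f2)
            num = np.abs(uniform_filter(prod.real, k)
                         + 1j * uniform_filter(prod.imag, k))
            den = np.sqrt(uniform_filter(np.abs(f1) ** 2, k) *
                          uniform_filter(np.abs(f2) ** 2, k))
            return 1.0 - num / (den + 1e-12)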

  15. Improvements on particle swarm optimization algorithm for velocity calibration in microseismic monitoring

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Wen, Jian; Chen, Xiaofei

    2015-07-01

    In this paper, we apply particle swarm optimization (PSO), an artificial intelligence technique, to velocity calibration in microseismic monitoring. We ran simulations with four 1-D layered velocity models and three different initial model ranges. The results using the basic PSO algorithm were reliable and accurate for simple models but unsuccessful for complex models. We therefore propose the staged shrinkage strategy (SSS) for the PSO algorithm. The SSS-PSO algorithm produced robust inversion results and had a fast convergence rate. We also investigated the effect of PSO's velocity clamping factor on algorithm reliability and computational efficiency. The velocity clamping factor had little impact on the reliability and efficiency of basic PSO, whereas it had a large effect on the efficiency of SSS-PSO. Reassuringly, SSS-PSO exhibits only marginal reliability fluctuations, which suggests that it can be confidently implemented.
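
    The velocity clamping factor studied here enters the basic PSO update as a hard bound on each velocity component, typically expressed as a fraction of the search-range width. A minimal sketch with conventional parameter values (w, c1, c2 are standard defaults, not the paper's settings):

        import numpy as np

        def pso_step(X, V, pbest, gbest, vmax, w=0.72, c1=1.49, c2=1.49):
            """One basic PSO iteration for a swarm X with velocities V."""
            r1 = np.random.rand(*X.shape)
            r2 = np.random.rand(*X.shape)
            V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
            V = np.clip(V, -vmax, vmax)             # velocity clamping
            return X + V, V

        # vmax = clamping_factor * (upper_bound - lower_bound)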

  16. A new algorithm for evaluating 3D curvature and curvature gradient for improved fracture detection

    NASA Astrophysics Data System (ADS)

    Di, Haibin; Gao, Dengliang

    2014-09-01

    In 3D seismic interpretation, both curvature and curvature gradient are useful seismic attributes for structure characterization and fault detection in the subsurface. However, existing algorithms are computationally intensive and limited in lateral resolution for steeply dipping formations. This study presents new, robust volume-based algorithms that evaluate both curvature and curvature-gradient attributes more accurately and efficiently. The algorithms first fit a local surface to the seismic data and then compute the attributes from the spatial derivatives of the fitted surface. Specifically, the curvature algorithm constructs a quadratic surface using a rectangular 9-node grid cell, whereas the curvature-gradient algorithm builds a cubic surface using a diamond 13-node grid cell. A dip-steering approach based on 3D complex seismic trace analysis is implemented to enhance the accuracy of surface construction and to reduce computational time. Applications to two 3D seismic surveys demonstrate the accuracy and efficiency of the new curvature and curvature-gradient algorithms for characterizing faults and fractures in fractured reservoirs.
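
    The 9-node quadratic fit can be illustrated directly: least-squares fit z = ax^2 + by^2 + cxy + dx + ey + f over the 3 x 3 cell, then read curvature off the coefficients (simplified expressions with the gradient terms neglected, as is common for seismic curvature attributes; not the authors' full dip-steered implementation):

        import numpy as np

        def quadratic_curvatures(z):
            """Mean and Gaussian curvature at the centre of a 3x3 node cell
            (unit grid spacing assumed; gradient terms neglected)."""
            y, x = np.mgrid[-1:2, -1:2]
            x, y, z = x.ravel(), y.ravel(), np.asarray(z, dtype=float).ravel()
            G = np.column_stack([x * x, y * y, x * y, x, y, np.ones(9)])
            a, b, c, d, e, f = np.linalg.lstsq(G, z, rcond=None)[0]
            return a + b, 4.0 * a * b - c * c       # k_mean, k_gauss

        print(quadratic_curvatures([[0, 1, 0], [1, 2, 1], [0, 1, 0]]))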

  17. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset using an Improved Adaptive Genetic Algorithm (IAGA) and an Extreme Learning Machine (ELM). IAGA employs a new scaling technique (sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. The search algorithm is assessed with three different fitness functions (two single-objective fitness functions and one multi-objective fitness function). The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset selected by IAGA. The classification results are also compared with those of other feature-reduction techniques to substantiate IAGA's exhaustive search towards the global optimum, and five other benchmark datasets are used to gauge the strength of the proposed algorithm. PMID:25793009
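
    Sigma scaling, the technique IAGA adopts against premature convergence, rescales raw fitness by its distance from the population mean in units of the standard deviation; a common textbook form is sketched below (the paper's variant may differ in detail):

        import numpy as np

        def sigma_scale(fitness, c=2.0, floor=0.1):
            """Textbook sigma scaling (not necessarily IAGA's exact variant):
            expected selection weight 1 + (f - mean) / (c * std); when the
            population has converged (std = 0) everyone gets weight 1."""
            f = np.asarray(fitness, dtype=float)
            s = f.std()
            if s == 0.0:
                return np.ones_like(f)
            return np.maximum(1.0 + (f - f.mean()) / (c * s), floor)

        print(sigma_scale([10.0, 12.0, 15.0, 30.0]))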

  18. Asymptotic analysis of online algorithms and improved scheme for the flow shop scheduling problem with release dates

    NASA Astrophysics Data System (ADS)

    Bai, Danyu

    2015-08-01

    This paper discusses the flow shop scheduling problem of minimising the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation focuses on the performance of two online algorithms based on the Shortest Processing Time among Available jobs (SPTA) rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale becomes sufficiently large. To further enhance the quality of the original solutions, an improvement scheme is provided for these algorithms. A new lower bound with a performance guarantee is derived, and computational experiments show the effectiveness of the heuristics. Moreover, several results for the single-machine TQCT problem with release dates are obtained in the course of proving the main theorem.
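
    The dispatching rule underlying both online algorithms is easy to state on a single machine (shown here for concreteness; the paper applies it machine-by-machine in the flow shop): whenever the machine becomes free, start the shortest already-released job.

        def spta_schedule(jobs):
            """Single-machine illustration of the SPTA rule. jobs: list of
            (release_date, processing_time). Returns the processing order
            and the total quadratic completion time."""
            pending = sorted(jobs)                  # by release date
            avail, order, t, tqct = [], [], 0, 0
            while pending or avail:
                while pending and pending[0][0] <= t:
                    avail.append(pending.pop(0))
                if not avail:                       # machine idles until next release
                    t = pending[0][0]
                    continue
                avail.sort(key=lambda job: job[1])  # shortest processing time first
                r, p = avail.pop(0)
                t += p
                tqct += t * t                       # quadratic completion-time cost
                order.append((r, p))
            return order, tqct

        print(spta_schedule([(0, 5), (1, 2), (4, 1)]))   # ([(0,5),(4,1),(1,2)], 125)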