Science.gov

Sample records for algorithm substantially improves

  1. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  2. Co-Inheritance Analysis within the Domains of Life Substantially Improves Network Inference by Phylogenetic Profiling

    PubMed Central

    Shin, Junha; Lee, Insuk

    2015-01-01

Phylogenetic profiling, a network inference method based on gene inheritance profiles, has been widely used to construct functional gene networks in microbes. However, its utility for network inference in higher eukaryotes has been limited. An improved algorithm with an in-depth understanding of pathway evolution may overcome this limitation. In this study, we investigated the effects of taxonomic structures on co-inheritance analysis using 2,144 reference species in four query species: Escherichia coli, Saccharomyces cerevisiae, Arabidopsis thaliana, and Homo sapiens. We observed three clusters of reference species based on a principal component analysis of the phylogenetic profiles, which correspond to the three domains of life—Archaea, Bacteria, and Eukaryota—suggesting that pathways inherit primarily within specific domains or lower-ranked taxonomic groups during speciation. Hence, the co-inheritance pattern within a taxonomic group may be eroded by confounding inheritance patterns from irrelevant taxonomic groups. We demonstrated that co-inheritance analysis within domains substantially improved network inference not only in microbe species but also in the higher eukaryotes, including humans. Although we observed two sub-domain clusters of reference species within Eukaryota, co-inheritance analysis within these sub-domain taxonomic groups only marginally improved network inference. Therefore, we conclude that co-inheritance analysis within domains is the optimal approach to network inference with the given reference species. The construction of a series of human gene networks with increasing sample sizes of the reference species for each domain revealed that the size of the high-accuracy networks increased as additional reference species genomes were included, suggesting that within-domain co-inheritance analysis will continue to expand human gene networks as genomes of additional species are sequenced. Taken together, we propose that co…
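The core scoring step described above can be illustrated in a few lines. The sketch below (Python, with hypothetical toy data; the paper's actual pipeline and similarity measure may differ) scores gene pairs by correlating binary presence/absence profiles restricted to the reference species of a single domain, which is the within-domain co-inheritance idea:

```python
import numpy as np

def coinheritance_scores(profiles, domain_mask):
    """Score gene pairs by profile similarity within one domain.

    profiles    : (genes x reference species) binary presence/absence matrix
    domain_mask : boolean vector selecting the reference species of one
                  domain (e.g. Bacteria), per the within-domain analysis
    Returns a (genes x genes) Pearson correlation matrix.
    """
    sub = profiles[:, domain_mask].astype(float)
    return np.corrcoef(sub)

# Toy example: 4 genes profiled over 8 reference species (first 5 bacterial).
profiles = np.array([
    [1, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 1, 0],   # matches gene 0 only within Bacteria
    [0, 0, 1, 0, 0, 1, 1, 0],
    [0, 0, 1, 0, 0, 0, 1, 1],
])
bacteria = np.array([True] * 5 + [False] * 3)
print(coinheritance_scores(profiles, bacteria).round(2))
```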

  3. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).
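As a rough illustration of the rotation-invariance idea, the sketch below is an assumption-laden toy, not the paper's algorithm: it uses sorted log-distances to neighbor stars directly rather than the LPT-based pattern, and a hypothetical one-entry catalog. Rotating the whole field leaves the feature vector unchanged, which is the property the abstract relies on:

```python
import numpy as np

def star_feature(center, neighbors):
    """Rotation-invariant pattern for one star: sorted log-distances to its
    neighbor stars (rotation permutes neighbors, not the distance set)."""
    d = np.linalg.norm(neighbors - center, axis=1)
    return np.sort(np.log(d))

def identify(center, neighbors, catalog):
    """Nearest catalog pattern under Euclidean distance."""
    f = star_feature(center, neighbors)
    keys = list(catalog)
    errs = [np.linalg.norm(f - catalog[k]) for k in keys]
    return keys[int(np.argmin(errs))]

# Hypothetical mini-catalog of navigation-star patterns.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(6, 2))
catalog = {"star_A": star_feature(pts[0], pts[1:])}
theta = 0.7  # rotate the whole field; the pattern should not change
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rot = pts @ R.T
print(identify(rot[0], rot[1:], catalog))  # -> star_A
```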

  4. Improved Heat-Stress Algorithm

    NASA Technical Reports Server (NTRS)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure the wet-bulb, dry-bulb, and black-globe temperatures. With these improvements, a more realistic WBGT estimate can now be produced. This is extremely useful for researchers and other employees working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimates will make outdoor work sites safer by reducing the likelihood of heat stress.
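For reference, the conventional outdoor WBGT combination of the three thermometer readings is a fixed weighted sum. A minimal sketch follows; the paper's actual contribution, estimating the wet-bulb and globe temperatures from routine weather measurements, is not reproduced here:

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
    """Standard outdoor wet-bulb globe temperature (deg C).

    The 0.7/0.2/0.1 weights are the conventional outdoor formula (ISO 7243);
    the site-specific estimation of the inputs from standard atmospheric
    measurements (the paper's improvement) is not shown.
    """
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

print(wbgt_outdoor(t_wet_bulb=24.0, t_globe=45.0, t_dry_bulb=33.0))  # 29.1
```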

  5. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crudely estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude transmission map into different areas and applies a different guided filter to each area to compute the optimized transmission map. The experimental results demonstrate that the dehazing performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while its average computation time is around 40% of the latter's, and the detection ability of UAV images in fog and haze weather is improved effectively.
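A minimal sketch of the dark-channel-prior transmission estimate that this algorithm starts from is given below (Python with NumPy/SciPy; the patch size, omega, and airlight value are illustrative assumptions, and the paper's edge-guided, area-wise guided filtering is not shown):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    """Crude transmission map t(x) = 1 - omega * dark(I / A); the paper's
    refinement (edge-guided, area-wise guided filtering) is not shown."""
    norm = img / atmosphere[None, None, :]
    return 1.0 - omega * dark_channel(norm, patch)

# Toy hazy image: bright sky-like background with a darker object.
img = np.full((64, 64, 3), 0.8)
img[20:40, 20:40] = (0.2, 0.3, 0.25)
A = np.array([0.9, 0.9, 0.9])     # airlight, e.g. from the brightest pixels
t = estimate_transmission(img, A)
print(t.min().round(2), t.max().round(2))
```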

  6. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
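For context, the baseline HITS iteration that this paper builds on can be sketched as follows; the trust-score and linkfarm-detection methods described above are not shown:

```python
import numpy as np

def hits(adj, iters=50):
    """Kleinberg's HITS: mutually reinforcing hub and authority scores on
    a directed graph given as an adjacency matrix."""
    n = adj.shape[0]
    hubs = np.ones(n)
    for _ in range(iters):
        auth = adj.T @ hubs          # pointed to by good hubs
        auth /= np.linalg.norm(auth)
        hubs = adj @ auth            # pointing to good authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, auth

# Pages 0 and 1 both link to 2 and 3 -> 2,3 become authorities, 0,1 hubs.
adj = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=float)
hubs, auth = hits(adj)
print(hubs.round(2), auth.round(2))
```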

  7. Substantial Improvement of Short Wavelength Response in n-SiNW/PEDOT:PSS Solar Cell

    NASA Astrophysics Data System (ADS)

    Ge, Zhaoyun; Xu, Ling; Cao, Yunqing; Wu, Tao; Song, Hucheng; Ma, Zhongyuan; Xu, Jun; Chen, Kunji

    2015-08-01

We report herein on the effects of silicon nanowires with different morphologies on the device performance of n-SiNW/PEDOT:PSS hybrid solar cells. The power conversion efficiency (PCE) and external quantum efficiency (EQE) of the SiNW/PEDOT:PSS hybrid solar cells can be optimized by varying the length of the silicon nanowires. The optimal length is 0.23 μm, and the hybrid solar cell with this length has a V_oc of 569 mV, a J_sc of 30.1 mA/cm², and a PCE of 9.3%. We fabricated more isolated silicon nanowires with a diluted etching solution, and the J_sc of the hybrid solar cell with more isolated nanowires shows a significant enhancement, from 30.1 to 33.2 mA/cm². A remarkable EQE, in excess of 80%, was also obtained in the wavelength region between 300 and 600 nm. Our work provides a simple method to substantially improve the EQE of hybrid solar cells in the short-wavelength region.

  8. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures require high bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where the requirements of public-key cryptography are prohibitive and it cannot be used. The use of elliptic curves in public-key computations has provided a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD project aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied for. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.

  9. Improving the algorithm of temporal relation propagation

    NASA Astrophysics Data System (ADS)

    Shen, Jifeng; Xu, Dan; Liu, Tongming

    2005-03-01

In a military multi-agent system (MAS), every agent needs to analyze the temporal relationships among tasks or combat behaviors, and it is very important to reflect the battlefield situation in time. The temporal relations among agents are usually very complex, and we model them with an interval algebra (IA) network. An efficient temporal reasoning algorithm is therefore vital in a battle MAS model, and its core is a path consistency algorithm. In this paper we use the Interval Matrix Calculus (IMC) method to represent temporal relations and optimize the path consistency algorithm by improving the efficiency of temporal relation propagation, based on Allen's path consistency algorithm.
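The propagation loop at the heart of path consistency is easy to sketch. The toy below uses the 3-relation point algebra instead of Allen's 13 interval relations (whose 13×13 composition table is too large to list here), but the tightening loop has the same shape:

```python
# Path-consistency propagation on a toy point algebra {<, =, >}; Allen's
# interval algebra uses the same loop with 13 relations and its own
# composition table. rel[i][j] is the set of relations allowed from i to j.
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'}}
INV = {'<': '>', '=': '=', '>': '<'}

def compose(r1, r2):
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

def path_consistency(rel, n):
    """Repeatedly tighten rel[i][j] with rel[i][k] composed with rel[k][j]."""
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    tight = rel[i][j] & compose(rel[i][k], rel[k][j])
                    if tight != rel[i][j]:
                        rel[i][j] = tight
                        rel[j][i] = {INV[r] for r in tight}
                        changed = True
    return rel

# A < B and B < C: propagation should conclude A < C.
U = {'<', '=', '>'}
rel = [[set(U) for _ in range(3)] for _ in range(3)]
for i in range(3):
    rel[i][i] = {'='}
rel[0][1], rel[1][0] = {'<'}, {'>'}
rel[1][2], rel[2][1] = {'<'}, {'>'}
print(path_consistency(rel, 3)[0][2])  # {'<'}
```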

  10. Image enhancement algorithm based on improved lateral inhibition network

    NASA Astrophysics Data System (ADS)

    Yun, Haijiao; Wu, Zhiyong; Wang, Guanjun; Tong, Gang; Yang, Hua

    2016-05-01

    There is often substantial noise and blurred details in the images captured by cameras. To solve this problem, we propose a novel image enhancement algorithm combined with an improved lateral inhibition network. Firstly, we built a mathematical model of a lateral inhibition network in conjunction with biological visual perception; this model helped to realize enhanced contrast and improved edge definition in images. Secondly, we proposed that the adaptive lateral inhibition coefficient adhere to an exponential distribution thus making the model more flexible and more universal. Finally, we added median filtering and a compensation measure factor to build the framework with high pass filtering functionality thus eliminating image noise and improving edge contrast, addressing problems with blurred image edges. Our experimental results show that our algorithm is able to eliminate noise and the blurring phenomena, and enhance the details of visible and infrared images.
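A minimal center-minus-surround sketch of lateral-inhibition enhancement is shown below; the fixed inhibition coefficient k stands in for the paper's adaptive, exponentially distributed coefficient, and the 3×3 median filter stands in for its noise-suppression stage:

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def lateral_inhibition_enhance(img, k=0.6, surround=7):
    """Center-minus-surround lateral inhibition: each pixel is inhibited by
    its neighborhood mean, boosting local contrast and edge definition.
    The fixed k replaces the paper's adaptive exponential coefficient."""
    img = median_filter(img.astype(float), size=3)   # noise suppression
    surround_mean = uniform_filter(img, size=surround)
    out = img - k * surround_mean
    out -= out.min()
    return out / max(out.max(), 1e-12)

# Noisy step edge: enhancement should sharpen the transition.
rng = np.random.default_rng(1)
img = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
img += 0.1 * rng.standard_normal((32, 32))
print(lateral_inhibition_enhance(img).shape)
```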

  11. A Statistical Approach to Identifying Schools Demonstrating Substantial Improvement in Student Learning

    ERIC Educational Resources Information Center

    Meyers, Coby; Lindsay, Jim; Condon, Chris; Wan, Yinmei

    2012-01-01

The rising tide behind the school turnaround movement is significant, as national education leaders continue to call for the rapid improvement of the nation's lowest-performing schools. To date, little work has been done to identify schools that are drastically improving their performance. Using publicly available school-level student…

  12. Improving Search Algorithms by Using Intelligent Coordinates

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  13. An on-line template improvement algorithm

    NASA Astrophysics Data System (ADS)

    Yin, Yilong; Zhao, Bo; Yang, Xiukun

    2005-03-01

In an automatic fingerprint identification system, an incomplete or rigid template may lead to false rejection and false matching. Improving the quality of the template, which is called template improvement, is therefore important to automatic fingerprint identification systems. In this paper, we propose a template improvement algorithm. Based on the case-based method of machine learning and probability theory, we improve the template by deleting pseudo minutiae, restoring lost genuine minutiae, and updating minutia information such as positions and directions. A special fingerprint image database was built for this work. Experimental results on this database indicate that our method is effective and that the quality of fingerprint templates is evidently improved. Accordingly, the performance of fingerprint matching also improves steadily as usage time increases.

  14. Compression Techniques for Improved Algorithm Computational Performance

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Howell, Patricia A.; Winfree, William P.

    2005-01-01

Analysis of thermal data requires the processing of large amounts of temporal image data. Processing the data for quantitative information can be time intensive, especially out in the field where large areas are inspected, resulting in numerous data sets. By applying a temporal compression technique, improved algorithm performance can be obtained. In this study, analysis techniques are applied to compressed and non-compressed thermal data, and a comparison is made based on computational speed and defect signal-to-noise ratio.

  15. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points is made more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the crack geometry characteristics, we use a skeleton-extraction method for single-crack length. To calculate the crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experiments show that the errors between this crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
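An eight-direction Sobel operator of the kind being optimized can be sketched as follows; the kernel set and the max-response combination are common conventions assumed here, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import convolve

# Classic 0-degree and 45-degree Sobel kernels; the remaining directions
# are 90-degree rotations, and taking |response| covers their negatives,
# giving eight directions in total.
S0 = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
S45 = np.array([[-2, -1, 0],
                [-1,  0, 1],
                [ 0,  1, 2]], dtype=float)

def sobel8(img):
    """Edge magnitude as the maximum response over eight directions."""
    kernels = [S0, np.rot90(S0), S45, np.rot90(S45)]
    responses = [np.abs(convolve(img.astype(float), k)) for k in kernels]
    return np.max(responses, axis=0)

img = np.zeros((16, 16))
img[:, 8:] = 1.0                 # vertical step edge
print(sobel8(img).max())         # strongest response along the edge
```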

  16. An improved algorithm for wildfire detection

    NASA Astrophysics Data System (ADS)

    Nakau, K.

    2010-12-01

There is strong societal demand for satellite information on wildfire locations, so understanding these demands is important when considering how to improve a wildfire detection algorithm. Interviews and analysis imply that the most important improvements are the geographical resolution of the wildfire product and the classification of fire as smoldering or flaming. Discussions were held with fire service agencies in Alaska and fire service volunteer groups in Indonesia. The Alaska Fire Service (AFS) produces a 3D map overlaid with fire locations every morning; this map is examined by the leaders of fire service teams to decide their firefighting strategy. In particular, firefighters of both agencies seek the best walking path to approach a fire. Because of the mountainous landscape, geospatial resolution is very important to them: walking through bush for 1 km, the extent of a single pixel of the fire product, is very demanding for firefighters. Also, in the case of remote wildfires, fire service agencies use satellite information to decide when to fly an observation mission to confirm the fire status: expanding, flaming, smoldering, or out. It is therefore also very important to provide a classification of fire as flaming or smoldering. Beyond disaster management, wildfires emit a huge amount of carbon into the atmosphere, as much as one quarter to one half of the CO2 emitted by fuel combustion (IPCC AR4), so reducing CO2 emissions from human-caused wildfire is important; estimating carbon emission from wildfire likewise requires good spatial resolution. To improve the sensitivity of wildfire detection, the author adopts radiance-based wildfire detection. Unlike the existing brightness-temperature approach, this makes it easy to account for the reflectance of the background land cover. For GCOM-C1/SGLI in particular, the band for detecting fire at 250 m resolution is at a wavelength of 1.6 μm, where there is much more reflected sunlight. Therefore, we need to…

  17. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert E.

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Sounding." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  18. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert Earl; McHugh, Martin J.; Gordley, Larry L.; Hervig, Mark E.; Russell, James M., III; Douglass, Anne (Technical Monitor)

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth Upper Atmospheric Research Satellite (UARS) Science Investigator Program entitled 'HALOE Algorithm Improvements for Upper Tropospheric Sounding.' The goal of this effort is to develop and implement major inversion and processing improvements that will extend Halogen Occultation Experiment (HALOE) measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  19. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    McHugh, Martin J.; Gordley, Larry L.; Russell, James M., III; Hervig, Mark E.

    1999-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Soundings." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first-year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multi-channel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  20. HALOE Algorithm Improvements for Upper Tropospheric Soundings

    NASA Technical Reports Server (NTRS)

    Thompson, Robert E.; Douglass, Anne (Technical Monitor)

    2000-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Sounding." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  1. MLEM algorithm adaptation for improved SPECT scintimammography

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Feiglin, David H.; Lee, Wei; Kunniyur, Vikram R.; Gangal, Kedar R.; Coman, Ioana L.; Lipson, Edward D.; Karczewski, Deborah A.; Thomas, F. Deaver

    2005-04-01

Standard MLEM and OSEM algorithms used in SPECT Tc-99m sestamibi scintimammography produce hot-spot artifacts (HSA) at the peripheries of the image support. We investigated a suitable adaptation of the MLEM and OSEM algorithms to reduce HSA. Patients with suspicious breast lesions were administered 10 mCi of Tc-99m sestamibi, and SPECT scans were acquired with the patients in the prone position with uncompressed breasts. In addition, to simulate breast lesions, some patients were imaged with a number of breast skin markers, each containing 1 mCi of Tc-99m. To reduce HSA in reconstruction, we removed from the backprojection step those rays that traverse the periphery of the support region on the way to a detector bin when their path length through this region is shorter than some critical length. Such very short paths contribute very low projection counts to the detector bin, and consequently lead to overestimation of the activity in the peripheral voxels in the backprojection step, thus creating HSA. We analyzed the breast-lesion contrast and the suppression of HSA in images reconstructed using the standard and modified MLEM and OSEM algorithms versus the critical path length (CPL). For CPL >= 0.01 pixel size, we observed improved breast-lesion contrast and lower noise in the reconstructed images, and a very significant reduction of HSA in the maximum intensity projection (MIP) images.
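The described adaptation amounts to masking out short-path rays before the MLEM iteration. A toy sketch follows (random system matrix; the row-sum proxy for path length and the CPL value are illustrative assumptions, not the paper's geometry):

```python
import numpy as np

def mlem(A, y, iters=50):
    """Standard MLEM: x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(iters):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

def mlem_cpl(A, y, path_lengths, cpl, iters=50):
    """Modified MLEM: drop rays whose path length through the support
    region is below the critical path length (CPL), the adaptation the
    paper uses to suppress peripheral hot-spot artifacts."""
    keep = path_lengths >= cpl
    return mlem(A[keep], y[keep], iters)

# Toy system: 20 rays x 10 voxels, intersection lengths as weights.
rng = np.random.default_rng(2)
A = rng.uniform(0, 1, size=(20, 10))
x_true = np.ones(10); x_true[4] = 5.0
y = rng.poisson(A @ x_true).astype(float)
path_lengths = A.sum(axis=1)       # proxy for path length through support
print(mlem_cpl(A, y, path_lengths, cpl=4.0).round(2))
```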

  2. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

The new version combines the two algorithms by selecting ranges of the argument omega in which each performs fastest. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible, since frequent calls to a subroutine providing this function are made (e.g., numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied in the algorithm previously published [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By suitable selection of the number of abscissas in Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20, while the accuracy of the results was not affected. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function, H(x, omega), were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature. It has been found, however, that the use of Gauss-Legendre quadrature considerably improved the performance of both algorithms. Two conditions have to be satisfied: (i) the number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with the highest possible accuracy. The abscissas and weights are available for N=16, 20, 24, 32, 40, 48, 64, 80, and 96 with an accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing procedure ROMBERG. Due to the fact that the…
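To illustrate the role of Gauss-Legendre quadrature here, the sketch below computes H(x, omega) by fixed-point iteration of a well-known exact identity; note this is a different integral representation from the ones implemented in the program [2, 3], chosen only because it fits in a few lines:

```python
import numpy as np

def chandrasekhar_h(x, omega, n=32, iters=80):
    """Chandrasekhar H function for isotropic scattering via fixed-point
    iteration of the exact identity
        1/H(mu) = sqrt(1 - omega) + (omega/2) * int_0^1 t H(t)/(mu+t) dt,
    with the integral evaluated by n-point Gauss-Legendre quadrature
    mapped to [0, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (nodes + 1.0)          # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    h = np.ones_like(t)
    for _ in range(iters):
        integ = ((w * t * h)[None, :] / (t[:, None] + t[None, :])).sum(axis=1)
        h = 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integ)
    integ_x = np.sum(w * t * h / (x + t))
    return 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integ_x)

print(chandrasekhar_h(1.0, 1.0))  # ~2.908 for conservative scattering
```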

  3. Improved Algorithms Speed It Up for Codes

    SciTech Connect

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  4. Improving search algorithms by using intelligent coordinates

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm’s performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm’s exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based “player” engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  5. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  6. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard Fibonacci benchmark sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the calculated protein sequence energy value, which proves it to be an effective way to predict the structure of proteins. PMID:25069136

  7. A Multistrategy Optimization Improved Artificial Bee Colony Algorithm

    PubMed Central

    Liu, Wen

    2014-01-01

To address the artificial bee colony algorithm's tendency toward premature convergence and its slow convergence rate, an improved algorithm is proposed. Chaotic reverse-learning strategies are used to initialize the swarm in order to improve the global search ability of the algorithm and maintain its diversity; the similarity degree of individuals in the population is used to characterize population diversity; the population diversity measure is set as an indicator to dynamically and adaptively adjust the nectar positions, so that premature and local convergence are effectively avoided; and a dual-population search mechanism is introduced into the search stage, whose parallel search considerably improves the convergence rate. Simulation experiments on 10 standard test functions and comparisons with other algorithms show that the improved algorithm converges faster and is quicker to jump out of local optima. PMID:24982924

8. Substantial Improvements in Performance Indicators Achieved in a Peripheral Blood Mononuclear Cell Cryopreservation Quality Assurance Program Using Single Donor Samples

    PubMed Central

    Dyer, Wayne B.; Pett, Sarah L.; Sullivan, John S.; Emery, Sean; Cooper, David A.; Kelleher, Anthony D.; Lloyd, Andrew; Lewin, Sharon R.

    2007-01-01

    Storage of high-quality cryopreserved peripheral blood mononuclear cells (PBMC) is often a requirement for multicenter clinical trials and requires a reproducibly high standard of practice. A quality assurance program (QAP) was established to assess an Australia-wide network of laboratories in the provision of high-quality PBMC (determined by yield, viability, and function), using blood taken from single donors (human immunodeficiency virus [HIV] positive and HIV negative) and shipped to each site for preparation and cryopreservation of PBMC. The aim of the QAP was to provide laboratory accreditation for participation in clinical trials and cohort studies which require preparation and cryopreservation of PBMC and to assist all laboratories to prepare PBMC with a viability of >80% and yield of >50% following thawing. Many laboratories failed to reach this standard on the initial QAP round. Interventions to improve performance included telephone interviews with the staff at each laboratory, two annual wet workshops, and direct access to a senior scientist to discuss performance following each QAP round. Performance improved substantially in the majority of sites that initially failed the QAP (P = 0.002 and P = 0.001 for viability and yield, respectively). In a minority of laboratories, there was no improvement (n = 2), while a high standard was retained at the laboratories that commenced with adequate performance (n = 3). These findings demonstrate that simple interventions and monitoring of PBMC preparation and cryopreservation from multiple laboratories can significantly improve performance and contribute to maintenance of a network of laboratories accredited for quality PBMC fractionation and cryopreservation. PMID:17050740

  9. Improved Algorithm For Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1989-01-01

    Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.

  10. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

Among existing segmentation techniques, thresholding is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization method, the differential algorithm (DE) uses a population of potential solutions and decision-making processes, and it has shown considerable success in solving complex optimization problems within a reasonable time limit. Applying this method to the segmentation algorithm is thus a good choice due to its fast computation. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with EA-based thresholding methods, the proposed DE algorithm yields more effective and efficient results, and it also shortens the computation time of the traditional Otsu method.
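A compact sketch of the idea, plain DE/rand/1/bin maximizing Otsu's between-class variance over k thresholds, is given below; the paper's balance strategy is only marked by a comment, and the population size and DE constants are illustrative:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's multilevel criterion: weighted variance of class means,
    to be maximized over the threshold vector."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    mu_total = (p * levels).sum()
    sigma = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        w = p[a:b].sum()
        if w > 0:
            mu = (p[a:b] * levels[a:b]).sum() / w
            sigma += w * (mu - mu_total) ** 2
    return sigma

def de_thresholds(hist, k=3, pop=30, gens=200, F=0.5, CR=0.9, seed=0):
    """Plain DE/rand/1/bin searching k thresholds in [1, 255); a minimal
    stand-in for the paper's balance-strategy DE."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(1, 255, size=(pop, k))
    fit = np.array([between_class_variance(hist, x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            trial = np.where(rng.random(k) < CR, a + F * (b - c), X[i])
            trial = np.clip(trial, 1, 254)
            f = between_class_variance(hist, trial)
            if f > fit[i]:
                X[i], fit[i] = trial, f
        # (the paper adds an exploration/exploitation balance step here)
    return np.sort(X[np.argmax(fit)].astype(int))

# Histogram with three modes -> thresholds should fall between the modes.
hist = np.zeros(256)
for center in (50, 128, 200):
    hist[center - 10:center + 10] += 100
print(de_thresholds(hist, k=2))
```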

  11. Protein kinase inhibitors substantially improve the physical detection of T-cells with peptide-MHC tetramers.

    PubMed

    Lissina, Anna; Ladell, Kristin; Skowera, Ania; Clement, Matthew; Edwards, Emily; Seggewiss, Ruth; van den Berg, Hugo A; Gostick, Emma; Gallagher, Kathleen; Jones, Emma; Melenhorst, J Joseph; Godkin, Andrew J; Peakman, Mark; Price, David A; Sewell, Andrew K; Wooldridge, Linda

    2009-01-01

    Flow cytometry with fluorochrome-conjugated peptide-major histocompatibility complex (pMHC) tetramers has transformed the study of antigen-specific T-cells by enabling their visualization, enumeration, phenotypic characterization and isolation from ex vivo samples. Here, we demonstrate that the reversible protein kinase inhibitor (PKI) dasatinib improves the staining intensity of human (CD8+ and CD4+) and murine T-cells without concomitant increases in background staining. Dasatinib enhances the capture of cognate pMHC tetramers from solution and produces higher intensity staining at lower pMHC concentrations. Furthermore, dasatinib reduces pMHC tetramer-induced cell death and substantially lowers the T-cell receptor (TCR)/pMHC interaction affinity threshold required for cell staining. Accordingly, dasatinib permits the identification of T-cells with very low affinity TCR/pMHC interactions, such as those that typically predominate in tumour-specific responses and autoimmune conditions that are not amenable to detection by current technology.

  12. Grooming of arbitrary traffic using improved genetic algorithms

    NASA Astrophysics Data System (ADS)

    Jiao, Yueguang; Xu, Zhengchun; Zhang, Hanyi

    2004-04-01

A genetic algorithm is proposed with a permutation-based chromosome representation and roulette wheel selection to solve traffic grooming problems in WDM ring networks. The parameters of the algorithm are evaluated through calculations on a large number of traffic patterns under different conditions. Four methods were developed to improve the algorithm, and they can be used in combination with each other. Their effects on the algorithm are studied via computer simulations. The results show that they can all make the algorithm more powerful at reducing the number of add-drop multiplexers or wavelengths required in a network.

  13. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, the convergence of this algorithm is slow, mainly because it is based on gradient descent. Previous research demonstrated that in the feed-forward algorithm, the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function, where the gain value changes adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multilayer feed-forward neural networks are assessed. A physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by means of simulation on four classification problems. In learning the patterns, the simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer data set with an improvement ratio of nearly 2.8, achieved a ratio of 1.76 on the diabetes problem, was 65% better on the thyroid data sets, and was 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
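The gain parameter enters as a = sigmoid(gain × net input), and its gradient update falls out of the chain rule. A single-unit toy sketch (a stand-in for the paper's multilayer networks, with an illustrative learning rate and data) is shown below:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One logistic unit with an adaptive gain c: a = sigmoid(c * (w.x + b)).
# Both w and c receive gradient updates; with the gain in place, dE/dw
# scales by c and dE/dc by the net input, which is the adaptivity the
# paper exploits per node.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
yt = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy separable task

w, b, c = np.zeros(2), 0.0, 1.0
lr = 0.5
for _ in range(300):
    net = X @ w + b
    a = sigmoid(c * net)
    err = a - yt                              # dE/d(c*net), cross-entropy
    w -= lr * (X.T @ (err * c)) / len(X)
    b -= lr * (err * c).mean()
    c -= lr * (err * net).mean()              # adaptive gain update
print(f"accuracy={(np.round(a) == yt).mean():.2f}, gain={c:.2f}")
```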

  14. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented which is based on the improved ABC algorithm and uses external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that its matching rate is high enough to obtain a precise matching position.
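The screening step relies on the modified Hausdorff distance (Dubuisson and Jain, 1994), which replaces the max in the directed point-to-set distance with a mean, making it less sensitive to outliers. A short sketch with hypothetical track data:

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two point sets A and B:
    max of the two directed mean min-distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Two gravity-anomaly track segments (hypothetical 2D samples).
A = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2]])
B = A + np.array([0.05, -0.02])
print(modified_hausdorff(A, B).round(3))
```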

  15. Optimization and improvement of FOA corner cube algorithm

    NASA Astrophysics Data System (ADS)

    McClay, Wilbert A., III; Awwal, Abdul A. S.; Burkhart, Scott C.; Candy, James V.

    2004-11-01

    Alignment of laser beams based on video images is a crucial task necessary to automate operation of the 192 beams at the National Ignition Facility (NIF). The final optics assembly (FOA) is the optical element that aligns the beam into the target chamber. This work presents an algorithm for determining the position of a corner cube alignment image in the final optics assembly. The improved algorithm was compared to the existing FOA algorithm on 900 noise-simulated images. While the existing FOA algorithm based on correlation with a synthetic template has a radial standard deviation of 1 pixel, the new algorithm based on classical matched filtering (CMF) and polynomial fit to the correlation peak improves the radial standard deviation performance to less than 0.3 pixels. In the new algorithm the templates are designed from real data stored during a year of actual operation.
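The two ingredients named here, classical matched filtering and a polynomial fit to the correlation peak, can be sketched as below (synthetic spot and template; the 3-point parabolic fit is one common choice of polynomial peak fit, assumed here rather than taken from the paper):

```python
import numpy as np
from scipy.signal import correlate2d

def parabolic_offset(y_m, y_0, y_p):
    """Vertex of the parabola through three samples: sub-pixel offset."""
    denom = y_m - 2.0 * y_0 + y_p
    return 0.0 if denom == 0 else 0.5 * (y_m - y_p) / denom

def locate(image, template):
    """Classical matched filtering (correlation with the template) plus a
    parabolic fit to the correlation peak for sub-pixel position."""
    corr = correlate2d(image - image.mean(), template - template.mean(),
                       mode='same', boundary='symm')
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    dr = parabolic_offset(corr[r - 1, c], corr[r, c], corr[r + 1, c])
    dc = parabolic_offset(corr[r, c - 1], corr[r, c], corr[r, c + 1])
    return r + dr, c + dc

# Gaussian spot standing in for the corner-cube alignment image.
yy, xx = np.mgrid[0:64, 0:64]
spot = np.exp(-(((yy - 30.3) ** 2 + (xx - 41.7) ** 2) / 8.0))
template = np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / 8.0))[24:41, 24:41]
print(locate(spot, template))   # close to (30.3, 41.7)
```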

  16. Optimization and Improvement of FOA Corner Cube Algorithm

    SciTech Connect

    McClay, W A; Awwal, A S; Burkhart, S C; Candy, J V

    2004-10-01

    Alignment of laser beams based on video images is a crucial task necessary to automate operation of the 192 beams at the National Ignition Facility (NIF). The final optics assembly (FOA) is the optical element that aligns the beam into the target chamber. This work presents an algorithm for determining the position of a corner cube alignment image in the final optics assembly. The improved algorithm was compared to the existing FOA algorithm on 900 noise-simulated images. While the existing FOA algorithm based on correlation with a synthetic template has a radial standard deviation of 1 pixel, the new algorithm based on classical matched filtering (CMF) and polynomial fit to the correlation peak improves the radial standard deviation performance to less than 0.3 pixels. In the new algorithm the templates are designed from real data stored during a year of actual operation.

  17. Improved local linearization algorithm for solving the quaternion equations

    NASA Technical Reports Server (NTRS)

    Yen, K.; Cook, G.

    1980-01-01

    The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.
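For the quaternion rate equations with (piecewise) constant angular velocity, the propagation step has a closed form; this exact step is the quantity that local-linearization schemes approximate when the rates vary over a step. A sketch follows (scalar-first quaternion convention assumed):

```python
import numpy as np

def omega_matrix(w):
    """4x4 skew matrix of body rates for qdot = 0.5 * Omega(w) q,
    with quaternion ordered [q0 (scalar), q1, q2, q3]."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate(q, w, dt):
    """Exact step for constant w: since Omega^2 = -|w|^2 I, the matrix
    exponential exp(0.5*dt*Omega) has the closed form below."""
    n = np.linalg.norm(w)
    if n < 1e-12:
        return q
    half = 0.5 * n * dt
    M = np.cos(half) * np.eye(4) + (np.sin(half) / n) * omega_matrix(w)
    q = M @ q
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])
w = np.array([0.0, 0.0, np.pi])          # spin about z at pi rad/s
print(propagate(q, w, dt=1.0).round(3))  # 180-deg yaw: ~[0, 0, 0, 1]
```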

  18. An Improved Neutron Transport Algorithm for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.

    2010-01-01

    Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.

  19. Improved genetic algorithm for fast path planning of USV

    NASA Astrophysics Data System (ADS)

    Cao, Lu

    2015-12-01

Because of the complex constraints, many uncertain factors, and critical real-time demands of path planning for an unmanned surface vehicle (USV), an approach to fast path planning based on a Voronoi diagram and an improved genetic algorithm is proposed, making use of the principle of hierarchical path planning. First, the Voronoi diagram is used to generate the initial paths; then the optimal path is found using the improved genetic algorithm, which applies multiprocessor parallel computing techniques to improve the traditional genetic algorithm. Simulation results verify that the planning time is greatly reduced and that path planning based on the Voronoi diagram and the improved genetic algorithm is better suited to real-time operation.

  20. An improvement on OCOG algorithm in satellite radar altimeter

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Jiu, Dehang

The Offset Center of Gravity (OCOG) algorithm is a new tracking algorithm based on estimates of the pulse amplitude, the pulse width, and the true center of area of the pulse. This algorithm is clearly robust enough to let the altimeter keep tracking many kinds of surfaces. Analysis of its performance shows that the algorithm performs satisfactorily in high-SNR environments but fails in low-SNR environments. The cause of this degradation is studied, and it is shown that, for the Brown return model and the sea-ice return model, the performance of the OCOG algorithm in low-SNR environments can be improved by using a noise gate.
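The OCOG estimates are simple moments of the squared waveform, and the noise-gate improvement amounts to zeroing bins below a noise threshold before computing them. A sketch with a synthetic waveform (the gate-threshold convention is an assumption):

```python
import numpy as np

def ocog(waveform, noise_gate=None):
    """Offset Center Of Gravity estimator for an altimeter waveform.
    Squared samples are used so low-power noise bins contribute little;
    the optional noise gate (the suggested low-SNR improvement) zeroes
    bins below a noise threshold first."""
    p = waveform.astype(float).copy()
    if noise_gate is not None:
        p[p < noise_gate] = 0.0
    p2, p4 = (p ** 2).sum(), (p ** 4).sum()
    cog = (np.arange(len(p)) * p ** 2).sum() / p2   # tracking point (bins)
    width = p2 ** 2 / p4                            # pulse width (bins)
    amplitude = np.sqrt(p4 / p2)
    return cog, width, amplitude

# Rectangular pulse in noise: the gate makes the estimates less biased.
rng = np.random.default_rng(3)
wf = rng.exponential(0.2, 128)
wf[60:80] += 4.0
noise_floor = wf[:20].mean() + 3 * wf[:20].std()
print(ocog(wf))                         # biased by the noise bins
print(ocog(wf, noise_gate=noise_floor))
```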

  1. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to remedy the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in the Visual Studio 2010 environment, and experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
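A rough OpenCV sketch in the spirit of this approach is shown below: bilateral smoothing plus Otsu-derived dual thresholds. The paper's compensation function and 4-direction gradient templates are not reproduced, and the 0.5 low/high threshold ratio is an assumption:

```python
import cv2
import numpy as np

def improved_canny(gray):
    """Canny variant in the spirit of the paper: edge-preserving bilateral
    filtering instead of Gaussian smoothing, and Otsu-derived dual
    thresholds instead of hand-tuned ones."""
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=50, sigmaSpace=50)
    otsu, _ = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(smoothed, 0.5 * otsu, otsu)

# Synthetic test image: a bright square on a dark, noisy background.
img = np.zeros((128, 128), np.uint8)
img[32:96, 32:96] = 200
noise = np.random.default_rng(0).integers(0, 20, img.shape, dtype=np.uint8)
edges = improved_canny(cv2.add(img, noise))
print(int(edges.sum() / 255), "edge pixels")
```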

  2. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    PubMed

    Peng, Bo; Li, Lei

    2015-04-01

Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a wireless, decentralized network comprised of nodes which autonomously set up the network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms have hardware requirements and are thus expensive to implement in practice; range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free localization is pursued as a cost-effective alternative to the more expensive range-based approaches. However, these techniques usually have higher localization error than range-based algorithms. DV-Hop is a typical range-free localization algorithm using hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves localization accuracy compared with previous algorithms.
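The baseline DV-Hop pipeline that the genetic algorithm then refines can be sketched in three steps: hop counting, per-hop distance estimation from anchor pairs, and linearized least squares. The GA refinement stage is not shown, and the network below is a hypothetical toy:

```python
import numpy as np
from itertools import combinations

def bfs_hops(adj, src, n):
    """Minimum hop counts from src via BFS over the connectivity graph."""
    hops = np.full(n, np.inf); hops[src] = 0
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in np.flatnonzero(adj[u]):
                if hops[v] == np.inf:
                    hops[v] = hops[u] + 1
                    nxt.append(v)
        frontier = nxt
    return hops

def dv_hop(positions, anchors, radio_range, unknown):
    """Classic DV-Hop: hop counts to anchors, average hop distance from
    anchor-to-anchor ground truth, then linearized least squares. The
    paper refines this last stage with a genetic algorithm (not shown)."""
    n = len(positions)
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=2)
    adj = (d < radio_range) & ~np.eye(n, dtype=bool)
    hops = np.array([bfs_hops(adj, a, n) for a in anchors])
    pairs = list(combinations(range(len(anchors)), 2))
    hop_size = np.mean([d[anchors[i], anchors[j]] / hops[i][anchors[j]]
                        for i, j in pairs])   # average distance per hop
    est = hops[:, unknown] * hop_size         # estimated anchor ranges
    P = positions[anchors]
    A = 2 * (P[:-1] - P[-1])
    b = (P[:-1] ** 2 - P[-1] ** 2).sum(axis=1) - (est[:-1] ** 2 - est[-1] ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(4)
positions = rng.uniform(0, 10, size=(40, 2))
print(dv_hop(positions, anchors=[0, 1, 2, 3], radio_range=3.0, unknown=20))
print(positions[20])   # ground truth, for comparison
```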

  3. An improved HMM/SVM dynamic hand gesture recognition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Yao, Yuanyuan; Luo, Yuan

    2015-10-01

To improve the recognition rate and stability of dynamic hand gesture recognition, and to address the low accuracy of the classical HMM algorithm in training the B parameter, this paper proposes an improved HMM/SVM dynamic gesture recognition algorithm. In calculating the B parameter of the HMM model, the SVM algorithm, which has strong classification ability, is introduced. A sigmoid function converts the state output of the SVM into a probability, which is treated as the observation-state transition probability of the HMM model. This optimizes the B parameter of the HMM model and improves the recognition rate of the system, while also enhancing the accuracy and real-time performance of the human-computer interaction. Experiments show that the algorithm is strongly robust under complex backgrounds and varying illumination. The average recognition rate increased from 86.4% to 97.55%.
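The SVM-score-to-probability conversion is a logistic (Platt-style) mapping; a minimal sketch follows. The sigmoid parameters here are illustrative assumptions; in practice they are fit to held-out data:

```python
import numpy as np

def platt_sigmoid(scores, A=-1.5, B=0.0):
    """Map raw SVM decision values to probabilities, P = 1/(1+exp(A*s+B)).
    A and B are fit on held-out data in Platt scaling; they are fixed here
    for illustration. The resulting probabilities stand in for the HMM
    emission (B-parameter) probabilities, as in the HMM/SVM combination."""
    return 1.0 / (1.0 + np.exp(A * scores + B))

# Decision values for one gesture class along an observation sequence.
scores = np.array([-2.0, -0.5, 0.3, 1.8, 2.5])
print(platt_sigmoid(scores).round(3))  # larger margins -> probs nearer 1
```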

  4. Improvements to previous algorithms to predict gene structure and isoform concentrations using Affymetrix Exon arrays

    PubMed Central

    2010-01-01

Background: Exon arrays provide a way to measure the expression of different isoforms of genes in an organism. Most procedures for dealing with these arrays focus on gene expression or on exon expression. Although the only biological analytes that can properly be assigned a concentration are transcripts, there are very few algorithms that focus on them; the reason is that previously developed summarization methods do not work well if applied to transcripts. In addition, gene structure prediction, i.e., the correspondence between probes and novel isoforms, is a field which is still unexplored. Results: We have modified and adapted a previous algorithm to take advantage of the special characteristics of the Affymetrix exon arrays. The structure and concentration of transcripts, some of them possibly unknown, in microarray experiments were predicted using this algorithm. Simulations showed that the suggested modifications improved both the specificity (SP) and sensitivity (ST) of the predictions. The algorithm was also applied to different real datasets, showing its effectiveness and concordance with PCR-validated results. Conclusions: The proposed algorithm shows a substantial improvement in performance over the previous version, mainly due to the exploitation of the redundancy of the Affymetrix exon arrays. An R package of SPACE with the updated algorithms has been developed and is freely available. PMID:21110835

  5. Unsteady transonic algorithm improvements for realistic aircraft applications

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1987-01-01

    Improvements to a time-accurate approximate factorization (AF) algorithm were implemented for steady and unsteady transonic analysis of realistic aircraft configurations. These algorithm improvements were made to the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code developed at the Langley Research Center. The code permits the aeroelastic analysis of complete aircraft in the flutter critical transonic speed range. The AF algorithm of the CAP-TSD code solves the unsteady transonic small-disturbance equation. The algorithm improvements include: an Engquist-Osher (E-O) type-dependent switch to more accurately and efficiently treat regions of supersonic flow; extension of the E-O switch for second-order spatial accuracy in these regions; nonreflecting far field boundary conditions for more accurate unsteady applications; and several modifications which accelerate convergence to steady-state. Calculations are presented for several configurations including the General Dynamics one-ninth scale F-16C aircraft model to evaluate the algorithm modifications. The modifications have significantly improved the stability of the AF algorithm and hence the reliability of the CAP-TSD code in general.

  6. An improved harmony search algorithm with dynamically varying bandwidth

    NASA Astrophysics Data System (ADS)

    Kalivarapu, J.; Jain, S.; Bag, S.

    2016-07-01

    The present work demonstrates a new variant of the harmony search (HS) algorithm where bandwidth (BW) is one of the deciding factors for the time complexity and the performance of the algorithm. The BW needs to have both explorative and exploitative characteristics. The idea is to use a large BW to search in the full domain and to adjust the BW dynamically closer to the optimal solution. After trying a series of approaches, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.
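
    A minimal sketch of harmony search with a bandwidth that decays smoothly from explorative to exploitative, in the spirit of the low-pass-filter idea; the decay rule, parameter values, and one-dimensional setting are illustrative assumptions rather than the exact SIHS update.

```python
import random

def harmony_search(f, lower, upper, iters=2000, hms=20, hmcr=0.9, par=0.3):
    """Minimize f on [lower, upper] with HS; the bandwidth starts large
    (explorative) and shrinks smoothly (exploitative)."""
    memory = [random.uniform(lower, upper) for _ in range(hms)]
    bw = (upper - lower) / 2.0           # explorative initial bandwidth
    alpha = 0.995                        # smoothing factor (assumed)
    for _ in range(iters):
        if random.random() < hmcr:
            x = random.choice(memory)    # memory consideration
            if random.random() < par:
                x += random.uniform(-bw, bw)   # pitch adjustment
        else:
            x = random.uniform(lower, upper)   # random selection
        x = min(max(x, lower), upper)
        worst = max(memory, key=f)
        if f(x) < f(worst):
            memory[memory.index(worst)] = x
        bw = alpha * bw + (1 - alpha) * 1e-6   # low-pass-style shrink
    return min(memory, key=f)

print(harmony_search(lambda x: (x - 3.0) ** 2, -10.0, 10.0))  # ~3.0
```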

  7. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Test case generation with ant colony optimization (ACO) is a very popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search phase, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO), which combines the above three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations.
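
    A minimal sketch of a global pheromone update with an explicit volatilization (evaporation) coefficient, the quantity that IPVACO and IGPACO tune; the reinforcement rule shown is a generic ACO form, not the paper's exact strategies.

```python
import numpy as np

def update_pheromone(tau, paths, qualities, rho=0.1, q=1.0):
    """Global pheromone update with volatilization coefficient rho.

    tau       : (n, n) pheromone matrix over the search graph
    paths     : list of paths, each a list of node indices taken by one ant
    qualities : coverage achieved by each ant's generated test case (0..1)
    """
    tau *= (1.0 - rho)                      # evaporation counters stagnation
    for path, quality in zip(paths, qualities):
        for a, b in zip(path, path[1:]):
            tau[a, b] += q * quality        # reinforce high-coverage paths
    return tau

tau = np.ones((4, 4))
tau = update_pheromone(tau, paths=[[0, 2, 3]], qualities=[0.8])
print(tau)
```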

  8. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Test case generation with ant colony optimization (ACO) is a very popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search phase, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO), which combines the above three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391

  9. An improved robust ADMM algorithm for quantum state tomography

    NASA Astrophysics Data System (ADS)

    Li, Kezhi; Zhang, Hui; Kuang, Sen; Meng, Fangfang; Cong, Shuang

    2016-06-01

    In this paper, an improved adaptive-weights alternating direction method of multipliers (ADMM) algorithm is developed to implement the optimization scheme for recovering quantum states that are nearly pure. The proposed approach is superior to many existing methods because it exploits the low-rank property of density matrices, and it can deal with unexpected sparse outliers as well. Numerical experiments are provided to verify our statements by comparing the results to three different optimization algorithms, using both adaptive and fixed weights in the algorithm, in cases with and without external noise. The results indicate that the improved algorithm has better performance in both estimation accuracy and robustness to external noise. Further simulation results show that the successful recovery rate increases when more qubits are estimated, which is consistent with compressive sensing theory and makes the proposed approach more promising.
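
    The low-rank prior on density matrices is typically enforced inside ADMM through a nuclear-norm proximal step, i.e. singular value thresholding; a generic sketch follows (this is the standard operator, not the paper's adaptive-weight scheme).

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of tau * nuclear norm: shrink singular values.
    Inside an ADMM iteration this pushes the estimate toward the low-rank
    structure that density matrices of nearly pure states possess."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))
M = A @ A.T + 0.01 * rng.standard_normal((4, 4))   # nearly rank-2 matrix
print(np.linalg.matrix_rank(singular_value_threshold(M, tau=0.5)))
```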

  10. Improved Inversion Algorithms for Near Surface Characterization

    NASA Astrophysics Data System (ADS)

    Astaneh, Ali Vaziri; Guddati, Murthy N.

    2016-05-01

    Near-surface geophysical imaging is often performed by generating surface waves, and estimating the subsurface properties through inversion, i.e. iteratively matching experimentally observed dispersion curves with predicted curves from a layered half-space model of the subsurface. Key to the effectiveness of inversion is the efficiency and accuracy of computing the dispersion curves and their derivatives. This paper presents improved methodologies for both dispersion curve and derivative computation. First, it is shown that the dispersion curves can be computed more efficiently by combining an unconventional complex-length finite element method (CFEM) to model the finite depth layers, with perfectly matched discrete layers (PMDL) to model the unbounded half-space. Second, based on analytical derivatives for theoretical dispersion curves, an approximate derivative is derived for so-called effective dispersion curve for realistic geophysical surface response data. The new derivative computation has a smoothing effect on the computation of derivatives, in comparison with traditional finite difference (FD) approach, and results in faster convergence. In addition, while the computational cost of FD differentiation is proportional to the number of model parameters, the new differentiation formula has a computational cost that is almost independent of the number of model parameters. At the end, as confirmed by synthetic and real-life imaging examples, the combination of CFEM+PMDL for dispersion calculation and the new differentiation formula results in more accurate estimates of the subsurface characteristics than the traditional methods, at a small fraction of computational effort.

  11. Visualizing and improving the robustness of phase retrieval algorithms

    SciTech Connect

    Tripathi, Ashish; Leyffer, Sven; Munson, Todd; Wild, Stefan M.

    2015-06-01

    Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
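
    For reference, a textbook rendering of one Fienup HIO iteration with a support-plus-nonnegativity constraint; the generalizations studied in the record modify this basic update, and the feedback parameter value is illustrative.

```python
import numpy as np

def hio_step(g, support, measured_mag, beta=0.9):
    """One hybrid input-output (HIO) iteration for phase retrieval.

    g            : current real-space estimate (2-D array)
    support      : boolean mask of where the object may be nonzero
    measured_mag : measured Fourier magnitudes (same shape as g)
    """
    G = np.fft.fft2(g)
    G = measured_mag * np.exp(1j * np.angle(G))  # impose measured magnitudes
    g_prime = np.real(np.fft.ifft2(G))
    satisfied = support & (g_prime >= 0)         # object-domain constraints
    # keep g' where constraints hold; elsewhere, feedback update
    return np.where(satisfied, g_prime, g - beta * g_prime)
```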

  12. An improved Physarum polycephalum algorithm for the shortest path problem.

    PubMed

    Zhang, Xiaoge; Wang, Qing; Adamatzky, Andrew; Chan, Felix T S; Mahadevan, Sankaran; Deng, Yong

    2014-01-01

    The shortest path problem is among the classical problems of computer science. It has been solved by hundreds of algorithms, on silicon computing architectures and on novel-substrate, unconventional computing devices. The acellular slime mould P. polycephalum originally became famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms were designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches existing Physarum-inspired approaches well, yet outperforms them in the number of iterations executed and the total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and Dijkstra's algorithm. PMID:24982960

  13. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  14. Improved Ant Colony Clustering Algorithm and Its Performance Study

    PubMed Central

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  15. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    PubMed Central

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-01

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification method (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of the virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix, and thus the DOA estimation capability degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, just like the ideal matrix, whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and the simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields an excellent performance in both spatially white and spatially colored noise environments. PMID:23296331
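
    Toeplitz approximation of an estimated matrix is commonly performed by averaging along each diagonal; a minimal sketch follows (in the paper this is applied to the reduced-dimension FOC matrix, which would be Toeplitz in the ideal, infinite-snapshot case).

```python
import numpy as np

def toeplitz_approximate(R):
    """Project a square matrix onto the set of Toeplitz matrices by
    replacing each diagonal with its mean value."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        d = np.diagonal(R, offset=k)
        T += np.diag(np.full(n - abs(k), d.mean()), k=k)
    return T

R = np.array([[2.0, 1.1, 0.2], [0.9, 2.1, 1.0], [0.1, 1.0, 1.9]])
print(toeplitz_approximate(R))  # each diagonal replaced by its mean
```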

  16. An Improved Physarum polycephalum Algorithm for the Shortest Path Problem

    PubMed Central

    Wang, Qing; Adamatzky, Andrew; Chan, Felix T. S.; Mahadevan, Sankaran

    2014-01-01

    The shortest path problem is among the classical problems of computer science. It has been solved by hundreds of algorithms, on silicon computing architectures and on novel-substrate, unconventional computing devices. The acellular slime mould P. polycephalum originally became famous as a biological computing substrate due to its alleged ability to approximate the shortest path from its inoculation site to a source of nutrients. Several algorithms were designed based on properties of the slime mould. Many of the Physarum-inspired algorithms suffer from a low convergence speed. To accelerate the search for a solution and reduce the number of iterations, we combined an original model of a Physarum-inspired path solver with a new parameter, called energy. We undertook a series of computational experiments on approximating shortest paths in networks with different topologies, with the number of nodes varying from 15 to 2000. We found that the improved Physarum algorithm matches existing Physarum-inspired approaches well, yet outperforms them in the number of iterations executed and the total running time. We also compare our algorithm with other existing algorithms, including the ant colony optimization algorithm and Dijkstra's algorithm. PMID:24982960

  17. Adaptive improved natural gradient algorithm for blind source separation.

    PubMed

    Liu, Jian-Qiang; Feng, Da-Zheng; Zhang, Wei-Wei

    2009-03-01

    We propose an adaptive improved natural gradient algorithm for blind separation of independent sources. First, inspired by the well-known backpropagation algorithm, we incorporate a momentum term into the natural gradient learning process to accelerate the convergence rate and improve the stability. Then an estimation function for the adaptation of the separation model is obtained to adaptively control a step-size parameter and a momentum factor. The proposed natural gradient algorithm with variable step-size parameter and variable momentum factor is therefore particularly well suited to blind source separation in a time-varying environment, such as an abruptly changing mixing matrix or signal power. The expected improvement in the convergence speed, stability, and tracking ability of the proposed algorithm is demonstrated by extensive simulation results in both time-invariant and time-varying environments. The ability of the proposed algorithm to separate extremely weak or badly scaled sources is also verified. In addition, simulation results show that the proposed algorithm is suitable for separating mixtures of many sources (e.g., the number of sources is 10) in the complete case.
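
    A minimal sketch of one natural-gradient separation update with a momentum term; the paper adapts the step size and momentum factor online, whereas they are held fixed here, and the tanh score function assumes super-Gaussian sources.

```python
import numpy as np

def natural_gradient_step(W, x, dW_prev, eta=0.01, mu=0.5):
    """One natural-gradient BSS update with momentum.

    W       : current separation matrix
    x       : one mixed sample, shape (n, 1)
    dW_prev : previous update (momentum memory)
    """
    y = W @ x
    phi = np.tanh(y)                               # score nonlinearity
    grad = (np.eye(W.shape[0]) - phi @ y.T) @ W    # natural gradient direction
    dW = mu * dW_prev + eta * grad                 # momentum accelerates convergence
    return W + dW, dW

rng = np.random.default_rng(0)
W, dW = np.eye(2), np.zeros((2, 2))
for _ in range(1000):                              # toy adaptation loop
    W, dW = natural_gradient_step(W, rng.standard_normal((2, 1)), dW)
```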

  18. Kidney segmentation in CT sequences using SKFCM and improved GrowCut algorithm

    PubMed Central

    2015-01-01

    Background Organ segmentation is an important step in computer-aided diagnosis and pathology detection. Accurate kidney segmentation in abdominal computed tomography (CT) sequences is an essential and crucial task for surgical planning and navigation in kidney tumor ablation. However, kidney segmentation in CT is a substantially challenging task because the intensity values of kidney parenchyma are similar to those of adjacent structures. Results In this paper, a coarse-to-fine method was applied to segment the kidney from CT images, which consists of two stages: rough segmentation and refined segmentation. The rough segmentation is based on a kernel fuzzy C-means algorithm with spatial information (SKFCM) and the refined segmentation is implemented with an improved GrowCut (IGC) algorithm. The SKFCM algorithm introduces a kernel function and a spatial constraint into the fuzzy c-means clustering (FCM) algorithm. The IGC algorithm makes good use of the continuity of CT sequences in space, which allows it to automatically generate the seed labels and improves the efficiency of segmentation. Experimental results on a whole dataset of abdominal CT images have shown that the proposed method is accurate and efficient. The method provides a sensitivity of 95.46% with a specificity of 99.82% and performs better than other related methods. Conclusions Our method achieves high accuracy in kidney segmentation and considerably reduces the time and labor required for contour delineation. In addition, the method can be extended to 3D segmentation directly without modification. PMID:26356850

  19. An improved EZBC algorithm based on block bit length

    NASA Astrophysics Data System (ADS)

    Wang, Renlong; Ruan, Shuangchen; Liu, Chengxiang; Wang, Wenda; Zhang, Li

    2011-12-01

    The Embedded ZeroBlock Coding and context modeling (EZBC) algorithm has high compression performance. However, it consumes large amounts of memory because an amplitude quadtree of wavelet coefficients and two other linked lists are built during the encoding process. This is one of the big challenges for using EZBC in real-time or hardware applications. An improved EZBC algorithm based on the bit length of coefficients is put forward in this article. It uses a Bit Length Quadtree to complete the coding process and to output the context for the arithmetic coder. It can achieve the same compression performance as EZBC and save more than 75% of the memory required in the encoding process. As the Bit Length Quadtree can quickly locate the wavelet coefficients and judge their significance, the improved algorithm can dramatically accelerate the encoding speed. These improvements are also beneficial for hardware implementations. PACS: 42.30.Va, 42.30.Wb

  20. Tuning target selection algorithms to improve galaxy redshift estimates

    NASA Astrophysics Data System (ADS)

    Hoyle, Ben; Paech, Kerstin; Rau, Markus Michael; Seitz, Stella; Weller, Jochen

    2016-06-01

    We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare seven different ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the ML methods to accurately estimate redshifts for, using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30 per cent of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time than the SDSS or random target selection algorithms. The reduction in the required follow-up resources could allow for a change in follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data.

  1. An improved back projection algorithm of ultrasound tomography

    SciTech Connect

    Xiaozhen, Chen; Mingxu, Su; Xiaoshu, Cai

    2014-04-11

    The binary-logic back projection algorithm is improved in this work to develop a fast ultrasound tomography system with better image reconstruction. The new algorithm is characterized by an extra logical value '2' and dual-threshold processing of the collected raw data. To compare with the original algorithm, a numerical simulation was first conducted and verified against COMSOL simulations, and then an ultrasonic tomography system was established to perform experiments with one, two and three cylindrical objects. The object images are reconstructed through inversion of the signal matrix acquired by the transducer array after preconditioning, and the corresponding spatial imaging errors clearly indicate that the improved back projection method achieves a better inversion effect.

  2. Note: An iterative algorithm to improve colloidal particle locating.

    PubMed

    Jensen, K E; Nakamura, N

    2016-06-01

    Confocal microscopy of colloids combined with digital image processing has become a powerful tool in soft matter physics and materials science. Together, these techniques enable locating and tracking of more than half a million individual colloidal particles at once. However, despite improvements in locating algorithms that improve position accuracy, it remains challenging to locate all particles in a densely packed, three dimensional colloid without erroneously identifying the same particle more than once. We present a simple iterative algorithm that mitigates both the "missed particle" and "double counting" problems while simultaneously reducing sensitivity to the specific choice of input parameters. It is also useful for analyzing images with spatially varying brightness in which a single set of input parameters is not appropriate for all particles. The algorithm is easy to implement and compatible with existing particle locating software.
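
    The record gives no implementation details, but the locate-erase-repeat idea can be sketched generically as below; `locate` and `render` are hypothetical callables standing in for a particle-locating routine and a synthetic particle-image renderer.

```python
import numpy as np

def iterative_locate(image, locate, render, max_passes=5):
    """Iteratively locate particles: find candidates, erase them from the
    image, and re-run locating on the residual so dim or overlapping
    particles missed earlier are found without double counting.

    locate(image) -> (n, 3) array of candidate particle positions
    render(shape, positions) -> synthetic image of those particles
    """
    residual = image.astype(float).copy()
    found = []
    for _ in range(max_passes):
        new = locate(residual)
        if len(new) == 0:
            break                                  # nothing left to find
        found.append(new)
        residual -= render(image.shape, new)       # erase located particles
        np.clip(residual, 0, None, out=residual)
    return np.vstack(found) if found else np.empty((0, 3))
```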

  3. Substantial improvement of perovskite solar cells stability by pinhole-free hole transport layer with doping engineering

    PubMed Central

    Jung, Min-Cherl; Raga, Sonia R.; Ono, Luis K.; Qi, Yabing

    2015-01-01

    We fabricated perovskite solar cells using a triple-layer of n-type doped, intrinsic, and p-type doped 2,2′,7,7′-tetrakis(N,N′-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) (n-i-p) as hole transport layer (HTL) by vacuum evaporation. The doping concentration for n-type doped spiro-OMeTAD was optimized to adjust the highest occupied molecular orbital of spiro-OMeTAD to match the valence band maximum of perovskite for efficient hole extraction while maintaining a high open circuit voltage. Time-dependent solar cell performance measurements revealed significantly improved air stability for perovskite solar cells with the n-i-p structured spiro-OMeTAD HTL showing sustained efficiencies even after 840 h of air exposure. PMID:25985417

  4. Improved algorithms for approximate string matching (extended abstract)

    PubMed Central

    Papamichail, Dimitris; Papamichail, Georgios

    2009-01-01

    Background The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string or calculation of the longest common subsequence that two strings share. Results We designed an output sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s - |n - m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings with both theoretical and practical implications. Source code of our algorithm is available online. PMID:19208109
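
    The flavor of such output-sensitive methods can be seen in Ukkonen's classical band-doubling scheme, whose cost also scales with the edit distance s rather than with the full quadratic table; this simplified sketch is a well-known relative of, not the same as, the authors' algorithm.

```python
def edit_distance_banded(a, b):
    """Output-sensitive edit distance: run a banded DP with threshold t,
    doubling t until the true distance fits inside the band."""
    if len(a) > len(b):
        a, b = b, a
    t = max(1, len(b) - len(a))
    while True:
        d = banded_dp(a, b, t)
        if d is not None and d <= t:
            return d
        t *= 2

def banded_dp(a, b, t):
    """Edit-distance DP restricted to cells with |i - j| <= t."""
    n, m = len(a), len(b)
    INF = float("inf")
    prev = [j if j <= t else INF for j in range(m + 1)]
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        lo, hi = max(0, i - t), min(m, i + t)
        if lo == 0:
            cur[0] = i
        for j in range(max(1, lo), hi + 1):
            cur[j] = min(prev[j] + 1,                           # deletion
                         cur[j - 1] + 1,                        # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[m] if prev[m] != INF else None

print(edit_distance_banded("kitten", "sitting"))  # 3
```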

  5. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    PubMed

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p<0.05) by the two-step algorithm than by the one-step for 63% of all possible operating points. While operating at a suitable sensitivity level such as 90.8% (79/87) or 88.5% (77/87), the false positive rate was reduced by 24.4% (95% confidence intervals 17.9-31.0%) or 45.8% (95% confidence intervals 40.1-51.0%) respectively. We demonstrated that, with a proper experimental design, the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
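
    A minimal sketch of extracting Pareto-optimal operating points from (sensitivity, false-positive-rate) pairs, the construct underlying the comparison above; the numbers are hypothetical.

```python
def pareto_front(points):
    """Return the Pareto-optimal operating points of a detector, where
    each point is (sensitivity, false_positive_rate): keep a point only
    if no other point has higher-or-equal sensitivity AND lower-or-equal
    false positive rate."""
    front = []
    for s, fp in points:
        dominated = any(s2 >= s and fp2 <= fp and (s2, fp2) != (s, fp)
                        for s2, fp2 in points)
        if not dominated:
            front.append((s, fp))
    return sorted(front)

ops = [(0.85, 4.0), (0.88, 5.5), (0.91, 5.0), (0.91, 7.0), (0.95, 9.0)]
print(pareto_front(ops))  # (0.88, 5.5) and (0.91, 7.0) are dominated
```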

  6. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In the case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.

  7. Masseter segmentation using an improved watershed algorithm with unsupervised classification.

    PubMed

    Ng, H P; Ong, S H; Foong, K W C; Goh, P S; Nowinski, W L

    2008-02-01

    The watershed algorithm always produces a complete division of the image. However, it is susceptible to over-segmentation and sensitive to false edges. In medical images this leads to unfavorable representations of the anatomy. We address these drawbacks by introducing automated thresholding and post-segmentation merging. The automated thresholding step is based on the histogram of the gradient magnitude map, while post-segmentation merging is based on a criterion which measures the similarity in intensity values between two neighboring partitions. Our improved watershed algorithm is able to merge more than 90% of the initial partitions, which indicates that a large amount of over-segmentation has been reduced. To further improve the segmentation results, we make use of K-means clustering to provide an initial coarse segmentation of the highly textured image before the improved watershed algorithm is applied to it. When applied to the segmentation of the masseter from 60 magnetic resonance images of 10 subjects, the proposed algorithm achieved an overlap index (kappa) of 90.6%, and was able to merge 98% of the initial partitions on average. The segmentation results are comparable to those obtained using the gradient vector flow snake. PMID:17950265

  8. Improving the trust algorithm of information in semantic web

    NASA Astrophysics Data System (ADS)

    Wan, Zong-bao; Min, Jiang

    2012-01-01

    With the rapid development of computer networks, and especially with the introduction of the Semantic Web, trust computation in networks has become an important research topic in computer systems theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, semantic trust is defined in two parts: the content trust of information and the trust between nodes. By calculating the content trust of information and the trust between nodes, the final credibility value of information in the Semantic Web is obtained. We improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can compute the trust of information more accurately.

  9. Improving the trust algorithm of information in semantic web

    NASA Astrophysics Data System (ADS)

    Wan, Zong-Bao; Min, Jiang

    2011-12-01

    With the rapid development of computer networks, and especially with the introduction of the Semantic Web, trust computation in networks has become an important research topic in computer systems theory. In this paper, based on the information properties of the Semantic Web and the interactions between nodes, semantic trust is defined in two parts: the content trust of information and the trust between nodes. By calculating the content trust of information and the trust between nodes, the final credibility value of information in the Semantic Web is obtained. We improve the computation algorithm for node trust. Finally, simulations and analyses show that the improved algorithm can compute the trust of information more accurately.

  10. An Improved Population Migration Algorithm Introducing the Local Search Mechanism of the Leap-Frog Algorithm and Crossover Operator

    PubMed Central

    Zhang, Yanqing; Liu, Xueying

    2013-01-01

    The population migration algorithm (PMA) is an intelligent algorithm that simulates population migration. Given the premature convergence and low precision of PMA, this paper introduces the local search mechanism of the leap-frog algorithm and a crossover operator to improve the PMA search speed and global convergence properties. Typical test functions verify the performance of the improved algorithm. Compared with the original population migration algorithm and other intelligent algorithms, the results show that the convergence rate of the improved PMA is very high, and its convergence is proved. PMID:23460807

  11. An improved particle swarm optimization algorithm for reliability problems.

    PubMed

    Wu, Peifeng; Gao, Liqun; Zou, Dexuan; Li, Steven

    2011-01-01

    An improved particle swarm optimization (IPSO) algorithm is proposed to solve reliability problems in this paper. The IPSO designs two position updating strategies: in the early iterations, each particle flies and searches according to its own best experience with a large probability; in the late iterations, each particle flies and searches according to the flying experience of the most successful particle with a large probability. In addition, the IPSO introduces a mutation operator after position updating, which not only prevents the IPSO from being trapped in a local optimum, but also enhances its ability to explore the search space. Experimental results show that the proposed algorithm has stronger convergence and stability than four other particle swarm optimization algorithms on reliability problems, and that the solutions obtained by the IPSO are better than the previously reported best-known solutions in the recent literature.
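
    A minimal one-dimensional rendering of the two-strategy update with a post-update mutation; the switch point, coefficients, and mutation rate are illustrative assumptions, not the authors' exact equations.

```python
import random

def ipso_update(particle, velocity, pbest, gbest, progress, w=0.7, c=2.0, pm=0.05):
    """One IPSO position update for a single particle (1-D for brevity).

    progress in [0, 1]: early iterations favor the particle's own best,
    late iterations favor the swarm's most successful particle. A mutation
    after the update helps escape local optima.
    """
    attractor = pbest if progress < 0.5 else gbest
    velocity = w * velocity + c * random.random() * (attractor - particle)
    particle += velocity
    if random.random() < pm:                  # mutation operator
        particle += random.gauss(0.0, 1.0)
    return particle, velocity

x, v = ipso_update(1.0, 0.1, pbest=0.5, gbest=2.0, progress=0.8)
```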

  12. Improving the MODIS Global Snow-Mapping Algorithm

    NASA Technical Reports Server (NTRS)

    Klein, Andrew G.; Hall, Dorothy K.; Riggs, George A.

    1997-01-01

    An algorithm (Snowmap) is under development to produce global snow maps at 500 meter resolution on a daily basis using data from the NASA MODIS instrument. MODIS, the Moderate Resolution Imaging Spectroradiometer, will be launched as part of the first Earth Observing System (EOS) platform in 1998. Snowmap is a fully automated, computationally frugal algorithm that will be ready to implement at launch. Forests represent a major limitation to the global mapping of snow cover, as a forest canopy both obscures and shadows the snow underneath. Landsat Thematic Mapper (TM) and MODIS Airborne Simulator (MAS) data are used to investigate the changes in reflectance that occur as a forest stand becomes snow covered and to propose changes to the Snowmap algorithm that will improve snow classification accuracy in forested areas.

  13. Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm

    NASA Technical Reports Server (NTRS)

    Riggs, George; Hall, Dorothy K.

    2012-01-01

    The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).

  14. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2-4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
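
    A minimal rendering of the baseline IST iteration that the DG-inspired features extend; the dictionary and signal here are synthetic, and the lateral-inhibition modifications from the record are not shown.

```python
import numpy as np

def ist(A, y, lam=0.1, iters=200):
    """Iterative Soft Thresholding: find a sparse x with y ~ A x
    (minimizes ||y - Ax||^2 + lam * ||x||_1)."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))         # synthetic dictionary
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]     # sparse "memory" representation
xhat = ist(A, A @ x_true)
print(np.nonzero(np.abs(xhat) > 0.1)[0])   # approximately recovers the support
```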

  15. Improved algorithm for solving nonlinear parabolized stability equations

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Zhang, Cun-bo; Liu, Jian-xin; Luo, Ji-sheng

    2016-08-01

    Due to its high computational efficiency and ability to consider nonparallel and nonlinear effects, nonlinear parabolized stability equations (NPSE) approach has been widely used to study the stability and transition mechanisms. However, it often diverges in hypersonic boundary layers when the amplitude of disturbance reaches a certain level. In this study, an improved algorithm for solving NPSE is developed. In this algorithm, the mean flow distortion is included into the linear operator instead of into the nonlinear forcing terms in NPSE. An under-relaxation factor for computing the nonlinear terms is introduced during the iteration process to guarantee the robustness of the algorithm. Two case studies, the nonlinear development of stationary crossflow vortices and the fundamental resonance of the second mode disturbance in hypersonic boundary layers, are presented to validate the proposed algorithm for NPSE. Results from direct numerical simulation (DNS) are regarded as the baseline for comparison. Good agreement can be found between the proposed algorithm and DNS, which indicates the great potential of the proposed method on studying the crossflow and streamwise instability in hypersonic boundary layers. Project supported by the National Natural Science Foundation of China (Grant Nos. 11332007 and 11402167).
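
    The under-relaxation idea can be stated in one line; the factor value below is an assumption for illustration, not the paper's tuned setting.

```python
def relax(old, new, r=0.3):
    """Under-relaxed update of the NPSE nonlinear forcing terms between
    iterations: blend the freshly computed value with the previous one to
    keep the iteration robust (r = 0.3 is an assumed factor)."""
    return (1.0 - r) * old + r * new
```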

  16. Improved algorithm for solving nonlinear parabolized stability equations

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Zhang, Cun-bo; Liu, Jian-xin; Luo, Ji-sheng

    2016-08-01

    Due to its high computational efficiency and ability to consider nonparallel and nonlinear effects, nonlinear parabolized stability equations (NPSE) approach has been widely used to study the stability and transition mechanisms. However, it often diverges in hypersonic boundary layers when the amplitude of disturbance reaches a certain level. In this study, an improved algorithm for solving NPSE is developed. In this algorithm, the mean flow distortion is included into the linear operator instead of into the nonlinear forcing terms in NPSE. An under-relaxation factor for computing the nonlinear terms is introduced during the iteration process to guarantee the robustness of the algorithm. Two case studies, the nonlinear development of stationary crossflow vortices and the fundamental resonance of the second mode disturbance in hypersonic boundary layers, are presented to validate the proposed algorithm for NPSE. Results from direct numerical simulation (DNS) are regarded as the baseline for comparison. Good agreement can be found between the proposed algorithm and DNS, which indicates the great potential of the proposed method on studying the crossflow and streamwise instability in hypersonic boundary layers. Project supported by the National Natural Science Foundation of China (Grant Nos. 11332007 and 11402167).

  17. Improved Gravitation Field Algorithm and Its Application in Hierarchical Clustering

    PubMed Central

    Zheng, Ming; Sun, Ying; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Background The gravitation field algorithm (GFA) is a new optimization algorithm based on an imitation of natural phenomena. GFA can do well both in searching for a global minimum and for multiple minima in computational biology. But GFA needs to be improved to increase its efficiency, and modified to apply to some discrete data problems in systems biology. Method An improved GFA called IGFA was proposed in this paper. Two parts were improved in IGFA. The first one is the rule of random division, which is a reasonable strategy that makes the running time shorter. The other one is the rotation factor, which can improve the accuracy of IGFA. And to apply IGFA to hierarchical clustering, the initialization and the movement operator were modified. Results Two kinds of experiments were used to test IGFA, and IGFA was applied to hierarchical clustering. The global minimum experiment compared IGFA, GFA, GA (genetic algorithm) and SA (simulated annealing). The multi-minima experiment compared IGFA and GFA. The results of the two experiments were compared with each other and proved the efficiency of IGFA. IGFA is better than GFA both in accuracy and running time. For hierarchical clustering, IGFA is used to optimize the smallest distance of gene pairs, and the results were compared with GA and SA, single-linkage clustering, and UPGMA. The efficiency of IGFA is proved. PMID:23173043

  18. An Improved Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.

    2000-01-01

    A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.

  19. An improved algorithm for geocentric to geodetic coordinate conversion

    SciTech Connect

    Toms, R.

    1996-02-01

    The problem of performing transformations from geocentric to geodetic coordinates has received an inordinate amount of attention in the literature. Numerous approximate methods have been published. Almost none of the publications address the issue of efficiency and in most cases there is a paucity of error analysis. Recently there has been a surge of interest in this problem aimed at developing more efficient methods for real time applications such as DIS. Iterative algorithms have been proposed that are not of optimal efficiency, address only one error component and require a small but uncertain number of relatively expensive iterations for convergence. In a recent paper published by the author a new algorithm was proposed for the transformation of geocentric to geodetic coordinates. The new algorithm was tested at the Visual Systems Laboratory at the Institute for Simulation and Training, the University of Central Florida, and found to be 30 percent faster than the best previously published algorithm. In this paper further improvements are made in terms of efficiency. For completeness and to make this paper more readable, it was decided to revise the previous paper and to publish it as a new report. The introduction describes the improvements in more detail.
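
    For context, a textbook fixed-point iteration of the kind these papers aim to beat; the WGS-84 constants are standard, and the scheme is shown only to illustrate the per-point cost that motivates faster methods.

```python
import math

def geocentric_to_geodetic(x, y, z, a=6378137.0, f=1 / 298.257223563):
    """Classic iterative geocentric (ECEF) -> geodetic conversion on the
    WGS-84 ellipsoid; converges in a few iterations for Earth-surface points."""
    e2 = f * (2 - f)                            # first eccentricity squared
    p = math.hypot(x, y)
    lon = math.atan2(y, x)
    lat = math.atan2(z, p * (1 - e2))           # initial latitude guess
    for _ in range(10):
        N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)  # prime vertical radius
        h = p / math.cos(lat) - N               # ellipsoidal height
        lat = math.atan2(z, p * (1 - e2 * N / (N + h)))
    return math.degrees(lat), math.degrees(lon), h

print(geocentric_to_geodetic(4201570.0, 172560.0, 4780835.0))
```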

  20. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction approaches, and to design algorithms for evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
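
    The Voigt profile combines a Gaussian core with Lorentzian wings, which is why it suits both regimes mentioned above; a standard evaluation via the Faddeeva function is sketched below (sigma and gamma would be taken from the per-detector masks described in the record; the values here are placeholders).

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: convolution of a Gaussian (core, width sigma) with a
    Lorentzian (wings, width gamma), via the Faddeeva function wofz."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5, 5, 11)
print(voigt(x, sigma=1.0, gamma=0.5))  # Gaussian-like core, Lorentzian tails
```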

  1. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony memory and the probabilities of their different values, iterating until convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of HS. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This converged optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  2. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony memory and the probabilities of their different values, iterating until convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of HS. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This converged optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  3. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    The Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and the Cross-track Infrared Sounder (CrIS), both on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), to be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results of the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results, through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  4. Improvement of the Gyocenter-Gauge (G-Gauge) algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhi; Qin, Hong

    2011-10-01

    The gyrocenter-gauge (g-gauge) algorithm was improved to simulate rf waves propagating in a three-dimensional sheared magnetic field. The conventional local gyrocenter coordinate system (X, Y, Z, μ, θ, u) is constructed on the local magnetic field. When particles travel in a sheared magnetic field, their coordinates must be transformed between different local coordinate systems. To avoid these transformations, a new geometric approach is developed to construct a global Cartesian gyrocenter coordinate system (X, Y, Z, v_x, v_y, v_z), where (X, Y, Z) is the coordinate of the gyrocenter and (v_x, v_y, v_z) is the velocity of the particle. In the g-gauge theory, the perturbation of the distribution function is obtained from the Lie derivative of the gyrocenter distribution function F along the perturbing vector field G. The evolution of the first-order perturbed distribution contains a term L_τ L_G F = L_{[τ,G]} F, where τ is the Hamiltonian vector field of the unperturbed world-line of particles. It is proved that the vector field [τ, G] may be solved directly from the electromagnetic fields. In the improved algorithm, L_G F is calculated by integrating along the unperturbed world-line. The improved g-gauge algorithm has been successfully applied to study the propagation and evolution of rf waves in a three-dimensional inhomogeneous magnetic field.

  5. Improved metropolis light transport algorithm based on multiple importance sampling

    NASA Astrophysics Data System (ADS)

    He, Huaiqing; Yang, Jiaqian; Liu, Haohan

    2015-12-01

    Metropolis light transport is an unbiased and robust Monte Carlo method that can efficiently reduce noise when rendering realistic graphics for the global illumination problem. The basic Metropolis light transport was improved by combining it with multiple importance sampling, which better addresses the large correlation and high variance between samples produced by the basic method. Experiments showed that the quality of images generated by the improved algorithm is better than that of the basic Metropolis light transport under the same scene settings.
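
    Multiple importance sampling combines sampling strategies with weights such as the classic balance heuristic, sketched below; this is the generic MIS weighting rule, not the paper's full estimator.

```python
def balance_heuristic(n, pdfs, i):
    """MIS balance heuristic: weight for a sample drawn from strategy i,
    given per-strategy sample counts n[j] and the pdf values pdfs[j] of
    every strategy evaluated at that sample."""
    return n[i] * pdfs[i] / sum(nj * pj for nj, pj in zip(n, pdfs))

# Two strategies, one sample each; the sample is far likelier under strategy 0
print(balance_heuristic([1, 1], [0.8, 0.2], 0))  # 0.8
```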

  6. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves the knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its training process.

  7. Recent ATR and fusion algorithm improvements for multiband sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2009-05-01

    An improved automatic target recognition processing string has been developed. The overall processing string consists of pre-processing, subimage adaptive clutter filtering, normalization, detection, data regularization, feature extraction, optimal subset feature selection, feature orthogonalization and classification processing blocks. The objects that are classified by the 3 distinct ATR strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution three-frequency band sonar imagery. The ATR processing strings were individually tuned to the corresponding three-frequency band data, making use of the new processing improvement, data regularization; this improvement entails computing the input data mean, clipping the data to a multiple of its mean and scaling it, prior to feature extraction and resulted in a 3:1 reduction in false alarms. Two significant fusion algorithm improvements were made. First, a nonlinear exponential Box-Cox expansion (consisting of raising data to a to-be-determined power) feature LLRT fusion algorithm was developed. Second, a repeated application of a subset Box-Cox feature selection / feature orthogonalization / LLRT fusion block was utilized. It was shown that cascaded Box-Cox feature LLRT fusion of the ATR processing strings outperforms baseline "summing" and single-stage Box-Cox feature LLRT algorithms, yielding significant improvements over the best single ATR processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate.

  8. Improving the efficiency of deconvolution algorithms for sound source localization.

    PubMed

    Lylloff, Oliver; Fernández-Grande, Efrén; Agerkvist, Finn; Hald, Jørgen; Roig, Elisabet Tiana; Andersen, Martin S

    2015-07-01

    The localization of sound sources with delay-and-sum (DAS) beamforming is limited by a poor spatial resolution, particularly at low frequencies. Various methods based on deconvolution are examined to improve the resolution of the beamforming map, which can be modeled by a convolution of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., the point-spread function. A significant limitation of deconvolution is, however, the additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms are examined with computer simulations and experimental data. Specifically, the deconvolution problem is solved with a fast gradient projection method called the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), and compared with a Fourier-based non-negative least squares algorithm. The results indicate that FISTA tends to provide an improved spatial resolution and is up to 30% faster and more robust to noise. In the spirit of reproducible research, the source code is available online. PMID:26233017
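
    A minimal sketch of FISTA specialized to a non-negative l1-regularized deconvolution of the kind described above; the PSF matrix A, the regularization weight, and the iteration count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fista(A, b, lam=0.1, iters=100):
    """FISTA for min ||A x - b||^2 + lam * ||x||_1 subject to x >= 0,
    e.g. deconvolving a beamforming map b by the PSF encoded in A."""
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant
    for _ in range(iters):
        g = z - A.T @ (A @ z - b) / L             # gradient step at z
        x_new = np.maximum(g - lam / L, 0.0)      # soft threshold + x >= 0
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

    The momentum extrapolation is what distinguishes FISTA from plain iterative shrinkage and gives its faster convergence rate.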

  9. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment based on video-image analysis of missile placement is a new study area. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video imagery. Its restrictions are mainly reflected in two aspects: the first is to restrict the extraction area of feature points; the second is to restrict the number of feature points. The process of missile placement analysis based on video images was designed, and video splicing and random-sample-consensus purification were implemented. The RSURF algorithm is shown to have good real-time performance while preserving matching accuracy.
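
    The two restrictions are easy to illustrate with standard tooling. The sketch below uses OpenCV's ORB as a freely available stand-in for SURF (the paper's RSURF internals are not reproduced, and the input file name is hypothetical): a mask limits the extraction area, and nfeatures caps the number of feature points:

        import cv2
        import numpy as np

        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

        mask = np.zeros(img.shape, dtype=np.uint8)
        mask[100:400, 150:500] = 255              # restriction 1: extraction area

        detector = cv2.ORB_create(nfeatures=200)  # restriction 2: feature count
        kp, des = detector.detectAndCompute(img, mask)
        print(len(kp), "keypoints extracted inside the region of interest")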

  10. An improved piecewise linear chaotic map based image encryption algorithm.

    PubMed

    Hu, Yuping; Zhu, Congxu; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the chaotic system's sensitivity to initial key values and system parameters, and to its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order; instead, processing alternates between the beginning and the end of the image. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack.
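
    For context, a minimal sketch of the standard piecewise linear chaotic map and of a permute-then-diffuse step is given below (the paper's modified map, alternating pixel order, and cipher-feedback chaining are not reproduced; the parameters x0 and p are arbitrary illustrative keys):

        import numpy as np

        def pwlcm(x, p):
            # One iteration of the standard piecewise linear chaotic map.
            if x >= 0.5:
                x = 1.0 - x              # symmetric upper half
            if x < p:
                return x / p
            return (x - p) / (0.5 - p)

        def keystream(n, x0, p=0.29):
            xs = np.empty(n)
            x = x0
            for i in range(n):
                x = pwlcm(x, p)
                xs[i] = x
            return xs

        n = 16
        perm = np.argsort(keystream(n, x0=0.41))               # permutation order
        diff = (keystream(n, x0=0.73) * 255).astype(np.uint8)  # diffusion bytes
        plain = np.arange(n, dtype=np.uint8)                   # stand-in pixels
        cipher = plain[perm] ^ diff                            # permute, diffuse
        print(cipher)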

  11. IMPROVED ALGORITHMS FOR RADAR-BASED RECONSTRUCTION OF ASTEROID SHAPES

    SciTech Connect

    Greenberg, Adam H.; Margot, Jean-Luc

    2015-10-15

    We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.

  12. Improved Algorithms for Radar-based Reconstruction of Asteroid Shapes

    NASA Astrophysics Data System (ADS)

    Greenberg, Adam H.; Margot, Jean-Luc

    2015-10-01

    We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.

  13. A Method of Solving Scheduling Problems Using Improved Guided Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ou, Gyouhi; Tamura, Hiroki; Tanno, Koichi; Tang, Zheng

    In this paper, an improved guided genetic algorithm is proposed for the job-shop scheduling problem. The proposed method augments the genetic algorithm with multipliers that can be adjusted during the search process. Simulation results on several benchmark problems show that the proposed method can find better solutions than both the genetic algorithm and the original guided genetic algorithm.

  14. Improved algorithm for data conversion from raster to vector

    NASA Astrophysics Data System (ADS)

    Teng, Junhua; Wang, Fahui

    2007-06-01

    Transforming a Remote Sensing (RS) classification result from the raster to the vector format (R2V) is a common task in Geographic Information Systems (GIS) and RS image processing. R2V acts as a bridge for GIS and RS data integration and is an important module in many commercial software packages such as ENVI and ArcGIS. Considering the inconvenience and inefficiency of current R2V algorithms, there is still room for improvement. In this paper several techniques are introduced to improve R2V, including dynamic sub-image separation, fast edge tracing, segment combination and partial topology construction. A new method of two-arm chain edge tracing is introduced. The improved algorithm has several advantages: it can transform all types of RS classification in a single pass and build a complete topology; the shared edge between two polygons is recorded only once, and diagonal pixels with the same attribute are connected automatically; it is scalable when processing large images, runs fast, and enjoys a significant advantage on large RS images; and the vectorized map is convenient to edit and modify because of its complete topology information. Preliminary case-study results show advantages over ENVI and ArcGIS.

  15. Improvement of Passive Microwave Rainfall Retrieval Algorithm over Mountainous Terrain

    NASA Astrophysics Data System (ADS)

    Shige, S.; Yamamoto, M.

    2015-12-01

    Microwave radiometer (MWR) algorithms underestimate heavy rainfall associated with shallow orographic rainfall systems owing to weak ice-scattering signatures. Underestimation by the Global Satellite Mapping of Precipitation (GSMaP) MWR has been mitigated by an orographic/nonorographic rainfall classification scheme (Shige et al. 2013, 2015; Taniguchi et al. 2013; Yamamoto and Shige 2015). The scheme is developed on the basis of orographically forced upward vertical motion and the convergence of surface moisture flux estimated from ancillary data. Lookup tables derived from orographic precipitation profiles are used to estimate rainfall for orographic rainfall pixels, whereas those derived from original precipitation profiles are used for nonorographic rainfall pixels. The scheme has been adopted in the current version of the GSMaP products, which are available in near real time (about 4 h after observation) via the Internet (http://sharaku.eorc.jaxa.jp/GSMaP/index.htm). The current version of the GSMaP MWR algorithm with this classification scheme improves rainfall estimation over the entire tropical region, but there is still room for improvement. In this talk, further improvements to orographic rainfall retrievals will be shown.

  16. An algorithm to improve speech recognition in noise for hearing-impaired listeners

    PubMed Central

    Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang

    2013-01-01

    Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
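
    For context, the ideal binary mask that the authors contrast with can be sketched in a few lines (this illustrative version requires the premixed speech and noise, which is exactly the prior knowledge the paper's trained mask estimator avoids; the local criterion lc_db is an assumed value):

        import numpy as np
        from scipy.signal import stft, istft

        def ideal_binary_mask(speech, noise, fs=16000, lc_db=-6.0):
            # Keep time-frequency units whose local SNR exceeds the criterion.
            _, _, S = stft(speech, fs=fs, nperseg=512)
            _, _, N = stft(noise, fs=fs, nperseg=512)
            snr = 20 * np.log10(np.abs(S) / (np.abs(N) + 1e-12) + 1e-12)
            mask = (snr > lc_db).astype(float)
            _, _, M = stft(speech + noise, fs=fs, nperseg=512)
            _, out = istft(M * mask, fs=fs, nperseg=512)
            return out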

  17. Improved total variation algorithms for wavelet-based denoising

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2007-04-01

    Many improvements of wavelet-based restoration techniques suggest the use of the total variation (TV) algorithm. The concept of combining wavelet and total variation methods seems effective, but the reasons for the success of this combination have so far been poorly understood. We propose a variation of the total variation method that is designed to avoid artifacts such as oil-painting effects and is better suited than standard TV techniques for implementation with wavelet-based estimates. We then illustrate the effectiveness of this new TV-based method using some of the latest wavelet transforms, such as contourlets and shearlets.
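
    A generic wavelet-then-TV combination of the kind discussed above can be sketched with off-the-shelf tools (this is not the authors' modified TV method, nor their contourlet/shearlet transforms):

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_wavelet, denoise_tv_chambolle

        noisy = img_as_float(data.camera()) + 0.1 * np.random.randn(512, 512)
        wav = denoise_wavelet(noisy)                   # wavelet-based estimate
        out = denoise_tv_chambolle(wav, weight=0.05)   # TV regularization on top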

  18. Improving CMD Areal Density Analysis: Algorithms and Strategies

    NASA Astrophysics Data System (ADS)

    Wilson, R. E.

    2014-06-01

    Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single-star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A is reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.

  19. An improved algorithm of fiber tractography demonstrates postischemic cerebral reorganization

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-dong; Lu, Jie; Yao, Li; Li, Kun-cheng; Zhao, Xiao-jie

    2008-03-01

    In vivo white matter tractography by diffusion tensor imaging (DTI) accurately represents the organizational architecture of white matter in the vicinity of brain lesions, especially in the ischemic brain. In this study, we propose an improved fiber tracking algorithm based on TEND, called TENDAS (tensor deflection with adaptive stepping), which introduces a stepping framework for interpreting algorithm behavior as a function of tensor shape (linear-shaped or not) and tract history. The propagation direction at each step is given by the deflection vector. TENDAS tractography, combined with fMRI, was used to examine a 17-year-old recovering patient with congenital right-hemisphere artery stenosis. A meaningless-picture location task was used as the spatial working memory task in this study. We detected functional localization shifted to the contralateral homotypic cortex, with more prominent and extensive left-sided parietal and medial frontal cortical activations, which were used directly as the seed mask for tractography for the reconstruction of individual parietal spatial pathways. Compared with the TEND algorithm, TENDAS shows a smoother and less sharply bending characterization of the white matter architecture of the parietal cortex. The results of this preliminary study were twofold. First, TENDAS may provide more adaptability and accuracy in reconstructing certain anatomical features, although it is very difficult to verify tractography maps of white matter connectivity in the living human brain. Second, our study indicates that the combination of TENDAS and fMRI provides a unique image of functional cortical reorganization and structural modifications of postischemic spatial working memory.

  20. Improvements on EMG-based handwriting recognition with DTW algorithm.

    PubMed

    Li, Chengzhang; Ma, Zheren; Yao, Lin; Zhang, Dingguo

    2013-01-01

    Previous works have shown that the Dynamic Time Warping (DTW) algorithm is a proper method of feature extraction for electromyography (EMG)-based handwriting recognition. In this paper, several modifications are proposed to improve the classification process and enhance recognition accuracy. A two-phase template-making approach is introduced to generate templates with more salient features, and a modified Mahalanobis Distance (mMD) approach is used to replace Euclidean Distance (ED) in order to minimize the interclass variance. To validate the effectiveness of these modifications, experiments were conducted in which four subjects wrote lowercase letters at a normal speed and four-channel EMG signals from the forearms were recorded. Results of offline analysis show that the improvements increased the average recognition accuracy by 9.20%.
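
    For reference, the classic DTW recurrence underlying the approach is sketched below (a generic version with an absolute-difference local cost; the paper's modified Mahalanobis distance would replace that cost term):

        import numpy as np

        def dtw_distance(a, b):
            # Classic dynamic time warping between two 1-D feature sequences.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1],
                                         D[i - 1, j - 1])
            return D[n, m]

        print(dtw_distance(np.sin(np.linspace(0, 3, 50)),
                           np.sin(np.linspace(0, 3, 60))))  # small: same shape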

  1. Multi-expert tracking algorithm based on improved compressive tracker

    NASA Astrophysics Data System (ADS)

    Feng, Yachun; Zhang, Hong; Yuan, Ding

    2015-12-01

    Object tracking is a challenging task in computer vision. Most state-of-the-art methods maintain an object model and update it with new examples obtained from incoming frames in order to deal with variation in appearance. Updating the object model frame-by-frame without any censorship mechanism inevitably introduces model drift. In this paper, we adopt a multi-expert tracking framework that is able to correct the effect of bad updates after they happen, such as those caused by severe occlusion; the proposed framework thus has exactly the ability that a robust tracking method should possess. The expert ensemble is constructed from a base tracker and its former snapshots. The tracking result is produced by the current tracker, which is selected by means of a simple loss function. We adopt an improved compressive tracker as the base tracker in our work and modify it to fit the multi-expert framework. The proposed multi-expert tracking algorithm significantly improves the robustness of the base tracker, especially in scenes with frequent occlusions and illumination variations. Experiments on challenging video sequences, with comparisons to several state-of-the-art trackers, demonstrate the effectiveness of our method, and our tracking algorithm runs in real time.

  2. Improved Bat algorithm for the detection of myocardial infarction.

    PubMed

    Kora, Padmavathi; Kalva, Sri Ramakrishna

    2015-01-01

    Medical practitioners study the electrical activity of the human heart in order to detect heart diseases from the electrocardiogram (ECG) of heart patients. A myocardial infarction (MI), or heart attack, is a heart disease that occurs when there is a block (blood clot) in the pathway of one or more coronary blood vessels (arteries) that supply blood to the heart muscle. Abnormalities in the heart can be identified by changes in the ECG signal. The first step in the detection of MI is preprocessing of the ECG, which removes noise by using filters. Feature extraction is the next key process for detecting changes in the ECG signals. This paper presents a method for extracting key features from each cardiac beat using the Improved Bat algorithm. The best features are extracted using this algorithm, and this reduced feature set is then applied to the input of a neural network classifier. It has been observed that the performance of the classifier improves with the help of the optimized features. PMID:26558169

  3. Improvement of Service Searching Algorithm in the JVO Portal Site

    NASA Astrophysics Data System (ADS)

    Eguchi, S.; Shirasak, Y.; Komiya, Y.; Ohishi, M.; Mizumoto, Y.; Ishihara, Y.; Tsutsumi, J.; Hiyama, T.; Nakamoto, H.; Sakamoto, M.

    2012-09-01

    The Virtual Observatory (VO) consists of a huge number of astronomical databases which contain both theoretical and observational data obtained with various methods, telescopes, and instruments. Since the VO provides raw and processed observational data, astronomers can concentrate on their scientific interests without awareness of instruments; all they have to know is which service provides the data of interest. On the other hand, services on the VO system would be better used if queries could be made by telescope, wavelength, and object type; currently it is difficult for newcomers to find the desired ones. We have recently started a project to improve the data service functionality and usability of the Japanese VO (JVO) portal site. We are now working on implementing a function to automatically classify all services on the VO in terms of telescopes and instruments without referring to the facility and instrument keywords, which are often left unfilled. In this paper, we report a new algorithm for constructing the facility and instrument keywords from other information in a service, and discuss its effectiveness. We also propose a new user interface for the portal site based on this algorithm.

  4. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  5. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-01

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  6. An Efficient and Configurable Preprocessing Algorithm to Improve Stability Analysis.

    PubMed

    Sesia, Ilaria; Cantoni, Elena; Cernigliaro, Alice; Signorile, Giovanna; Fantino, Gianluca; Tavella, Patrizia

    2016-04-01

    The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method is properly detecting and removing anomalous behaviors, hence making the subsequent stability analysis more reliable. PMID:26540679

  7. An Efficient and Configurable Preprocessing Algorithm to Improve Stability Analysis.

    PubMed

    Sesia, Ilaria; Cantoni, Elena; Cernigliaro, Alice; Signorile, Giovanna; Fantino, Gianluca; Tavella, Patrizia

    2016-04-01

    The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method is properly detecting and removing anomalous behaviors, hence making the subsequent stability analysis more reliable.
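
    For context, the overlapping Allan variance that such preprocessing protects is a textbook computation, and outlier rejection plus gap filling of the kind described can be sketched as follows (a minimal illustration, not the authors' MATLAB implementation; the outlier threshold k is an assumed parameter):

        import numpy as np

        def overlapping_avar(x, tau0, m):
            # Overlapping Allan variance at averaging time m*tau0,
            # from phase data x sampled every tau0 seconds.
            x = np.asarray(x, dtype=float)
            N = x.size
            d2 = x[2 * m:] - 2 * x[m:N - m] + x[:N - 2 * m]  # 2nd differences
            return np.sum(d2 ** 2) / (2 * (N - 2 * m) * (m * tau0) ** 2)

        def preprocess(x, k=5.0):
            # Reject gross outliers (median/MAD rule) and fill gaps by
            # interpolation before the stability analysis.
            x = np.asarray(x, dtype=float)
            med = np.nanmedian(x)
            mad = np.nanmedian(np.abs(x - med)) + 1e-15
            x = np.where(np.abs(x - med) > k * mad, np.nan, x)
            idx = np.arange(x.size)
            good = ~np.isnan(x)
            return np.interp(idx, idx[good], x[good])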

  8. Improving the performance of algorithms to find communities in networks.

    PubMed

    Darst, Richard K; Nussinov, Zohar; Fortunato, Santo

    2014-03-01

    Most algorithms to detect communities in networks typically work without any information on the cluster structure to be found, as one has no a priori knowledge of it, in general. Not surprisingly, knowing some features of the unknown partition could help its identification, yielding an improvement of the performance of the method. Here we show that, if the number of clusters were known beforehand, standard methods, like modularity optimization, would considerably gain in accuracy, mitigating the severe resolution bias that undermines the reliability of the results of the original unconstrained version. The number of clusters can be inferred from the spectra of the recently introduced nonbacktracking and flow matrices, even in benchmark graphs with realistic community structure. The limit of such a two-step procedure is the overhead of the computation of the spectra.

  9. A morphological algorithm for improving radio-frequency interference detection

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; van de Gronde, J. J.; Roerdink, J. B. T. M.

    2012-03-01

    A technique is described that is used to improve the detection of radio-frequency interference (RFI) in astronomical radio observatories. It is applied to a two-dimensional interference mask after regular detection in the time-frequency domain with existing techniques. The scale-invariant rank (SIR) operator is defined, which is a one-dimensional mathematical morphology technique that can be used to find adjacent intervals in the time or frequency domain that are likely to be affected by RFI. The technique might also be applicable in other areas in which morphological scale-invariant behaviour is desired, such as source detection. A new algorithm is described that is shown to perform well, has linear time complexity, and is fast enough to be applied in modern high-resolution observatories. It is used in the default pipeline of the LOFAR observatory.
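
    A naive reading of the SIR operator's definition can be written directly, flagging every interval whose flagged fraction reaches 1 - eta (this O(N^2) sketch only illustrates the definition; the paper's contribution includes a much faster implementation):

        import numpy as np

        def sir_operator(flags, eta=0.2):
            # Extend the input mask with every interval whose flagged
            # fraction is at least (1 - eta).
            flags = np.asarray(flags, dtype=bool)
            n = flags.size
            out = flags.copy()
            csum = np.concatenate([[0], np.cumsum(flags)])
            for i in range(n):
                for j in range(i + 1, n + 1):
                    if csum[j] - csum[i] >= (1.0 - eta) * (j - i):
                        out[i:j] = True
            return out

        print(sir_operator(np.array([1, 1, 0, 1, 0, 0, 0, 1]),
                           eta=0.3).astype(int))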

  10. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GAs) are optimization techniques based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GAs have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves representing the ocean surface model as a string of parameters or coefficients of the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
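
    The GA mechanics described above translate into a short skeleton (generic and illustrative; the fitness function below is a placeholder stand-in, since the actual fitness measures the match between the ocean surface model and the altimeter data):

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(org):  # hypothetical: closer to a target vector = fitter
            target = np.array([0.3, -1.2, 2.5, 0.0])
            return -np.sum((org - target) ** 2)

        pop = rng.normal(size=(50, 4))  # 50 organisms, 4 model coefficients
        for generation in range(200):
            scores = np.array([fitness(o) for o in pop])
            parents = pop[np.argsort(scores)[-25:]]        # truncation selection
            children = []
            for _ in range(25):
                a, b = parents[rng.integers(25, size=2)]
                cut = rng.integers(1, 4)
                child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
                child += rng.normal(scale=0.05, size=4)     # mutation
                children.append(child)
            pop = np.vstack([parents, children])
        print("best organism:", pop[np.argmax([fitness(o) for o in pop])])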

  11. Improvements and Extensions for Joint Polar Satellite System Algorithms

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Feeley, J. H.; Miller, S. W.; Jamilkowski, M. L.

    2014-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS replaced the afternoon-orbit component and ground processing system of the older POES system managed by NOAA. JPSS satellites will carry sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is the Common Ground System (CGS), which provides command, control, and communications (C3), data processing, and product delivery. CGS's data processing capability processes the data from the JPSS satellites to provide environmental data products (including Sensor Data Records (SDRs) and Environmental Data Records (EDRs)) to the NOAA Satellite Operations Facility. The first satellite in the JPSS constellation, known as the Suomi National Polar-orbiting Partnership (S-NPP) satellite, was launched on 28 October 2011. CGS is currently processing and delivering SDRs and EDRs for S-NPP and will continue through the lifetime of the JPSS program. The EDRs for S-NPP are currently undergoing an extensive Calibration and Validation (Cal/Val) campaign. Changes identified by the Cal/Val campaign are becoming available for implementation into the operational system in support of both S-NPP and JPSS-1 (scheduled for launch in 2017). Some of these changes will be available in time to update the S-NPP algorithm baseline, while others will become operational just prior to JPSS-1 launch. In addition, new capabilities, such as higher spectral and spatial resolution, will be exercised on JPSS-1. This paper will describe changes to current algorithms and products as a result of the Cal/Val campaign and related initiatives for improved capabilities. Improvements include Cross Track Infrared Sounder high spectral

  12. An Improved Wind Speed Retrieval Algorithm For The CYGNSS Mission

    NASA Astrophysics Data System (ADS)

    Ruf, C. S.; Clarizia, M. P.

    2015-12-01

    The NASA spaceborne Cyclone Global Navigation Satellite System (CYGNSS) mission is a constellation of 8 microsatellites focused on tropical cyclone (TC) inner-core process studies. CYGNSS will be launched in October 2016 and will use GPS-Reflectometry (GPS-R) to measure ocean surface wind speed in all precipitating conditions, and with sufficient frequency to resolve genesis and rapid intensification. Here we present a modified and improved version of the current baseline Level 2 (L2) wind speed retrieval algorithm designed for CYGNSS. An overview of the current approach is first presented, which makes use of two different observables computed from 1-second Level 1b (L1b) delay-Doppler Maps (DDMs) of radar cross section. The first observable, the Delay-Doppler Map Average (DDMA), is the averaged radar cross section over a delay-Doppler window around the DDM peak (i.e., the specular reflection point coordinate in delay and Doppler). The second, the Leading Edge Slope (LES), is the leading edge of the Integrated Delay Waveform (IDW), obtained by integrating the DDM along the Doppler dimension. The observables are calculated over a limited range of time delays and Doppler frequencies to comply with the baseline spatial resolution requirement for the retrieved winds, which in the case of CYGNSS is 25 km. In the current approach, the relationship between the observable value and the surface winds is described by an empirical Geophysical Model Function (GMF) that is characterized by a very high slope in the high-wind regime, for both the DDMA and LES observables, causing large retrieval errors at high winds. A simple mathematical modification of these observables is proposed, which linearizes the relationship between ocean surface roughness and the observables. This significantly reduces the non-linearity in the GMF that relates the observables to the wind speed, and reduces the root-mean-square error between true and retrieved winds, particularly in the high wind
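
    A hedged sketch of the two observables is given below, assuming a DDM already calibrated to radar cross section; the window sizes are placeholders, and the operational delay/Doppler ranges are set by the 25 km resolution requirement rather than by fixed pixel counts:

        import numpy as np

        def ddma_les(ddm, delay_win=3, dopp_win=5):
            # Locate the DDM peak (the specular point in delay/Doppler).
            pk_d, pk_f = np.unravel_index(np.argmax(ddm), ddm.shape)
            f0 = max(pk_f - dopp_win // 2, 0)
            f1 = f0 + dopp_win

            # DDMA: average radar cross section over a window at the peak.
            ddma = ddm[pk_d:pk_d + delay_win, f0:f1].mean()

            # LES: slope of the leading edge of the integrated delay
            # waveform, taken here as the steepest rise before the peak
            # (assumes the peak is not in the first delay row).
            idw = ddm[:, f0:f1].sum(axis=1)
            les = np.max(np.diff(idw[:pk_d + 1]))
            return ddma, les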

  13. An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization.

    PubMed

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), adding a Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
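
    Levy-flight steps of the kind added to the queen's mating flight are commonly generated with Mantegna's algorithm, sketched below (the IMBO paper's exact parameterization may differ):

        import numpy as np
        from math import gamma, pi, sin

        def levy_step(beta=1.5, size=1):
            # Mantegna's algorithm for heavy-tailed Levy-distributed steps.
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta *
                      2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = np.random.normal(0, sigma, size)
            v = np.random.normal(0, 1, size)
            return u / np.abs(v) ** (1 / beta)

        # A queen's mating-flight move could then perturb her position:
        queen = np.zeros(6)
        queen = queen + 0.01 * levy_step(size=6)  # heavy-tailed exploration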

  14. A Novel Clinical Decision Support System Using Improved Adaptive Genetic Algorithm for the Assessment of Fetal Well-Being

    PubMed Central

    Jambek, Asral Bahari; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate the exhaustiveness of its search towards the global optimum. Besides, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm. PMID:25793009
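
    Sigma scaling itself is a one-line transformation; one common form (the paper's exact variant is not reproduced here) is:

        import numpy as np

        def sigma_scaled(fitness, c=2.0):
            # Expected selection value 1 + (f - mean) / (c * std), clamped
            # at zero so weak individuals get no negative probability.
            f = np.asarray(fitness, dtype=float)
            s = f.std()
            if s == 0:
                return np.ones_like(f)  # flat population: uniform selection
            return np.maximum(0.0, 1.0 + (f - f.mean()) / (c * s))

        print(sigma_scaled([1.0, 1.1, 1.2, 5.0]))  # outlier no longer dominates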

  15. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm, obtained by adopting fast k-nearest-neighbor (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using the power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, the even grid, to improve our previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308

  16. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm, obtained by adopting fast k-nearest-neighbor (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using the power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, the even grid, to improve our previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
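
    A serial, brute-force sketch of the AIDW idea is given below for orientation (the mapping from neighbor density to the power parameter is a simplified assumption, and the paper's grid-based GPU kNN is replaced by a brute-force search):

        import numpy as np

        def aidw_interpolate(pts, vals, q, k=8, p_range=(1.0, 5.0)):
            # Find the k nearest neighbors of the interpolated point q,
            # derive a power parameter from how clustered they are, then
            # inverse-distance weight.
            d = np.linalg.norm(pts - q, axis=1)
            nn = np.argpartition(d, k)[:k]                  # k nearest neighbors
            density = d[nn].mean()
            t = np.clip(density / d.mean(), 0.0, 1.0)       # sparser -> larger t
            p = p_range[0] + t * (p_range[1] - p_range[0])  # adaptive power
            w = 1.0 / (d[nn] ** p + 1e-12)
            return np.sum(w * vals[nn]) / np.sum(w)

        pts = np.random.rand(1000, 2)
        vals = np.sin(pts[:, 0] * 6) + pts[:, 1]
        print(aidw_interpolate(pts, vals, q=np.array([0.5, 0.5])))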

  17. Overlay improvements using a real time machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank

    2014-04-01

    While semiconductor manufacturing is moving towards the 14nm node using immersion lithography, overlay requirements are tightened to below 5nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously, with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance in time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.

  18. Improvement of unsupervised texture classification based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Togami, Yuuki; Arai, Kohei

    2004-11-01

    At a previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In the method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) determine the number of classification categories; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training-area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) the selection operation employs the elite preservation strategy; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method had not been automated because it requires not only the target image but also the number of classification categories. In this paper, we describe some improvements towards an automated texture classification. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and provide reasonable classification results.

  19. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties of a fiber optic ring network. This method differs from conventional methods in that end-to-end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit-stream cipher method. The transmitting station takes the intended confidential message and applies a simple modulo-two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support real-time communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
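
    The modulo-two (XOR) core of the scheme is tiny; the toy sketch below shows only that idea (key distribution, FDDI framing, and the paper's bit-stream cipher variation are omitted, and a repeating IV of this kind would not be secure for messages longer than the IV):

        import os

        def xor_bytes(data: bytes, iv: bytes) -> bytes:
            # Modulo-two addition of the message against the IV stream.
            return bytes(d ^ iv[i % len(iv)] for i, d in enumerate(data))

        iv = os.urandom(16)               # unique per confidential transmission
        msg = b"confidential frame payload"
        enc = xor_bytes(msg, iv)
        assert xor_bytes(enc, iv) == msg  # receiver recovers with the same IV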

  20. Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching

    NASA Astrophysics Data System (ADS)

    Lipka, Paula Ann

    This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow (ACOPF) solution, and allows for solving transmission switching problems in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem often solve the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. Transmission networks in practice often have lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power, as is the case in the Polish test systems, to limits on current. The main consideration in setting line limits is temperature, which relates linearly to current. Setting limits on real or apparent power is actually a proxy for using limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the ACOPF problem is used to find an AC-feasible generator dispatch. In this sequential linearization, there are parameters that are set to the previous optimal solution. Additionally, to improve the accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted. The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction

  1. Efficient Improvement of Silage Additives by Using Genetic Algorithms

    PubMed Central

    Davies, Zoe S.; Gilbert, Richard J.; Merry, Roger J.; Kell, Douglas B.; Theodorou, Michael K.; Griffith, Gareth W.

    2000-01-01

    The enormous variety of substances which may be added to forage in order to manipulate and improve the ensilage process presents an empirical, combinatorial optimization problem of great complexity. To investigate the utility of genetic algorithms for designing effective silage additive combinations, a series of small-scale proof of principle silage experiments were performed with fresh ryegrass. Having established that significant biochemical changes occur over an ensilage period as short as 2 days, we performed a series of experiments in which we used 50 silage additive combinations (prepared by using eight bacterial and other additives, each of which was added at six different levels, including zero [i.e., no additive]). The decrease in pH, the increase in lactate concentration, and the free amino acid concentration were measured after 2 days and used to calculate a “fitness” value that indicated the quality of the silage (compared to a control silage made without additives). This analysis also included a “cost” element to account for different total additive levels. In the initial experiment additive levels were selected randomly, but subsequently a genetic algorithm program was used to suggest new additive combinations based on the fitness values determined in the preceding experiments. The result was very efficient selection for silages in which large decreases in pH and high levels of lactate occurred along with low levels of free amino acids. During the series of five experiments, each of which comprised 50 treatments, there was a steady increase in the amount of lactate that accumulated; the best treatment combination was that used in the last experiment, which produced 4.6 times more lactate than the untreated silage. The additive combinations that were found to yield the highest fitness values in the final (fifth) experiment were assessed to determine a range of biochemical and microbiological quality parameters during full-term silage

  2. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  3. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  4. Improvements in algorithms for phenotype inference: the NAT2 example.

    PubMed

    Selinski, Silvia; Blaszkewicz, Meinolf; Ickstadt, Katja; Hengstler, Jan G; Golka, Klaus

    2014-02-01

    Numerous studies have analyzed the impact of N-acetyltransferase 2 (NAT2) polymorphisms on drug efficacy, side effects and cancer risk. Here, we present the state of the art of deriving haplotypes from polymorphisms and discuss the available software. PHASE v2.1 is currently considered a gold standard for NAT2 haplotype assignment. In vitro studies have shown that some slow acetylation genotypes confer reduced protein stability. This has been observed particularly for G191A, T341C and G590A. Substantial ethnic variations of the acetylation status have been described. Probably, the advent of agriculture and the resulting change in diet created a selection pressure for slow acetylation. In recent years much research has been done to reduce the complexity of NAT2 genotyping. Deriving the haplotype from seven SNPs is still considered a gold standard. However, several studies have meanwhile shown that a two-SNP combination, C282T and T341C, results in a similarly good distinction in Caucasians. However, attempts to further reduce complexity to only one 'tagging SNP' (rs1495741) may lead to wrong predictions, where phenotypically slow acetylators are genotyped as intermediate or rapid. Numerous studies have shown that slow NAT2 haplotypes are associated with increased urinary bladder cancer risk and increased risk of anti-tuberculosis drug-induced hepatotoxicity. A drawback of the current practice of solely discriminating slow, intermediate and rapid genotypes for phenotype inference is the limited resolution of differences among slow acetylators. Future developments to differentiate between slow and ultra-slow genotypes may further improve individualized drug dosing and epidemiological studies of cancer risk.

  5. Some Improvements on Signed Window Algorithms for Scalar Multiplications in Elliptic Curve Cryptosystems

    NASA Technical Reports Server (NTRS)

    Vo, San C.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Scalar multiplication is an essential operation in elliptic curve cryptosystems because its implementation determines the speed and the memory storage requirements. This paper discusses some improvements on two popular signed window algorithms for implementing scalar multiplication of an elliptic curve point - the Morain-Olivos algorithm and the Koyama-Tsuruoka algorithm.
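
    For orientation, the simplest signed-digit method, non-adjacent form (NAF) recoding with double-and-add, is sketched below; the Morain-Olivos and Koyama-Tsuruoka recodings discussed in the paper are more elaborate signed window variants. The group operations are abstracted, with plain integers standing in for curve points in the toy check:

        def naf(k):
            # Non-adjacent form: signed digits in {-1, 0, 1}, no two
            # adjacent nonzeros; returned most significant digit first.
            digits = []
            while k > 0:
                if k & 1:
                    z = 2 - (k % 4)  # +1 or -1, forcing the next digit to 0
                    k -= z
                else:
                    z = 0
                digits.append(z)
                k >>= 1
            return digits[::-1]

        def scalar_mult(k, P, add, neg, double):
            # Left-to-right double-and-add over the NAF digits; add, neg
            # and double are the (abstracted) group operations.
            R = None  # identity ("point at infinity")
            for d in naf(k):
                R = double(R) if R is not None else None
                if d == 1:
                    R = add(R, P) if R is not None else P
                elif d == -1:
                    R = add(R, neg(P)) if R is not None else neg(P)
            return R

        # Toy check using integer addition as the "group":
        print(scalar_mult(29, 1, add=lambda a, b: a + b,
                          neg=lambda a: -a, double=lambda a: 2 * a))  # -> 29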

  6. Comparison and improvement of algorithms for computing minimal cut sets

    PubMed Central

    2013-01-01

    Background Constrained minimal cut sets (cMCSs) have recently been introduced as a framework to enumerate minimal genetic intervention strategies for targeted optimization of metabolic networks. Two different algorithmic schemes (adapted Berge algorithm and binary integer programming) have been proposed to compute cMCSs from elementary modes. However, in their original formulation both algorithms are not fully comparable. Results Here we show that by a small extension to the integer program both methods become equivalent. Furthermore, based on well-known preprocessing procedures for integer programming we present efficient preprocessing steps which can be used for both algorithms. We then benchmark the numerical performance of the algorithms in several realistic medium-scale metabolic models. The benchmark calculations reveal (i) that these preprocessing steps can lead to an enormous speed-up under both algorithms, and (ii) that the adapted Berge algorithm outperforms the binary integer approach. Conclusions Generally, both of our new implementations are by at least one order of magnitude faster than other currently available implementations. PMID:24191903
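
    The Berge-style scheme the paper adapts can be sketched as an iterative minimal-hitting-set computation (illustrative only; it omits the preprocessing speed-ups that the paper benchmarks):

        def minimal_hitting_sets(sets):
            # Incrementally maintain all minimal hitting sets as each new
            # set is processed (Berge-style enumeration).
            hs = {frozenset()}
            for S in map(frozenset, sets):
                nxt = set()
                for h in hs:
                    if h & S:                              # h already hits S
                        nxt.add(h)
                    else:
                        nxt.update(h | {e} for e in S)     # extend h into S
                hs = {h for h in nxt
                      if not any(o < h for o in nxt)}      # keep minimal only
            return hs

        # Each minimal hitting set of the "target modes" is a cut set candidate.
        print(minimal_hitting_sets([{1, 2}, {2, 3}, {1, 3}]))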

  7. Improvement of phase unwrapping algorithm based on image segmentation and merging

    NASA Astrophysics Data System (ADS)

    Wang, Huaying; Liu, Feifei; Zhu, Qiaofen

    2013-11-01

    A modified algorithm based on image segmentation and merging is proposed and demonstrated to improve the accuracy of phase unwrapping. There are three improvements. First, unequal region segmentation is adopted, which allows the regional information to be reproduced completely and accurately. Second, different phase unwrapping algorithms are used for regions with different noise and undersampling conditions. Last, to improve the accuracy of the phase unwrapping results, a weighted-stacking method is applied to the overlapping regions that arise from merging blocks. The proposed algorithm has been verified by simulations and experiments. The results not only validate the accuracy and speed with which the improved algorithm recovers the phase information of the measured object, but also illustrate its importance for cell identification in Traditional Chinese Medicine Decoction Pieces.

  8. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation search ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the comparison demonstrates that the method is energy-efficient and superior. PMID:26880874

  9. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation search ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the comparison demonstrates that the method is energy-efficient and superior. PMID:26880874

  10. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation search ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the comparison demonstrates that the method is energy-efficient and superior.

  11. A biomimetic algorithm for the improved detection of microarray features

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.

    2007-02-01

    One of the major difficulties of microarray technology relates to the processing of large and, importantly, error-laden images of the dots on the chip surface. Whatever the source of these errors, those introduced in the first stage of data acquisition - segmentation - are passed down to all subsequent processes, with deleterious results. As it has recently been demonstrated that biological systems have evolved mathematically efficient algorithms, this contribution tests an algorithm that mimics the bacterial "patented" strategy for searching available space and nutrients in order to find, zero in on, and eventually delimit the features present on the microarray surface.

  12. Establishing Substantial Equivalence: Transcriptomics

    NASA Astrophysics Data System (ADS)

    Baudo, María Marcela; Powers, Stephen J.; Mitchell, Rowan A. C.; Shewry, Peter R.

    Regulatory authorities in Western Europe require transgenic crops to be substantially equivalent to conventionally bred forms if they are to be approved for commercial production. One way to establish substantial equivalence is to compare the transcript profiles of developing grain and other tissues of transgenic and conventionally bred lines, in order to identify any unintended effects of the transformation process. We present detailed protocols for transcriptomic comparisons of developing wheat grain and leaf material, and illustrate their use by reference to our own studies of lines transformed to express additional gluten protein genes controlled by their own endosperm-specific promoters. The results show that the transgenes present in these lines (which included those encoding marker genes) did not have any significant unpredicted effects on the expression of endogenous genes and that the transgenic plants were therefore substantially equivalent to the corresponding parental lines.

  13. An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing

    NASA Astrophysics Data System (ADS)

    Zhao, Yunji; Pei, Hailong

    In the vision-based autonomous landing system of a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm addresses the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm consists of three steps: first, detect the target region using CamShift; second, detect feature points within the acquired region using SURF; third, match the template target against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.
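
    A minimal sketch of this three-step pipeline in OpenCV is shown below, assuming `opencv-contrib-python` built with the non-free modules (SURF is patented and absent from default builds). The template file name, Hessian threshold, and ratio-test constant are illustrative, not taken from the paper.

    ```python
    import cv2

    # Step-1 setup: CamShift tracks via a colour histogram of the template.
    template = cv2.imread("target_template.png")          # hypothetical file
    hsv_t = cv2.cvtColor(template, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_t], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    surf = cv2.xfeatures2d.SURF_create(400)              # needs contrib + nonfree
    kp_t, des_t = surf.detectAndCompute(template, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    track_win = (0, 0, template.shape[1], template.shape[0])
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    def process_frame(frame):
        global track_win
        # Step 1: coarse target localization with CamShift on the back-projection.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, track_win = cv2.CamShift(back, track_win, crit)
        x, y, w, h = track_win
        roi = frame[y:y + h, x:x + w]
        # Step 2: SURF features only inside the tracked region.
        kp_r, des_r = surf.detectAndCompute(roi, None)
        if des_r is None:
            return []
        # Step 3: ratio-test matching between template and region.
        pairs = matcher.knnMatch(des_t, des_r, k=2)
        return [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    ```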

  14. Improved algorithm for quantum separability and entanglement detection

    SciTech Connect

    Ioannou, L.M.; Ekert, A.K.; Travaglione, B.C.; Cheung, D.

    2004-12-01

    Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. It has recently been shown that this problem is NP-hard, suggesting that an efficient, general solution does not exist. There is a highly inefficient 'basic algorithm' for solving the quantum separability problem which follows from the definition of a separable state. By exploiting specific properties of the set of separable states, we introduce a classical algorithm that solves the problem significantly faster than the 'basic algorithm', allowing a feasible separability test where none previously existed, e.g., in 3x3-dimensional systems. Our algorithm also provides a unique tool in the experimental detection of entanglement.

  15. Improved Clonal Selection Algorithm Combined with Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Gao, Shangce; Wang, Wei; Dai, Hongwei; Li, Fangjia; Tang, Zheng

    Both the clonal selection algorithm (CSA) and ant colony optimization (ACO) are inspired by natural phenomena and are effective tools for solving complex problems. CSA can exploit and explore the solution space in parallel and effectively. However, it cannot use enough environmental feedback information and thus performs many redundant searches. ACO, on the other hand, is based on indirect cooperative foraging via secreted pheromones. Its positive feedback is effective, but its convergence is slow because initial pheromone levels are low. In this paper, we propose a pheromone-linker to combine these two algorithms. The proposed hybrid clonal selection and ant colony optimization (CSA-ACO) exploits the strengths of both algorithms and overcomes their inherent disadvantages. Simulation results on traveling salesman problems demonstrate the merit of the proposed algorithm over some traditional techniques.

  16. An Improved Recovery Algorithm for Decayed AES Key Schedule Images

    NASA Astrophysics Data System (ADS)

    Tsow, Alex

    A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [1] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.
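
    Recovery is possible because an AES key schedule is highly redundant: every expanded word is determined by two earlier words, so decayed bits violate easily checkable constraints. The sketch below shows standard AES-128 key expansion and a per-word consistency test of the kind such a search minimizes; it is textbook AES, not Tsow's recovery code.

    ```python
    def gf_mult(a, b):                       # multiply in GF(2^8), poly 0x11b
        p = 0
        while b:
            if b & 1:
                p ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11b
            b >>= 1
        return p

    def sbox_byte(a):
        # Multiplicative inverse in GF(2^8), then the AES affine map.
        inv = next((c for c in range(256) if gf_mult(a, c) == 1), 0)
        s = inv
        for _ in range(4):                   # s = inv ^ 4 rotations ^ 0x63
            inv = ((inv << 1) | (inv >> 7)) & 0xFF
            s ^= inv
        return s ^ 0x63

    SBOX = [sbox_byte(i) for i in range(256)]
    RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

    def expand_key_128(key):                 # 16-byte key -> 44 four-byte words
        w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
        for i in range(4, 44):
            t = list(w[i - 1])
            if i % 4 == 0:
                t = [SBOX[b] for b in t[1:] + t[:1]]      # RotWord + SubWord
                t[0] ^= RCON[i // 4 - 1]
            w.append([a ^ b for a, b in zip(w[i - 4], t)])
        return w

    def word_consistent(w, i):
        """Check one expansion constraint; a recovery search flips candidate
        bits in a decayed schedule to minimize the number of violations."""
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]
            t[0] ^= RCON[i // 4 - 1]
        return w[i] == [a ^ b for a, b in zip(w[i - 4], t)]
    ```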

  17. An Improved DINEOF Algorithm for Filling Missing Values in Spatio-Temporal Sea Surface Temperature Data

    PubMed Central

    Ping, Bo; Su, Fenzhen; Meng, Yunshan

    2016-01-01

    In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determining missing values in a spatio-temporal dataset is presented. Unlike the ordinary DINEOF algorithm, the improved algorithm does not need to iterate the reconstruction to convergence for every fixed number of EOF modes in order to select the optimal mode; the convergence criterion is reached only once. Moreover, in the ordinary DINEOF algorithm, once the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed with that fixed mode until convergence, yet the chosen mode may not be the best EOF for some of the matrices generated in intermediate steps. Hence, instead of using a single EOF mode to fill in the missing data, the improved algorithm allows the optimal EOFs for reconstruction to vary (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm significantly enhances the accuracy of reconstruction and shortens the computational time. PMID:27195692
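
    The step all DINEOF variants share is the iterated truncated-SVD reconstruction of the gappy matrix. The sketch below shows that core with a fixed number of modes `k`; the cross-validated mode selection, and the VE variant's re-selection of modes between iterations, are omitted, and the data are assumed to be anomalies (mean removed beforehand).

    ```python
    import numpy as np

    def eof_fill(data, mask, k, iters=50, tol=1e-6):
        """Fill missing entries (mask == True) by iterating a rank-k SVD
        reconstruction -- the core step shared by DINEOF-type algorithms."""
        x = np.where(mask, 0.0, data)          # initialize gaps with zeros
        prev = np.inf
        for _ in range(iters):
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            recon = (u[:, :k] * s[:k]) @ vt[:k]
            change = np.sqrt(np.mean((recon[mask] - x[mask]) ** 2))
            x[mask] = recon[mask]              # update only the gaps
            if abs(prev - change) < tol:
                break
            prev = change
        return x
    ```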

  18. An Improved Inertial Frame Alignment Algorithm Based on Horizontal Alignment Information for Marine SINS.

    PubMed

    Che, Yanting; Wang, Qiuying; Gao, Wei; Yu, Fei

    2015-01-01

    In this paper, an improved inertial frame alignment algorithm for a marine SINS under mooring conditions is proposed, which significantly improves accuracy. Since the horizontal alignment is easy to complete, and a characteristic of gravity is that its component in the horizontal plane is zero, we use a clever method to improve the conventional inertial alignment algorithm. Firstly, a large misalignment angle model and a dimensionality reduction Gauss-Hermite filter are employed to establish the fine horizontal reference frame. Based on this, the projection of the gravity in the body inertial coordinate frame can be calculated easily. Then, the initial alignment algorithm is accomplished through an inertial frame alignment algorithm. The simulation and experiment results show that the improved initial alignment algorithm performs better than the conventional inertial alignment algorithm, and meets the accuracy requirements of a medium-accuracy marine SINS.

  20. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternately, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials equally matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the

  1. Establishing Substantial Equivalence: Metabolomics

    NASA Astrophysics Data System (ADS)

    Beale, Michael H.; Ward, Jane L.; Baker, John M.

    Modern ‘metabolomic’ methods allow us to compare levels of many structurally diverse compounds in an automated fashion across a large number of samples. This technology is ideally suited to screening of populations of plants, including trials where the aim is the determination of unintended effects introduced by GM. A number of metabolomic methods have been devised for the determination of substantial equivalence. We have developed a methodology, using [1H]-NMR fingerprinting, for metabolomic screening of plants and have applied it to the study of substantial equivalence of field-grown GM wheat. We describe here the principles and detail of that protocol as applied to the analysis of flour generated from field plots of wheat. Particular emphasis is given to the downstream data processing and comparison of spectra by multivariate analysis, from which conclusions regarding metabolome changes due to the GM can be assessed against the background of natural variation due to environment.

  2. A new improved artificial bee colony algorithm for ship hull form optimization

    NASA Astrophysics Data System (ADS)

    Huang, Fuxin; Wang, Lijue; Yang, Chi

    2016-04-01

    The artificial bee colony (ABC) algorithm is a relatively new swarm intelligence-based optimization algorithm. Its simplicity of implementation, relatively few parameter settings and promising optimization capability make it widely used in different fields. However, it has problems of slow convergence due to its solution search equation. Here, a new solution search equation based on a combination of the elite solution pool and the block perturbation scheme is proposed to improve the performance of the algorithm. In addition, two different solution search equations are used by employed bees and onlooker bees to balance the exploration and exploitation of the algorithm. The developed algorithm is validated by a set of well-known numerical benchmark functions. It is then applied to optimize two ship hull forms with minimum resistance. The tested results show that the proposed new improved ABC algorithm can outperform the ABC algorithm in most of the tested problems.

  3. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

    We demonstrate an efficient algorithm for the temporally and spatially based calculation of speckle contrast for imaging blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and decouples temporal and spatial resolution from SNR. The new algorithm was evaluated for both spatial and temporal analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code. PMID:26782241
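
    Spatial speckle contrast is K = sigma/mu over a small sliding window. One standard way to cut the numerical complexity, in the spirit of the efficiency gains described (though not necessarily the paper's exact method), is to compute the windowed mean and variance with two box filters instead of per-pixel loops:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_speckle_contrast(img, win=7):
        """K = sigma/mu over a win x win sliding window, via two box filters."""
        img = img.astype(np.float64)
        mu = uniform_filter(img, win)
        mu2 = uniform_filter(img * img, win)
        var = np.maximum(mu2 - mu * mu, 0.0)   # clamp tiny negative rounding
        return np.sqrt(var) / np.maximum(mu, 1e-12)
    ```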

  4. An Improved QRS Wave Group Detection Algorithm and Matlab Implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Hongjun

    This paper presents an algorithm using Matlab software to detect the QRS complexes of the MIT-BIH ECG database. First, the noise in the ECG is removed with a Butterworth filter; the signal is then analyzed with the wavelet transform to detect singular points, and more accurate detection of the QRS complexes is thereby achieved.
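
    A sketch of the preprocessing stage is below: a Butterworth band-pass followed by a simple peak picker standing in for the paper's wavelet singularity detector. The 5-15 Hz band and the threshold are common illustrative choices, not values from the paper.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def detect_qrs(ecg, fs):
        """Butterworth band-pass (illustrative QRS band), then peak picking."""
        b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
        filt = filtfilt(b, a, ecg)             # zero-phase filtering
        energy = filt ** 2
        peaks, _ = find_peaks(energy,
                              height=0.3 * energy.max(),
                              distance=int(0.25 * fs))  # refractory period
        return peaks
    ```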

  5. Crossover Improvement for the Genetic Algorithm in Information Retrieval.

    ERIC Educational Resources Information Center

    Vrajitoru, Dana

    1998-01-01

    In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system to find, in a huge documents collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…

  6. Motion Cueing Algorithm Modification for Improved Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Zaychik, Kirill; Kelly, Lon C.; Houck, Jacob

    2009-01-01

    Atmospheric turbulence cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. Cardullo and Ellor initially addressed this problem by directly porting the turbulence model output to the motion system. Reid and Robinson addressed the problem by employing a parallel aircraft model, which is only stimulated by the turbulence inputs and adding a filter specially designed to pass the higher turbulence frequencies. There have been advances in motion cueing algorithm development at the Man-Machine Systems Laboratory, at SUNY Binghamton. In particular, the system used to generate turbulence cues has been studied. The Reid approach, implemented by Telban and Cardullo, was employed to augment the optimal motion cueing algorithm installed at the NASA LaRC Simulation Laboratory, driving the Visual Motion Simulator. In this implementation, the output of the primary flight channel was added to the output of the turbulence channel and then sent through a non-linear cueing filter. The cueing filter is an adaptive filter; therefore, it is not desirable for the output of the turbulence channel to be augmented by this type of filter. The likelihood of the signal becoming divergent was also an issue in this design. After testing on-site it became apparent that the architecture of the turbulence algorithm was generating unacceptable cues. As mentioned above, this cueing algorithm comprised a filter that was designed to operate at low bandwidth. Therefore, the turbulence was also filtered, augmenting the cues generated by the model. If any filtering is to be done to the turbulence, it will utilize a filter with a much higher bandwidth, above the frequencies produced by the aircraft response to turbulence. The authors have developed an implementation wherein only the signal from the primary flight channel passes through the nonlinear cueing filter. This paper discusses three

  7. Improvement and analysis of ID3 algorithm in decision-making tree

    NASA Astrophysics Data System (ADS)

    Xie, Xiao-Lan; Long, Zhen; Liao, Wen-Qi

    2015-12-01

    The cooperative system under development needs spatial analysis and related data mining technology to detect subject conflicts and redundancy, and the ID3 algorithm is an important data mining method. Because the logarithmic part of the traditional ID3 decision-tree algorithm is rather expensive to compute, this paper derives a new computational formula for information gain by optimizing that logarithmic part. Experimental comparison and theoretical analysis show that the IID3 (Improved ID3) algorithm achieves higher computational efficiency and accuracy and is thus worth popularizing.
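
    The quantity being optimized is the standard information gain; the abstract does not give IID3's replacement formula, so the sketch below shows only the baseline computation that the logarithm optimization targets.

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        """Gain(S, A) = H(S) - sum_v (|S_v| / |S|) * H(S_v)."""
        n = len(labels)
        by_value = {}
        for row, lab in zip(rows, labels):
            by_value.setdefault(row[attr], []).append(lab)
        remainder = sum(len(sub) / n * entropy(sub) for sub in by_value.values())
        return entropy(labels) - remainder
    ```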

  8. An improved marriage in honey bees optimization algorithm for single objective unconstrained optimization.

    PubMed

    Celik, Yuksel; Ulker, Erkan

    2013-01-01

    Marriage in honey bees optimization (MBO) is a metaheuristic developed by taking inspiration from the mating and fertilization process of honey bees, and belongs to the family of swarm intelligence optimizations. In this study we propose improved marriage in honey bees optimization (IMBO), which adds a Lévy flight for the queen's mating flight and a neighborhood search for improving the worker drones. The performance of the IMBO algorithm is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416

  9. Obstacle avoidance planning of space manipulator end-effector based on improved ant colony algorithm.

    PubMed

    Zhou, Dongsheng; Wang, Lan; Zhang, Qiang

    2016-01-01

    With the development of aerospace engineering, on-orbit servicing has attracted increasing attention, and with it the obstacle avoidance planning of the space manipulator end-effector. The problem is complex because of the obstacles present in the workspace, so avoiding them is essential to planning the end-effector's motion. In this paper, we propose an improved ant colony algorithm to solve this problem, which is effective and simple. First, the models are established, including the kinematic model of the space manipulator and the expression of a valid path in the space environment. Second, we describe the improved ant colony algorithm in detail; it avoids becoming trapped in local optima, and the search strategy, transfer rules, and pheromone update methods are all adjusted. Finally, the improved ant colony algorithm is compared with the classic ant colony algorithm in experiments. The simulation results verify the correctness and effectiveness of the proposed algorithm. PMID:27186473

  10. Protein Sequence Classification with Improved Extreme Learning Machine Algorithms

    PubMed Central

    2014-01-01

    Precisely classifying a protein sequence against a large database of biological protein sequences plays an important role in developing competitive pharmacological products. Conventional methods, which compare the unseen sequence with all identified protein sequences and return the category of the highest-scoring match, are usually time-consuming. It is therefore necessary to build an efficient protein sequence classification system. In this paper, we study the performance of protein sequence classification using single-hidden-layer feedforward networks (SLFNs), with the recent, efficient extreme learning machine (ELM) and its variants as the training algorithms. The optimally pruned ELM (OP-ELM) is first employed for protein sequence classification in this paper. To further enhance performance, an ensemble of SLFNs is constructed, in which multiple SLFNs with the same number of hidden nodes and the same activation function are used as ensemble members, each trained with the same algorithm, and the final category index is derived by majority voting. Two approaches, namely the basic ELM and the OP-ELM, are adopted for the ensemble-based SLFNs. The performance is analyzed and compared with several existing methods using datasets obtained from the Protein Information Resource center. The experimental results show the superiority of the proposed algorithms. PMID:24795876
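
    A basic ELM is compact enough to sketch in full: the hidden layer is random and fixed, and only the output weights are solved, by least squares. The OP-ELM pruning and the majority-voting ensemble are omitted; the class below is illustrative, not the paper's implementation.

    ```python
    import numpy as np

    class ELM:
        """Basic extreme learning machine for an SLFN: random hidden weights,
        output weights from the Moore-Penrose pseudo-inverse."""
        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y_onehot):
            d = X.shape[1]
            self.W = self.rng.normal(size=(d, self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)          # random feature map
            self.beta = np.linalg.pinv(H) @ y_onehot  # least-squares solve
            return self

        def predict(self, X):
            return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
    ```

    An ensemble in the paper's spirit would train several such networks with different seeds and take a majority vote over their predicted class indices.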

  11. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation.

    PubMed

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for solving distributed network programming considering distributed generation is proposed in this work. The IQAFSA is based on quantum computing, which can offer exponential acceleration for heuristic algorithms: quantum bits encode the artificial fish, and quantum rotation gates, together with the preying, following, and variation behaviors of the quantum artificial fish, update the fish in the search for the optimal value. We then apply the proposed algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global-edition artificial fish swarm algorithm (GAFSA) in simulation experiments on some typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has higher convergence speed and better accuracy. Finally, IQAFSA is applied to distributed network problems, and the simulation results for a 33-bus radial distribution network system show that IQAFSA attains the minimum power loss in comparison with BAFSA, GAFSA, and QAFSA.

  12. An improved cooperative adaptive cruise control (CACC) algorithm considering invalid communication

    NASA Astrophysics Data System (ADS)

    Wang, Pangwei; Wang, Yunpeng; Yu, Guizhen; Tang, Tieqiao

    2014-05-01

    For the Cooperative Adaptive Cruise Control (CACC) algorithm, existing research mainly focuses on how inter-vehicle communication can be used to develop the CACC controller and on the influence of communication delays and actuator lags on string stability. However, whether string stability can be guaranteed when inter-vehicle communication is partially invalid has hardly been considered. This paper presents an improved CACC algorithm based on sliding mode control theory and analyses the range of CACC controller parameters that maintains string stability. A dynamic model of vehicle spacing deviation in a platoon is then established, and the string stability conditions under the improved CACC are analyzed. Unlike traditional CACC algorithms, the proposed algorithm can ensure the functionality of the CACC system even if inter-vehicle communication is partially invalid. Finally, this paper sets up a platoon of five vehicles to simulate the improved CACC algorithm in MATLAB/Simulink, and the simulation results demonstrate that the improved CACC algorithm can maintain the string stability of a CACC platoon by adjusting the controller parameters and enlarging the spacing to prevent accidents. With guaranteed string stability, the proposed CACC algorithm can prevent oscillation of vehicle spacing and reduce chain collision accidents under real-world circumstances.

  13. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of blurred images, addressing insufficient spatial resolution, excessive noise, and low image quality. We first introduce the image degradation model, which reveals that the essence of super-resolution reconstruction is a mathematically ill-posed inverse problem. We then analyse the causes of blurring in the optical imaging process - light diffraction and small-angle scattering being the main ones - and propose an image point-spread-function estimation method and an improved projection onto convex sets (POCS) algorithm. By analysing the changes between the time domain and frequency domain during reconstruction, we show the method's effectiveness and point out that the improved POCS algorithm, based on prior knowledge, can restore and approach the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method and the super-resolution algorithm shows that the improved POCS algorithm can suppress noise and enhance image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by the improved POCS algorithm thus proves the approach to be an effective method with broad application prospects - for example, in CT medical image processing and SRCT analysis of microstructure evolution in ceramic sintering.

  14. Incorporating the Last Four Digits of Social Security Numbers Substantially Improves Linking Patient Data from De-identified Hospital Claims Databases

    PubMed Central

    Naessens, James M; Visscher, Sue L; Peterson, Stephanie M; Swanson, Kristi M; Johnson, Matthew G; Rahman, Parvez A; Schindler, Joe; Sonneborn, Mark; Fry, Donald E; Pine, Michael

    2015-01-01

    Objective Assess algorithms for linking patients across de-identified databases without compromising confidentiality. Data Sources/Study Setting Hospital discharges from 11 Mayo Clinic hospitals during January 2008–September 2012 (assessment and validation data). Minnesota death certificates and hospital discharges from 2009 to 2012 for entire state (application data). Study Design Cross-sectional assessment of sensitivity and positive predictive value (PPV) for four linking algorithms tested by identifying readmissions and posthospital mortality on the assessment data with application to statewide data. Data Collection/Extraction Methods De-identified claims included patient gender, birthdate, and zip code. Assessment records were matched with institutional sources containing unique identifiers and the last four digits of Social Security number (SSNL4). Principal Findings Gender, birthdate, and five-digit zip code identified readmissions with a sensitivity of 98.0 percent and a PPV of 97.7 percent and identified postdischarge mortality with 84.4 percent sensitivity and 98.9 percent PPV. Inclusion of SSNL4 produced nearly perfect identification of readmissions and deaths. When applied statewide, regions bordering states with unavailable hospital discharge data had lower rates. Conclusion Addition of SSNL4 to administrative data, accompanied by appropriate data use and data release policies, can enable trusted repositories to link data with nearly perfect accuracy without compromising patient confidentiality. States maintaining centralized de-identified databases should add SSNL4 to data specifications. PMID:26073819
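
    Expressed as code, the linking rule is a deterministic join on the quasi-identifiers, optionally tightened with SSNL4. The sketch below uses pandas with hypothetical column names; the study's actual matching and its validation against institutional gold-standard identifiers are more involved.

    ```python
    import pandas as pd

    def link(claims_a, claims_b, use_ssnl4=True):
        """Deterministic link on gender + birthdate + 5-digit zip, optionally
        tightened with the last four SSN digits (column names hypothetical)."""
        keys = ["gender", "birth_date", "zip5"] + (["ssnl4"] if use_ssnl4 else [])
        return claims_a.merge(claims_b, on=keys, how="inner",
                              suffixes=("_a", "_b"))
    ```

    Sensitivity and PPV of each key set are then measured by comparing the linked pairs against pairs known to be true matches from the identified source data.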

  15. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on the frog leaping and information exchange between frog individuals with a low-probability genetic mutation. Subsequently, to improve convergence speed and enhance exploitation ability, a novel CS model is proposed that exploits the specific advantages of Lévy flights and the frog-leap operator. Furthermore, the greedy transform method is used to repair infeasible solutions and to optimize feasible ones (sketched below). Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, binary differential evolution, and the genetic algorithm. PMID:25404940
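
    The greedy transform is the most self-contained piece: an infeasible bit-vector is repaired by dropping the least value-dense items, then improved by refilling greedily. A minimal sketch follows; the tie-breaking and the paper's exact variant are assumed rather than known.

    ```python
    def greedy_repair(x, weights, values, capacity):
        """Greedy transform for a 0-1 knapsack bit-vector x."""
        order = sorted(range(len(x)), key=lambda i: values[i] / weights[i])
        load = sum(w for i, w in enumerate(weights) if x[i])
        for i in order:                       # repair: drop worst ratio first
            if load <= capacity:
                break
            if x[i]:
                x[i], load = 0, load - weights[i]
        for i in reversed(order):             # optimize: add best ratio first
            if not x[i] and load + weights[i] <= capacity:
                x[i], load = 1, load + weights[i]
        return x
    ```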

  18. Establishing Substantial Equivalence: Proteomics

    NASA Astrophysics Data System (ADS)

    Lovegrove, Alison; Salt, Louise; Shewry, Peter R.

    Wheat is a major crop in world agriculture and is consumed after processing into a range of food products. It is therefore of great importance to determine the consequences (intended and unintended) of transgenesis in wheat and whether genetically modified lines are substantially equivalent to those produced by conventional plant breeding. Proteomic analysis is one of several approaches which can be used to address these questions. Two-dimensional PAGE (2D PAGE) remains the most widely available method for proteomic analysis, but is notoriously difficult to reproduce between laboratories. We therefore describe methods which have been developed as standard operating procedures in our laboratory to ensure the reproducibility of proteomic analyses of wheat using 2D PAGE analysis of grain proteins.

  19. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in affine projection algorithm (APA) by keeping the condition number of an input data matrix small. We present an improved method, and a complexity reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for an input data matrix than both the conventional APA and the APA with the previous data-selective method.

  20. Research on an Improved Medical Image Enhancement Algorithm Based on P-M Model.

    PubMed

    Dong, Beibei; Yang, Jingjing; Hao, Shangfu; Zhang, Xiao

    2015-01-01

    Image enhancement can improve image detail and thereby support image identification. At present, image enhancement is widely applied to medical images, where it can aid doctors' diagnoses. IEABPM (Image Enhancement Algorithm Based on the P-M Model) is one of the most common image enhancement algorithms. However, it may cause the loss of texture details and other features. To solve these problems, this paper proposes IIEABPM (Improved Image Enhancement Algorithm Based on the P-M Model). Simulation demonstrates that IIEABPM can effectively solve the problems of IEABPM and improve image clarity, contrast, and brightness. PMID:26628929
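
    The P-M (Perona-Malik) model underlying both algorithms is anisotropic diffusion: smoothing is strong within homogeneous regions and suppressed across edges. The abstract does not detail IIEABPM's modification, so the sketch below is the classic scheme only; the wrap-around boundaries via np.roll are a simplification (reflective borders would be more careful).

    ```python
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
        """Classic Perona-Malik diffusion with conductance
        g = exp(-(|grad I| / kappa)^2); lam <= 0.25 for stability."""
        u = img.astype(np.float64).copy()
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u   # differences to 4 neighbours
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u += lam * sum(np.exp(-(d / kappa) ** 2) * d
                           for d in (dn, ds, de, dw))
        return u
    ```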

  1. Multiangle dynamic light scattering analysis using an improved recursion algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Li, Wei; Wang, Wanyan; Zeng, Xianjiang; Chen, Junyao; Du, Peng; Yang, Kecheng

    2015-10-01

    Multiangle dynamic light scattering (MDLS) compensates for the low information content of a single-angle dynamic light scattering (DLS) measurement by combining the light intensity autocorrelation functions from a number of measurement angles. Reliable estimation of the particle size distribution (PSD) from MDLS measurements requires accurate determination of the weighting coefficients and an appropriate inversion method. We propose the Recursion Nonnegative Phillips-Twomey (RNNPT) algorithm, which is insensitive to noise in the correlation function data, for PSD reconstruction from MDLS measurements. The procedure includes two main steps: 1) calculation of the weighting coefficients by the recursion method, and 2) PSD estimation through the RNNPT algorithm. Suitable regularization parameters were obtained with the MR-L-curve, since its overall computational cost is considerably less than that of the L-curve for large problems; moreover, the convergence behavior of the MR-L-curve method is in general superior to that of the L-curve method, and its error decreases monotonically. The method was first evaluated on simulated unimodal and multimodal lognormal PSDs, with reconstruction results from a classical regularization method included for comparison. Then, to further study the stability and sensitivity of the proposed method, all examples were analyzed using correlation function data with different levels of noise. The simulation results proved that the RNNPT method yields more accurate PSD determinations from MDLS than the classical regularization method for both unimodal and multimodal PSDs.
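
    A single nonnegative Phillips-Twomey inversion can be posed as an ordinary NNLS problem by stacking the regularization operator under the kernel matrix. The sketch below shows that step for a given regularization parameter; the recursive computation of the angular weighting coefficients and the MR-L-curve selection of `lam` are omitted.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def nn_phillips_twomey(A, b, lam):
        """Solve min ||A f - b||^2 + lam ||L f||^2 subject to f >= 0,
        with L the second-difference smoothing operator."""
        n = A.shape[1]
        L = (np.diag(np.full(n, -2.0))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        A_aug = np.vstack([A, np.sqrt(lam) * L])   # stacked least squares
        b_aug = np.concatenate([b, np.zeros(n)])
        f, _ = nnls(A_aug, b_aug)
        return f
    ```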

  2. Improvements of DRISM calculations: symmetry reduction and hybrid algorithms.

    PubMed

    Woelki, Stefan; Kohler, Hans-Helmut; Krienke, Hartmut; Schmeer, Georg

    2008-02-14

    We present a symmetry-reduced version of the dielectrically-consistent reference interaction site model (DRISM) equation and an adaptation of the Labík-Malijevský-Vonka hybrid algorithm for its numerical solution. This approach is used for the calculation of site-site correlation functions of water, acetone and a water-acetone mixture. Compared to the traditional Picard iteration of non-reduced DRISM theories, savings of more than 90% in computational time are obtained. The resulting site-site pair-correlation functions are in reasonable agreement with computer simulations. PMID:18231692

  3. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.

    PubMed

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition and forms a database of fingerprints through error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select candidate fingerprints and shorten the positioning time. After that, it uses an improved Euclidean distance and an improved joint probability to calculate two intermediate results, and calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Compared with the Euclidean distance based WKNN algorithm and the joint probability algorithm in experiments, the proposed algorithm achieves higher positioning accuracy. PMID:26334278
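
    The two intermediate estimates and their fusion can be sketched briefly. Everything below is an illustrative reading of the abstract: the per-AP standard deviations are assumed to come from the fingerprint database, and the fusion weight is a free parameter, not the paper's value.

    ```python
    import numpy as np

    def wknn_estimate(rss, fingerprints, positions, sigma, k=4):
        """Improved Euclidean distance: RSS deviations normalized by each
        AP's standard deviation, followed by a WKNN position estimate."""
        d = np.sqrt((((rss - fingerprints) / sigma) ** 2).sum(axis=1))
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)
        return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

    def gauss_loglik_estimate(rss, fingerprints, positions, sigma):
        """Joint probability in the log domain (the log shrinks differences
        between probability values), picking the most likely fingerprint."""
        ll = (-0.5 * ((rss - fingerprints) / sigma) ** 2
              - np.log(sigma)).sum(axis=1)
        return positions[np.argmax(ll)]

    def fused_position(p_dist, p_prob, w=0.5):
        # Weighted fusion of the two intermediate estimates (w illustrative).
        return w * p_dist + (1 - w) * p_prob
    ```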

  6. Improved Diagnostic Validity of the ADOS Revised Algorithms: A Replication Study in an Independent Sample

    ERIC Educational Resources Information Center

    Oosterling, Iris; Roos, Sascha; de Bildt, Annelies; Rommelse, Nanda; de Jonge, Maretha; Visser, Janne; Lappenschaar, Martijn; Swinkels, Sophie; van der Gaag, Rutger Jan; Buitelaar, Jan

    2010-01-01

    Recently, Gotham et al. ("2007") proposed revised algorithms for the Autism Diagnostic Observation Schedule (ADOS) with improved diagnostic validity. The aim of the current study was to replicate predictive validity, factor structure, and correlations with age and verbal and nonverbal IQ of the ADOS revised algorithms for Modules 1 and 2 in a…

  7. An improved collimation algorithm for the Large Binocular Telescope using source extractor and an on-the-fly reconstructor

    NASA Astrophysics Data System (ADS)

    Miller, Douglas L.; Rakich, Andrew; Leibold, Torsten

    2012-09-01

    A recent upgrade of the LBTO’s Wavefront Reconstruction algorithm in the Active Optics system has proven to reduce the collimation time by a substantial amount and to provide a much more stable telescope collimation as observing conditions change. The new reconstruction algorithm uses Source Extractor to detect the spots in a Shack-Hartmann wavefront sensor camera image. With information about which Shack spots are detected, a reconstructor matrix is calculated on-the-fly that only includes the illuminated sub-apertures. This drastically improves the wavefront reconstruction for a highly aberrated wavefront when many sub-apertures contain no information. This is generally the situation at the beginning of the night when the collimation of the telescope is set only from models rather than on-sky information and occasionally when a new observational target is acquired. Similarly, the undersized tertiary mirror can cause vignetting of the pupil seen by the Shack-Hartmann wavefront sensor for far off-axis guide stars and again some sub-apertures have no wavefront information. We will present a brief description of the Active Optics system used at the Gregorian focal stations at the LBTO, discuss the original wavefront reconstruction algorithm, describe the new Source Extractor algorithm and compare the performance of these two approaches in several conditions (low signal to noise, highly aberrated wavefront, vignetted pupil, poor seeing).
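
    In the least-squares view, the on-the-fly part reduces to selecting the interaction-matrix rows belonging to illuminated subapertures and pseudo-inverting only those. A minimal numpy sketch under that assumption follows; the LBTO implementation details (row ordering, mode basis) are not given here and are assumed.

    ```python
    import numpy as np

    def on_the_fly_reconstructor(interaction, detected):
        """interaction: (2 * n_subaps, n_modes) slope response of each mode,
        assuming x/y slope rows are interleaved per subaperture;
        detected: boolean mask over subapertures from the spot detector.
        Rows of unilluminated subapertures are dropped before inverting."""
        rows = np.repeat(detected, 2)          # x and y slope per subaperture
        return np.linalg.pinv(interaction[rows])

    # modes = reconstructor @ slopes[rows] then drives the collimation update.
    ```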

  8. Have We Substantially Underestimated the Impact of Improved Sanitation Coverage on Child Health? A Generalized Additive Model Panel Analysis of Global Data on Child Mortality and Malnutrition

    PubMed Central

    Prüss-Ustün, Annette

    2016-01-01

    Background Although widely accepted as being one of the most important public health advances of the past hundred years, the contribution that improving sanitation coverage can make to child health is still unclear, especially since the publication of two large studies of sanitation in India which found no effect on child morbidity. We hypothesise that the value of sanitation does not come directly from use of improved sanitation but from improving community coverage. If this is so we further hypothesise that the relationship between sanitation coverage and child health will be non-linear and that most of any health improvement will accrue as sanitation becomes universal. Methods We report a fixed effects panel analysis of country level data using Generalized Additive Models in R. Outcome variables were under 5 childhood mortality, neonatal mortality, under 5 childhood mortality from diarrhoea, proportion of children under 5 with stunting and with underweight. Predictor variables were % coverage by improved sanitation, improved water source, Gross Domestic Product per capita and Health Expenditure per capita. We also identified three studies reporting incidence of diarrhoea in children under five alongside gains in community coverage in improved sanitation. Findings For each of the five outcome variables, sanitation coverage was independently associated with the outcome but this association was highly non-linear. Improving sanitation coverage was very strongly associated with under 5 years diarrhoea mortality, under 5 years all-cause mortality, and all-cause neonatal mortality. There was a decline as sanitation coverage increased up to about 20% but then no further decline was seen until about 70% (60% for diarrhoea mortality and 80% for neonatal mortality, respectively). The association was less strong for stunting and underweight but a threshold at about 50% coverage was also seen. Three large trials of sanitation on diarrhoea morbidity gave results that were similar

  9. DTL: a language to assist cardiologists in improving classification algorithms.

    PubMed

    Kors, J A; Kamp, D M; Henkemans, D P; van Bemmel, J H

    1991-06-01

    Heuristic classifiers, e.g., for diagnostic classification of the electrocardiogram, can be very complex. The development and refinement of such classifiers is cumbersome and time-consuming. Generally, it requires a computer expert to implement the cardiologist's diagnostic reasoning into computer language. The average cardiologist, however, is not able to verify whether his intentions have been properly realized and whether the classifier performs as he hoped. Even for the initiated, it often remains obscure how a particular result was reached by a complex classification program. An environment is presented which solves these problems. The environment consists of a language, DTL (Decision Tree Language), that allows cardiologists to express their classification algorithms in a way that is familiar to them, and an interpreter and translator for that language. The considerations in the design of DTL are described and the structure and capabilities of the interpreter and translator are discussed.

  10. Improving ecological forecasts of copepod community dynamics using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Record, N. R.; Pershing, A. J.; Runge, J. A.; Mayo, C. A.; Monger, B. C.; Chen, C.

    2010-08-01

    The validity of computational models is always in doubt. Skill assessment and validation are typically done by demonstrating that output is in agreement with empirical data. We test this approach by using a genetic algorithm to parameterize a biological-physical coupled copepod population dynamics computation. The model is applied to Cape Cod Bay, Massachusetts, and is designed for operational forecasting. By running twin experiments on terms in this dynamical system, we demonstrate that a good fit to data does not necessarily imply a valid parameterization. An ensemble of good fits, however, provides information on the accuracy of parameter values, on the functional importance of parameters, and on the ability to forecast accurately with an incorrect set of parameters. Additionally, we demonstrate that the technique is a useful tool for operational forecasting.

  11. Improved Algorithms for Radar-Based Reconstruction of Asteroid Spin States and Shapes

    NASA Astrophysics Data System (ADS)

    Greenberg, Adam; Margot, Jean-Luc

    2015-11-01

    Earth-based radar is a powerful tool for gathering information about bodies in the Solar System. Radar observations can dramatically improve the determination of the physical properties and orbital elements of small bodies (such as asteroids and comets). An important development in the past two decades has been the formulation and implementation of algorithms for asteroid shape reconstruction based on radar data. Because of the nature of radar data, recovery of the spin state depends on knowledge of the shape and vice versa. Even with perfect spin state information, certain peculiarities of radar images (such as the two-to-one or several-to-one mapping between surface elements on the object and pixels within the radar image) make recovery of the physical shape challenging. This is a computationally intensive problem, potentially involving hundreds to thousands of free parameters and millions of data points. The method by which radar-based shape and spin state modelling is currently accomplished, a Sequential Parameter Fit (SPF), is relatively slow, and incapable of determining the spin state of an asteroid from radar images without substantial user intervention. We implemented a global-parameter optimizer and Square Root Information Filter (SRIF) into the asteroid-modelling software shape. This optimizer can find shapes more quickly than the current method and can determine the asteroid's spin state. We ran our new algorithm, along with the existing SPF, through several tests, composed of both real and simulated data. The simulated data were composed of noisy images of procedurally generated shapes, as well as noisy images of existing shape models. The real data included recent observations of both 2000 ET70 and 1566 Icarus. These tests indicate that SRIF is faster and more accurate than SPF. In addition, SRIF can autonomously determine the spin state of an asteroid from a variety of starting conditions, a considerable advance over the existing algorithm. We will

  12. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2007-01-01

    Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and water vapor amount are nearly linear in their relationship. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths available from the Cloud and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.

  13. An improved filter-u least mean square vibration control algorithm for aircraft framework.

    PubMed

    Huang, Quanzhen; Luo, Jun; Gao, Zhiyuan; Zhu, Xiaojin; Li, Hengyu

    2014-09-01

    Active vibration control of aerospace vehicle structures is a very active research area, in which the filter-u least mean square (FULMS) algorithm is one of the key methods. For practical reasons and because of technical limitations, however, extracting a vibration reference signal is always difficult for the FULMS algorithm. To solve this reference signal extraction problem, an improved FULMS vibration control algorithm is proposed in this paper. The reference signal is constructed from the controller structure and the data produced during the algorithm's operation, using a vibration response residual signal extracted directly from the vibrating structure. To test the proposed algorithm, an aircraft frame model is built and an experimental platform is constructed. The simulation and experimental results show that the proposed algorithm is practical, with good vibration suppression performance.
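
    The adaptive core shared by this family of controllers is the filtered-x/filtered-u LMS update, in which the reference is passed through an estimate of the secondary path before entering the gradient step. Below is a minimal filtered-x LMS sketch; the recursive (IIR) controller part that makes it FULMS, and the paper's construction of the reference from the response residual, are omitted because the abstract gives no formulas. All signal names are illustrative.

    ```python
    import numpy as np

    def fxlms(x, d, s, n_taps=32, mu=1e-3):
        """Filtered-x LMS: adapt FIR w so its output, after passing through
        the secondary path s, cancels the disturbance d at the sensor."""
        w = np.zeros(n_taps)
        y = np.zeros(len(x))                      # controller output history
        e = np.zeros(len(x))
        xf = np.convolve(x, s)[:len(x)]           # reference filtered by s
        for n in range(max(n_taps, len(s)), len(x)):
            y[n] = w @ x[n - n_taps + 1:n + 1][::-1]
            y_through_s = s @ y[n - len(s) + 1:n + 1][::-1]
            e[n] = d[n] - y_through_s             # residual at the sensor
            w += mu * e[n] * xf[n - n_taps + 1:n + 1][::-1]
        return w, e
    ```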

  15. Improved Exact Enumerative Algorithms for the Planted (l, d)-Motif Search Problem.

    PubMed

    Tanaka, Shunji

    2014-01-01

    In this paper, efficient exact algorithms are proposed for the planted (l, d)-motif search problem. This problem is to find all motifs of length l that are planted in each input string with at most d mismatches. The "quorum" version of this problem is also treated in this paper, to find motifs planted not in all input strings but in at least q input strings. The proposed algorithms are based on the previous algorithms called qPMSPruneI and qPMS7, which traverse a search tree starting from an l-length substring of an input string. To improve these previous algorithms, several techniques are introduced that reduce the computation time for the traversal. Computational experiments show that the proposed algorithms outperform the previous algorithms.
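
    For orientation, here is a brute-force exact enumeration of the same problem: candidates are the d-neighborhoods of l-mers of one input string (the seeding idea behind qPMSPruneI), kept if they occur within distance d in at least q strings. It is exponentially slower than the tree-pruning algorithms the abstract describes and serves only as a correctness reference.

      from itertools import combinations, product

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def neighbors(kmer, d, alphabet="ACGT"):
          """All strings within Hamming distance d of kmer (blows up fast
          with l and d; fine only for tiny instances)."""
          out = {kmer}
          for pos in combinations(range(len(kmer)), d):
              for letters in product(alphabet, repeat=d):
                  cand = list(kmer)
                  for p, c in zip(pos, letters):
                      cand[p] = c
                  out.add("".join(cand))
          return out

      def planted_motifs(strings, l, d, q=None):
          # Seeding from strings[0] suffices when q == len(strings); the
          # quorum version would seed from every string.
          q = len(strings) if q is None else q
          first = strings[0]
          cands = set()
          for i in range(len(first) - l + 1):
              cands |= neighbors(first[i:i + l], d)
          return {m for m in cands
                  if sum(any(hamming(m, s[j:j + l]) <= d
                             for j in range(len(s) - l + 1))
                         for s in strings) >= q}

      print(planted_motifs(["ACGTTGCA", "CCGTTGAA", "ACGATGCA"], l=5, d=1))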

  16. Study of improved ray tracing parallel algorithm for CGH of 3D objects on GPU

    NASA Astrophysics Data System (ADS)

    Cong, Bin; Jiang, Xiaoyu; Yao, Jun; Zhao, Kai

    2014-11-01

    An improved parallel algorithm for computing holograms of three-dimensional objects is presented. Based on the physical characteristics and mathematical properties of the original ray tracing algorithm for computer-generated holograms (CGH), and using transform approximation and numerical analysis methods, we extract the parts of the ray tracing algorithm that are amenable to parallelization and implement them on a graphics processing unit (GPU). Through proper design of the parallel numerical procedure, the two-dimensional slices of the three-dimensional object are processed in parallel with CUDA. Based on the experiments, an effective method of dealing with the occlusion problem in ray tracing is proposed, as well as a way of generating the holograms of 3D objects using their additive property. Our results indicate that the improved algorithm can effectively shorten the computing time. Depending on the sizes of the spatial object points and hologram pixels, the speed is increased by 20 to 70 times compared with the original ray tracing algorithm.

  17. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying the influence is very complex and difficult due to high computational demands and dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and more computationally efficient compared with other approaches. The Shuffled Complex Evolution method developed at the University of Arizona algorithm (SCE-UA) is selected as an optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.

  18. An improved space-based algorithm for recognizing vehicle models from the side view

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Ding, Youdong; Zhang, Li; Li, Rong; Zhu, Jiang; Xie, Zhifeng

    2015-12-01

    Vehicle model matching from the side view is a problem that meets the practical needs of actual users but has received less attention from researchers. We propose an improved feature-space-based algorithm for this problem. The algorithm combines the advantages of several classic algorithms, effectively combines global and local features, eliminates data redundancy, and improves data separability. Classification is finally completed by a quick and efficient KNN. Real-scene test results show that the proposed method is robust, accurate, insensitive to external factors, adaptable to large angle deviations, and applicable in formal applications.

  19. An improved label propagation algorithm using average node energy in complex networks

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Zhao, Dandan; Li, Lin; Lu, Jianfeng; Han, Jianmin; Wu, Songyang

    2016-10-01

    Detecting overlapping community structure can give significant insight into the structural and functional properties of complex networks. In this Letter, we propose an improved label propagation algorithm (LPA) to uncover overlapping community structure. After mapping nodes into random variables, the algorithm calculates the variance of each node and the proposed average node energy. Nodes whose variances are less than a tunable threshold are regarded as bridge nodes, and changing the given threshold can uncover latent bridge nodes. Simulation results on real-world and artificial networks show that the improved algorithm is efficient in revealing overlapping community structures.
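
    For context, below is a sketch of the plain label propagation baseline on an adjacency dict; the paper's variant additionally scores nodes by variance and average node energy to flag bridge nodes for overlapping communities, which is not reproduced here.

      import random
      from collections import Counter

      def label_propagation(adj, max_iter=100, seed=0):
          """Raghavan-style LPA: adj maps node -> set of neighbors.  Each node
          repeatedly adopts the most common label among its neighbors until
          no label changes."""
          rng = random.Random(seed)
          labels = {v: v for v in adj}
          nodes = list(adj)
          for _ in range(max_iter):
              rng.shuffle(nodes)
              changed = False
              for v in nodes:
                  if not adj[v]:
                      continue
                  counts = Counter(labels[u] for u in adj[v])
                  top = max(counts.values())
                  pick = rng.choice([l for l, c in counts.items() if c == top])
                  if pick != labels[v]:
                      labels[v], changed = pick, True
              if not changed:
                  break
          return labels

      g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
      print(label_propagation(g))   # two triangles bridged by the edge 2-3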

  20. Research on WNN modeling for gold price forecasting based on improved artificial bee colony algorithm.

    PubMed

    Li, Bai

    2014-01-01

    Gold price forecasting has recently been a hot issue in economics. In this work, a wavelet neural network (WNN) combined with a novel artificial bee colony (ABC) algorithm is proposed for gold price forecasting. In the improved algorithm, the conventional roulette selection strategy is discarded, and the convergence statuses in a previous cycle of iteration are fully utilized as feedback messages to manipulate the searching intensity in the subsequent cycle. Experimental results confirm that the new algorithm converges faster than the conventional ABC when tested on classical benchmark functions and is effective in improving the modeling capacity of the WNN for gold price forecasting.

  1. Research on WNN Modeling for Gold Price Forecasting Based on Improved Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Gold price forecasting has recently been a hot issue in economics. In this work, a wavelet neural network (WNN) combined with a novel artificial bee colony (ABC) algorithm is proposed for gold price forecasting. In the improved algorithm, the conventional roulette selection strategy is discarded, and the convergence statuses in a previous cycle of iteration are fully utilized as feedback messages to manipulate the searching intensity in the subsequent cycle. Experimental results confirm that the new algorithm converges faster than the conventional ABC when tested on classical benchmark functions and is effective in improving the modeling capacity of the WNN for gold price forecasting. PMID:24744773
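
    Below is a toy sketch of an ABC variant in the spirit of the two records above: the roulette-wheel onlooker selection is replaced by a rank-based choice, and the feedback-driven control of search intensity is reduced to a simple step-size schedule. Function and parameter choices are illustrative, not the paper's.

      import numpy as np

      def sphere(x):
          return float(np.sum(x ** 2))   # classical benchmark function

      def abc_minimize(f, dim=10, n_food=20, limit=50, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          foods = rng.uniform(-5, 5, (n_food, dim))
          fit = np.array([f(x) for x in foods])
          trials = np.zeros(n_food, dtype=int)

          def try_move(i, scale):
              k = int(rng.integers(n_food))   # random partner (may equal i)
              j = int(rng.integers(dim))
              cand = foods[i].copy()
              cand[j] += scale * rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
              fc = f(cand)
              if fc < fit[i]:
                  foods[i], fit[i], trials[i] = cand, fc, 0
              else:
                  trials[i] += 1

          for t in range(iters):
              scale = 1.0 - 0.5 * t / iters   # crude stand-in for feedback control
              for i in range(n_food):         # employed bees
                  try_move(i, scale)
              for i in np.argsort(fit)[: n_food // 2]:   # onlookers: rank-based
                  try_move(i, scale)
              for i in range(n_food):         # scouts replace exhausted sources
                  if trials[i] > limit:
                      foods[i] = rng.uniform(-5, 5, dim)
                      fit[i], trials[i] = f(foods[i]), 0
          best = int(np.argmin(fit))
          return foods[best], fit[best]

      x_best, f_best = abc_minimize(sphere)
      print(f_best)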

  2. [An improved wavelet threshold algorithm for ECG denoising].

    PubMed

    Liu, Xiuling; Qiao, Lei; Yang, Jianli; Dong, Bin; Wang, Hongrui

    2014-06-01

    Owing to signal characteristics and environmental factors, electrocardiogram (ECG) signals are usually contaminated by noise during acquisition, so eliminating this noise is crucial for intelligent ECG analysis. On the basis of the wavelet transform, the threshold parameters were improved and a more appropriate threshold expression was proposed. The discrete wavelet coefficients were processed using the improved threshold parameters, accurate noise-free wavelet coefficients were obtained through the inverse discrete wavelet transform, and more of the original signal coefficients could thus be preserved. The MIT-BIH arrhythmia database was used to validate the method. Simulation results showed that the improved method achieves a better denoising effect than the traditional ones. PMID:25219225
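
    The sketch below shows the general shape of such a scheme, assuming the PyWavelets package is available; the compromise threshold function is a generic soft/hard blend standing in for the paper's own expression.

      import numpy as np
      import pywt  # assumes the PyWavelets package

      def compromise_threshold(c, thr, alpha=0.5):
          """Soft/hard compromise shrinkage: alpha=1 reproduces soft
          thresholding, alpha=0 hard.  Only illustrates the idea of a
          smoother shrinkage curve, not the paper's expression."""
          return np.where(np.abs(c) > thr,
                          np.sign(c) * (np.abs(c) - alpha * thr), 0.0)

      def denoise_ecg(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))    # universal threshold
          coeffs[1:] = [compromise_threshold(c, thr) for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)

      # e.g. clean = denoise_ecg(noisy_record)  # noisy_record: 1-D numpy array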

  3. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for long term human presence in space thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points will render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach that is developed numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of the efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.

  4. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    NASA Astrophysics Data System (ADS)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
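
    As a sketch of the regression setup, assuming scikit-learn is available; the feature matrix and RMR labels below are random placeholders standing in for the tunnel site-investigation data.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      # Hypothetical data: rows are tunnel stations, columns are measured
      # quantities such as P-wave velocity; y holds RMR values from sampling.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 3))
      y = 40 + 10 * X[:, 0] + rng.normal(scale=2.0, size=60)   # synthetic RMR

      model = make_pipeline(StandardScaler(),
                            SVR(kernel="rbf", C=10.0, epsilon=0.5))
      model.fit(X[:45], y[:45])                 # training stations
      print(model.score(X[45:], y[45:]))        # R^2 on held-out stations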

  5. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  6. Research on aviation unsafe incidents classification with improved TF-IDF algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yanhua; Zhang, Zhiyuan; Huo, Weigang

    2016-05-01

    The text of Aviation Safety Confidential Reports contains a large amount of valuable information. The term frequency-inverse document frequency (TF-IDF) algorithm is commonly used in text analysis, but it does not take into account the sequential relationships of the words in a text or their role in semantic expression. Working from the seven category labels of civil aviation unsafe incidents, and aiming to solve these problems of the TF-IDF algorithm, this paper improves TF-IDF based on a co-occurrence network and establishes feature-word extraction and word sequential relations for classified incidents. An aviation domain lexicon was used to improve the classification accuracy. A feature-word network model was designed for multi-document unsafe incident classification and used in the experiment. Finally, the classification accuracy of the improved algorithm was verified experimentally.
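
    For reference, a plain TF-IDF computation follows, with a hypothetical per-term multiplier `boost` standing in for the co-occurrence-network and lexicon weighting the abstract describes.

      import math
      from collections import Counter

      def tfidf(docs, boost=None):
          """Plain TF-IDF over tokenized reports.  `boost` maps a term to a
          multiplier (e.g. from a co-occurrence network or domain lexicon);
          identity when omitted."""
          boost = boost or {}
          n = len(docs)
          df = Counter(t for d in docs for t in set(d))   # document frequency
          return [{t: (c / len(d)) * math.log(n / df[t]) * boost.get(t, 1.0)
                   for t, c in Counter(d).items()} for d in docs]

      reports = [["engine", "fire", "warning"],
                 ["runway", "incursion"],
                 ["engine", "shutdown", "in", "flight"]]
      print(tfidf(reports, boost={"engine": 1.5}))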

  7. Ballistic target tracking algorithm based on improved particle filtering

    NASA Astrophysics Data System (ADS)

    Ning, Xiao-lei; Chen, Zhan-qi; Li, Xiao-yang

    2015-10-01

    Tracking a ballistic re-entry target is a typical nonlinear filtering problem. In order to track the ballistic re-entry target in a nonlinear, non-Gaussian complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF has better performance in estimating the state and parameters of nonlinear, non-Gaussian systems. Monte Carlo simulation results show that this method can effectively solve the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, so that better particles take part in the estimation. Meanwhile, the CMPF improves state estimation precision and convergence speed compared with the EKF, the UKF, and the ordinary particle filter.

  8. AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations - Part 1: Algorithm description

    NASA Astrophysics Data System (ADS)

    Vanhellemont, Filip; Mateshvili, Nina; Blanot, Laurent; Étienne Robert, Charles; Bingen, Christine; Sofieva, Viktoria; Dalaudier, Francis; Tétard, Cédric; Fussen, Didier; Dekemper, Emmanuel; Kyrölä, Erkki; Laine, Marko; Tamminen, Johanna; Zehner, Claus

    2016-09-01

    The GOMOS instrument on Envisat has successfully demonstrated that a UV-Vis-NIR spaceborne stellar occultation instrument is capable of delivering quality data on the gaseous and particulate composition of Earth's atmosphere. Still, some problems related to data inversion remained to be examined. In the past, it was found that the aerosol extinction profile retrievals in the upper troposphere and stratosphere are of good quality at a reference wavelength of 500 nm but suffer from anomalous, retrieval-related perturbations at other wavelengths. Identification of algorithmic problems and subsequent improvement was therefore necessary. This work has been carried out; the resulting AerGOM Level 2 retrieval algorithm together with the first data version AerGOMv1.0 forms the subject of this paper. The AerGOM algorithm differs from the standard GOMOS IPF processor in a number of important ways: more accurate physical laws have been implemented, all retrieval-related covariances are taken into account, and the aerosol extinction spectral model is strongly improved. Retrieval examples demonstrate that the previously observed profile perturbations have disappeared, and the obtained extinction spectra look in general more consistent. We present a detailed validation study in a companion paper; here, to give a first idea of the data quality, a worst-case comparison at 386 nm shows SAGE II-AerGOM correlation coefficients that are up to 1 order of magnitude larger than the ones obtained with the GOMOS IPFv6.01 data set.

  9. Visual Tracking Based on an Improved Online Multiple Instance Learning Algorithm.

    PubMed

    Wang, Li Jia; Zhang, Hua

    2016-01-01

    An improved online multiple instance learning (IMIL) algorithm for visual tracking is proposed. In the IMIL algorithm, the importance of each instance's contribution to a bag probability is weighted by its probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers. In the feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes.

  11. Visual Tracking Based on an Improved Online Multiple Instance Learning Algorithm

    PubMed Central

    Wang, Li Jia; Zhang, Hua

    2016-01-01

    An improved online multiple instance learning (IMIL) algorithm for visual tracking is proposed. In the IMIL algorithm, the importance of each instance's contribution to a bag probability is weighted by its probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers. In the feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes. PMID:26843855

  12. An improved recommendation algorithm via weakening indirect linkage effect

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Qiu, Tian; Shen, Xiao-Quan

    2015-07-01

    We propose an indirect-link-weakened mass diffusion method (IMD) that accounts for indirect linkage and the source-object heterogeneity effect in the mass diffusion (MD) recommendation method. Experimental results on the MovieLens, Netflix, and RYM datasets show that the IMD method greatly improves both recommendation accuracy and diversity compared with a heterogeneity-weakened MD method (HMD), which considers only source-object heterogeneity. Moreover, the recommendation accuracy for cold objects is also improved more by the IMD than by the HMD method. This suggests that eliminating the redundancy induced by indirect linkages can have a prominent effect on recommendation efficiency in the MD method. Project supported by the National Natural Science Foundation of China (Grant No. 11175079) and the Young Scientist Training Project of Jiangxi Province, China (Grant No. 20133BCB23017).
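
    For context, here is a sketch of the standard mass diffusion (ProbS) scoring that the IMD method refines; the indirect-linkage weakening and heterogeneity corrections of the abstract are not reproduced.

      import numpy as np

      def mass_diffusion_scores(A, user):
          """Standard two-step mass diffusion on a user-object bipartite
          adjacency matrix A (users x objects): unit resource on the target
          user's objects flows objects -> users -> objects."""
          ku = A.sum(axis=1)                    # user degrees
          ko = A.sum(axis=0)                    # object degrees
          f0 = A[user].astype(float)            # unit resource on collected objects
          to_users = A @ np.where(ko > 0, f0 / np.maximum(ko, 1), 0.0)
          scores = A.T @ np.where(ku > 0, to_users / np.maximum(ku, 1), 0.0)
          scores[A[user] > 0] = -np.inf         # never re-recommend collected items
          return scores

      A = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 1, 1]])
      print(mass_diffusion_scores(A, user=0))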

  13. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or overestimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in

  15. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    Using a GA to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, is difficult: the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which affect the efficiency and quality of the algorithm. This paper presents an improved MKGA, which includes three operators, preprocessing, bid insertion and exchange recombination, and uses a monkey-king elite preservation strategy. Experimental results show that the improved MKGA outperforms the SGA in population size and computation. Problems that the traditional branch and bound algorithm can hardly solve, the improved MKGA can solve with better results.

  16. Performance Improvement of Algorithms Based on the Synthetic Aperture Focusing Technique

    NASA Astrophysics Data System (ADS)

    Acevedo, P.; Sotomayor, A.; Moreno, E.

    An analysis to improve the performance of the ultrasonic synthetic aperture focusing technique (SAFT) on a PC platform is presented in this paper. Several useful processing techniques, such as apodization, dynamic focusing, envelope detection and image composition, are used to improve the quality of the image. Finally, results of the algorithm implemented using MATLAB and C/C++, together with the respective images, are presented.

  17. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  18. A strictly improving Phase 1 algorithm using least-squares subproblems

    SciTech Connect

    Leichner, S.A.; Dantzig, G.B.; Davis, J.W.

    1992-04-01

    Although the simplex method's performance in solving linear programming problems is usually quite good, it does not guarantee strict improvement at each iteration on degenerate problems. Instead of trying to recognize and avoid degenerate steps in the simplex method, we have developed a new Phase I algorithm that is completely impervious to degeneracy, with strict improvement attained at each iteration. It is also noted that the new Phase I algorithm is closely related to a number of existing algorithms. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase I algorithm were almost 3.5 times faster than the simplex method; on some problems, it was over 10 times faster.

  20. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    PubMed Central

    Rao, Yunqing; Qi, Dezhong; Li, Jinling

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed within the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for a better solution, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence and resolve local convergence issues, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem. PMID:24489491

  1. An Improved Proportionate Normalized Least-Mean-Square Algorithm for Broadband Multipath Channel Estimation

    PubMed Central

    2014-01-01

    To make use of the sparsity property of broadband multipath wireless communication channels, we mathematically propose an lp-norm-constrained proportionate normalized least-mean-square (LP-PNLMS) sparse channel estimation algorithm. A general lp-norm is weighted by the gain matrix and is incorporated into the cost function of the proportionate normalized least-mean-square (PNLMS) algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence speed and steady-state performance of the inactive taps are significantly improved. Our simulation results demonstrate that the proposed algorithm can effectively improve the estimation performance of the PNLMS-based algorithm for sparse channel estimation applications. PMID:24782663

  2. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation.

    PubMed

    Li, Yingsong; Hamamura, Masanori

    2014-01-01

    To make use of the sparsity property of broadband multipath wireless communication channels, we mathematically propose an lp-norm-constrained proportionate normalized least-mean-square (LP-PNLMS) sparse channel estimation algorithm. A general lp-norm is weighted by the gain matrix and is incorporated into the cost function of the proportionate normalized least-mean-square (PNLMS) algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence speed and steady-state performance of the inactive taps are significantly improved. Our simulation results demonstrate that the proposed algorithm can effectively improve the estimation performance of the PNLMS-based algorithm for sparse channel estimation applications.
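
    A sketch of this family of updates, covering both records above: standard PNLMS proportionate gains plus a generic lp-norm zero attractor weighted by those gains. All constants are illustrative, not the paper's.

      import numpy as np

      def lp_pnlms(x, d, L=16, mu=0.5, rho=0.01, kappa=1e-4, p=0.5,
                   eps=1e-3, delta=1e-2):
          """lp-norm-constrained PNLMS sketch: proportionate gains follow the
          usual PNLMS recipe; the last line is a generic lp-norm zero
          attractor pulling inactive taps toward zero."""
          w = np.zeros(L)
          for n in range(L - 1, len(x)):
              u = x[n - L + 1:n + 1][::-1]       # regressor, newest sample first
              e = d[n] - w @ u
              gamma = np.maximum(rho * max(delta, np.max(np.abs(w))), np.abs(w))
              g = gamma / gamma.mean()           # proportionate step-size gains
              w += mu * g * u * e / (u @ (g * u) + delta)
              w -= kappa * g * np.sign(w) / (eps + np.abs(w)) ** (1 - p)
          return w

      rng = np.random.default_rng(0)
      h = np.zeros(16); h[[2, 9]] = [1.0, -0.5]   # sparse multipath channel
      x = rng.standard_normal(4000)
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
      print(np.round(lp_pnlms(x, d), 2))          # should recover h approximately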

  3. Improved algorithm for analysis of DNA sequences using multiresolution transformation.

    PubMed

    Inbamalar, T M; Sivakumar, R

    2015-01-01

    Bioinformatics and genomic signal processing use computational techniques to solve various biological problems. They aim to study the information associated with genetic materials such as deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and proteins. Fast and precise identification of the protein coding regions in a DNA sequence is one of the most important tasks in this analysis. Existing digital signal processing (DSP) methods provide less accurate, computationally complex solutions with greater background noise. Hence, improvements in accuracy and computational complexity and a reduction in background noise are essential for identifying the protein coding regions in DNA sequences. In this paper, a new DSP-based method is introduced to detect the protein coding regions in DNA sequences. The DNA sequences are converted into numeric sequences using the electron-ion interaction potential (EIIP) representation, a discrete wavelet transform is applied, and the absolute value of the energy is computed and thresholded. Tests were conducted using the databases available at the National Center for Biotechnology Information (NCBI) site, and a comparative analysis confirms the efficiency of the proposed system. PMID:26000337
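
    A compact sketch of the pipeline the abstract describes (EIIP mapping, discrete wavelet transform, coefficient energy), assuming PyWavelets is available; the wavelet and decomposition level are illustrative choices, not the paper's.

      import numpy as np
      import pywt  # assumes the PyWavelets package

      # Standard EIIP values for the four nucleotides.
      EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

      def coding_region_energy(seq, wavelet="db2", level=3):
          """Map a DNA string to its EIIP numeric sequence, apply a discrete
          wavelet transform, and return the absolute energy of the detail
          coefficients; thresholding these profiles flags candidate protein
          coding regions."""
          x = np.array([EIIP[b] for b in seq.upper() if b in EIIP])
          coeffs = pywt.wavedec(x, wavelet, level=level)
          return [np.abs(c) ** 2 for c in coeffs[1:]]

      seq = "ATGGCGTACGCTAGCTAGGCTAACGGTACGATCGATCGTACGATGCATGCATGCGA"
      print([float(e.sum()) for e in coding_region_energy(seq)])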

  4. Improved inversion algorithms for near-surface characterization

    NASA Astrophysics Data System (ADS)

    Vaziri Astaneh, Ali; Guddati, Murthy N.

    2016-08-01

    Near-surface geophysical imaging is often performed by generating surface waves, and estimating the subsurface properties through inversion, that is, iteratively matching experimentally observed dispersion curves with predicted curves from a layered half-space model of the subsurface. Key to the effectiveness of inversion is the efficiency and accuracy of computing the dispersion curves and their derivatives. This paper presents improved methodologies for both dispersion curve and derivative computation. First, it is shown that the dispersion curves can be computed more efficiently by combining an unconventional complex-length finite element method (CFEM) to model the finite depth layers, with perfectly matched discrete layers (PMDL) to model the unbounded half-space. Second, based on analytical derivatives for theoretical dispersion curves, an approximate derivative is derived for the so-called effective dispersion curve for realistic geophysical surface response data. The new derivative computation has a smoothing effect on the computation of derivatives, in comparison with traditional finite difference (FD) approach, and results in faster convergence. In addition, while the computational cost of FD differentiation is proportional to the number of model parameters, the new differentiation formula has a computational cost that is almost independent of the number of model parameters. At the end, as confirmed by synthetic and real-life imaging examples, the combination of CFEM + PMDL for dispersion calculation and the new differentiation formula results in more accurate estimates of the subsurface characteristics than the traditional methods, at a small fraction of computational effort.

  6. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem

    PubMed Central

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial feasible solution very quickly, and the GA is used to improve it. Experimental software was developed, and a large number of computations on Solomon's benchmark were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA. PMID:26167171

  7. Improvement of wavelet threshold filtered back-projection image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-11-01

    Image reconstruction techniques have been applied in many fields, including medical imaging such as X-ray computed tomography (X-CT), positron emission tomography (PET) and magnetic resonance imaging (MRI), but the reconstruction results are still unsatisfactory because the original projection data are inevitably polluted by noise during image reconstruction. Although traditional filters, e.g., the Shepp-Logan (SL) and Ram-Lak (RL) filters, can remove some noise, the Gibbs oscillation phenomenon appears and the artifacts introduced by back-projection are not greatly reduced. Wavelet threshold denoising can overcome the interference of noise with image reconstruction. Since the traditional soft and hard threshold functions have inherent defects, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that the improved algorithm largely eliminates the discontinuity and large distortion of the traditional threshold functions as well as the Gibbs oscillation. Finally, the utility of the improved algorithm was verified by comparing two evaluation criteria, mean square error (MSE) and peak signal-to-noise ratio (PSNR), across the four algorithms, and the optimal dual threshold values of the improved wavelet threshold function were obtained.

  8. Improved artificial bee colony algorithm for wavefront sensor-less system in free space optical communication

    NASA Astrophysics Data System (ADS)

    Niu, Chaojun; Han, Xiang'e.

    2015-10-01

    Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness, and few parameters to set. In this paper, we simulate the application of the improved ABC to correct the distorted wavefront and demonstrate its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation under different turbulence conditions before and after correction. The results show that the ABC algorithm corrects much faster than the DE algorithm and has better correction ability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.

  9. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile learning (M-learning) gives many learners the advantages of both traditional learning and e-learning, and web-based mobile-learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases, but rule explosion is a serious concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between the attributes of learners, assessments, the solution strategies adopted by learners, and so on. This paper therefore focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.

  10. Improved Fault Classification in Series Compensated Transmission Line: Comparative Evaluation of Chebyshev Neural Network Training Algorithms.

    PubMed

    Vyas, Bhargav Y; Das, Biswarup; Maheshwari, Rudra Prakash

    2016-08-01

    This paper presents the Chebyshev neural network (ChNN) as an improved artificial intelligence technique for power system protection studies and examines the performances of two ChNN learning algorithms for fault classification of series compensated transmission line. The training algorithms are least-square Levenberg-Marquardt (LSLM) and recursive least-square algorithm with forgetting factor (RLSFF). The performances of these algorithms are assessed based on their generalization capability in relating the fault current parameters with an event of fault in the transmission line. The proposed algorithm is fast in response as it utilizes postfault samples of three phase currents measured at the relaying end corresponding to half-cycle duration only. After being trained with only a small part of the generated fault data, the algorithms have been tested over a large number of fault cases with wide variation of system and fault parameters. Based on the studies carried out in this paper, it has been found that although the RLSFF algorithm is faster for training the ChNN in the fault classification application for series compensated transmission lines, the LSLM algorithm has the best accuracy in testing. The results prove that the proposed ChNN-based method is accurate, fast, easy to design, and immune to the level of compensations. Thus, it is suitable for digital relaying applications.
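
    A sketch of the ChNN structure follows: a fixed Chebyshev functional expansion feeding one trainable linear layer. A plain least-squares solve stands in for the LSLM and RLSFF training schemes the paper compares, and the features and labels are synthetic placeholders for the half-cycle fault-current measurements.

      import numpy as np

      def cheb_expand(x, order=4):
          """Chebyshev functional expansion per feature, inputs scaled to
          [-1, 1]: T0 = 1, T1 = x, T_{k+1} = 2 x T_k - T_{k-1}."""
          feats = [np.ones((x.shape[0], 1))]
          t_prev, t = np.ones_like(x), x
          feats.append(t)
          for _ in range(order - 1):
              t_prev, t = t, 2.0 * x * t - t_prev
              feats.append(t)
          return np.hstack(feats)

      rng = np.random.default_rng(0)
      X = rng.uniform(-1.0, 1.0, (200, 3))   # e.g. three-phase current features
      y = np.sin(X).sum(axis=1)              # placeholder target
      Phi = cheb_expand(X)
      W, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear layer in one solve
      print(float(np.abs(Phi @ W - y).max()))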

  11. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    SciTech Connect

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-11-15

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.

  12. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    PubMed Central

    Sidky, Emil Y.; Pan, Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging. PMID:19994501

  13. Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong

    2015-12-01

    In distributed optical fiber sensing systems based on Brillouin scattering, strain and temperature are the main measured parameters, obtained by analyzing the Brillouin center frequency shift. A novel algorithm combining the cuckoo search (CS) algorithm with an improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features under different linear weight ratios, linewidth combinations and SNRs. Moreover, a BOTDR temperature measuring system based on electron optical frequency shift was set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show a good linear relationship between the Brillouin center frequency shift and temperature changes.

  14. Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm.

    PubMed

    Ffrench, P A; Zeidler, J H; Ku, W H

    1997-01-01

    Two-dimensional (2-D) adaptive filtering is a technique that can be applied to many image processing applications. This paper will focus on the development of an improved 2-D adaptive lattice algorithm (2-D AL) and its application to the removal of correlated clutter to enhance the detectability of small objects in images. The two improvements proposed here are increased flexibility in the calculation of the reflection coefficients and a 2-D method to update the correlations used in the 2-D AL algorithm. The 2-D AL algorithm is shown to predict correlated clutter in image data and the resulting filter is compared with an ideal Wiener-Hopf filter. The results of the clutter removal will be compared to previously published ones for a 2-D least mean square (LMS) algorithm. 2-D AL is better able to predict spatially varying clutter than the 2-D LMS algorithm, since it converges faster to new image properties. Examples of these improvements are shown for a spatially varying 2-D sinusoid in white noise and simulated clouds. The 2-D LMS and 2-D AL algorithms are also shown to enhance a mammogram image for the detection of small microcalcifications and stellate lesions.

  15. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2006-01-01

    Retrieving surface longwave radiation from space has been a difficult task since the surface downwelling longwave radiation (SDLW) are integrations from radiation emitted by the entire atmosphere, while those emitted from the upper atmosphere are absorbed before reaching the surface. It is particularly problematic when thick clouds are present since thick clouds will virtually block all the longwave radiation from above, while satellites observe atmosphere emissions mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas that were covered with ice clouds. An improved version of the algorithm was developed that prevents the large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths measured from the Cloud and the Earth's Radiant Energy System (CERES) satellites to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.

  16. Microcellular propagation prediction model based on an improved ray tracing algorithm.

    PubMed

    Liu, Z-Y; Guo, L-X; Fan, T-Q

    2013-11-01

    Two-dimensional (2D) and two-and-one-half-dimensional ray tracing (RT) algorithms based on the uniform theory of diffraction and geometrical optics are widely used for channel prediction in urban microcellular environments because of their high efficiency and reliable prediction accuracy. In this study, an improved RT algorithm based on the "orientation face set" concept and on an improved 2D polar sweep algorithm is proposed. The goal is to accelerate point-to-point prediction, thereby making RT prediction attractive and convenient. In addition, threshold control of each ray path and the handling of visible grid points for reflection and diffraction sources are adopted, resulting in improved efficiency of coverage prediction over large areas. Measured results and computed predictions are also compared for urban scenarios. The results indicate that the proposed prediction model works well and is a useful tool for microcellular communication applications.

  17. Improved Quantum Artificial Fish Algorithm Application to Distributed Network Considering Distributed Generation

    PubMed Central

    Du, Tingsong; Hu, Yang; Ke, Xianting

    2015-01-01

    An improved quantum artificial fish swarm algorithm (IQAFSA) for distributed network programming considering distributed generation is proposed in this work. The IQAFSA draws on quantum computing, which offers exponential acceleration for heuristic algorithms: quantum bits are used to encode the artificial fish, and a quantum revolving gate, together with the preying, following and variation behaviors of the quantum artificial fish, updates the fish in the search for the optimal value. We then apply the proposed new algorithm, the quantum artificial fish swarm algorithm (QAFSA), the basic artificial fish swarm algorithm (BAFSA), and the global edition artificial fish swarm algorithm (GAFSA) to simulation experiments on typical test functions. The simulation results demonstrate that the proposed algorithm can escape from local extrema effectively and has a higher convergence speed and better accuracy. Finally, applying the IQAFSA to distributed network problems, the simulation results for a 33-bus radial distribution network system show that the IQAFSA obtains the minimum power loss in comparison with the BAFSA, GAFSA, and QAFSA. PMID:26447713

  19. Weighted sequence motifs as an improved seeding step in microRNA target prediction algorithms.

    PubMed

    Saetrom, Ola; Snøve, Ola; Saetrom, Pål

    2005-07-01

    We present a new microRNA target prediction algorithm called TargetBoost, and show that the algorithm is stable and identifies more true targets than do existing algorithms. TargetBoost uses machine learning on a set of validated microRNA targets in lower organisms to create weighted sequence motifs that capture the binding characteristics between microRNAs and their targets. Existing algorithms require candidates to have (1) near-perfect complementarity between microRNAs' 5' end and their targets; (2) relatively high thermodynamic duplex stability; (3) multiple target sites in the target's 3' UTR; and (4) evolutionary conservation of the target between species. Most algorithms use one of the first two requirements in a seeding step, and use the other three as filters to improve the method's specificity. The initial seeding step determines an algorithm's sensitivity and also influences its specificity. As all algorithms may add filters to increase the specificity, we propose that methods should be compared before such filtering. We show that TargetBoost's weighted sequence motif approach compares favorably with both the duplex-stability and the sequence-complementarity seeding steps. (TargetBoost is available as a Web tool from http://www.interagon.com/demo/.)
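
    The weighted-sequence-motif idea can be illustrated with a position weight matrix: aligned validated sites yield per-position log-odds weights, and a candidate 3' UTR is scored by its best-matching window. This generic sketch (Python) is not TargetBoost's learned motif model; the example sites are invented.

    ```python
    import numpy as np

    BASES = {"A": 0, "C": 1, "G": 2, "U": 3}

    def pwm_from_sites(sites, pseudo=0.5):
        """Build a log-odds position weight matrix from aligned target sites."""
        length = len(sites[0])
        counts = np.full((4, length), pseudo)
        for s in sites:
            for j, b in enumerate(s):
                counts[BASES[b], j] += 1
        freqs = counts / counts.sum(axis=0)
        return np.log2(freqs / 0.25)          # log-odds vs. uniform background

    def best_site_score(pwm, seq):
        """Slide the motif along a 3' UTR; return the best-scoring window."""
        length = pwm.shape[1]
        scores = [sum(pwm[BASES[seq[i + j]], j] for j in range(length))
                  for i in range(len(seq) - length + 1)]
        return max(scores)

    pwm = pwm_from_sites(["ACGUACGU", "ACGAACGU", "ACGUACGA"])
    print(best_site_score(pwm, "GGGACGUACGUGGG"))
    ```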

  20. ULTRASONIC IMAGING USING A FLEXIBLE ARRAY: IMPROVEMENTS TO THE MAXIMUM CONTRAST AUTOFOCUS ALGORITHM

    SciTech Connect

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2009-03-03

    In previous work, we presented the maximum contrast autofocus algorithm for estimating unknown imaging parameters, e.g., for imaging through complicated surfaces using a flexible ultrasonic array. This paper details recent improvements to the algorithm. The algorithm operates by maximizing an image contrast metric with respect to the imaging parameters. For a flexible array, the relative positions of the array elements are parameterized using a cubic spline function, and the spline control points are estimated by iterative maximization of the image contrast via simulated annealing. The resultant spline gives an estimate of the array geometry and the profile of the surface it has conformed to, allowing the generation of a well-focused image. A pre-processing step is introduced to obtain an initial estimate of the array geometry, reducing the time taken for the algorithm to converge. Experimental results are demonstrated using a flexible array prototype.
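
    A minimal sketch (Python) of contrast maximization by simulated annealing over spline control points; the `reconstruct` function is a stand-in for delay-and-sum imaging with the array geometry implied by the control points, and all tuning constants are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def image_contrast(img):
        """Simple contrast metric: normalized variance of pixel intensities."""
        return img.var() / (img.mean() ** 2 + 1e-12)

    def reconstruct(control_points):
        """Placeholder for beamforming with the implied array geometry: the
        image sharpens as the control points approach the true shape."""
        true = np.array([0.0, 0.1, -0.05, 0.02])
        err = np.sum((control_points - true) ** 2)
        img = rng.normal(size=(64, 64)) * (0.1 + err) + np.exp(-err)
        return np.abs(img)

    def anneal(x0, steps=2000, t0=1.0, cooling=0.995, sigma=0.02):
        """Maximize image contrast w.r.t. the spline control points."""
        x, fx, t = x0.copy(), image_contrast(reconstruct(x0)), t0
        for _ in range(steps):
            cand = x + rng.normal(scale=sigma, size=x.shape)
            fc = image_contrast(reconstruct(cand))
            if fc > fx or rng.random() < np.exp((fc - fx) / t):
                x, fx = cand, fc
            t *= cooling
        return x, fx

    best_points, best_contrast = anneal(np.zeros(4))
    ```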

  1. Does videothoracoscopy improve clinical outcomes when implemented as part of a pleural empyema treatment algorithm?

    PubMed Central

    Terra, Ricardo Mingarini; Waisberg, Daniel Reis; de Almeida, José Luiz Jesus; Devido, Marcela Santana; Pêgo-Fernandes, Paulo Manuel; Jatene, Fabio Biscegli

    2012-01-01

    OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This was a quality-improvement study: a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. Under the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. Under the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into an old algorithm (n = 93) and a new algorithm (n = 113) group and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p<0.01). The patients in the new algorithm group were older (41±1 vs. 46.3±16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0(0-3) vs. 2(0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation, even though more severely ill patients underwent videothoracoscopic surgery. PMID:22760892

  2. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    PubMed Central

    He, Baochun; Ma, Zhiyuan; Zong, Mao; Zhou, Xiangrong; Fujita, Hiroshi

    2013-01-01

    A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, following the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, including comparisons with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method. PMID:24066017
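
    For reference, one synchronous GrowCut update can be sketched as follows (Python): seed pixels start with strength 1 and their user label, all other pixels with strength 0; wrap-around at the image borders is ignored for brevity.

    ```python
    import numpy as np

    def growcut_step(labels, strength, image):
        """One synchronous GrowCut update on a 2D image.

        Each pixel is attacked by its 4-neighbors; a neighbor conquers the
        pixel if its strength, damped by intensity similarity, exceeds the
        pixel's current strength."""
        g_max = image.max() - image.min() + 1e-12
        new_labels, new_strength = labels.copy(), strength.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb_lab = np.roll(labels, (dy, dx), axis=(0, 1))
            nb_str = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(image, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(image - nb_img) / g_max   # similarity damping
            attack = g * nb_str
            win = attack > new_strength
            new_labels[win], new_strength[win] = nb_lab[win], attack[win]
        return new_labels, new_strength
    ```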

  3. Combined image-processing algorithms for improved optical coherence tomography of prostate nerves

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Weldon, Thomas P.; Fiddy, Michael A.; Fried, Nathaniel M.

    2010-07-01

    Cavernous nerves course along the surface of the prostate gland and are responsible for erectile function. These nerves are at risk of injury during surgical removal of a cancerous prostate gland. In this work, a combination of segmentation, denoising, and edge detection algorithms is applied to time-domain optical coherence tomography (OCT) images of the rat prostate to improve identification of the cavernous nerves. First, OCT images of the prostate are segmented to differentiate the cavernous nerves from the prostate gland. Then, a locally adaptive denoising algorithm using a dual-tree complex wavelet transform is applied to reduce speckle noise. Finally, edge detection is used to provide deeper imaging of the prostate gland. Combined application of these three algorithms results in improved signal-to-noise ratio, imaging depth, and automatic identification of the cavernous nerves, which may be of direct benefit for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

  4. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
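
    A minimal sketch (Python) of how the two constraints can act as projection steps inside an iterative reconstruction; the `forward`/`adjoint` operators stand in for the CUP shearing/integration model, the mask threshold `tau` is an assumption, and the solver shown is a generic projected gradient, not the published algorithm.

    ```python
    import numpy as np

    def sic_project(x, spatial_mask, intensity_cap):
        """Enforce the space and intensity constraints on the estimate."""
        return np.clip(x * spatial_mask, 0.0, intensity_cap)

    def reconstruct(meas, forward, adjoint, ccd_image, tau, n_iter=50, step=0.5):
        """Projected-gradient reconstruction with SIC constraints (sketch)."""
        mask = (ccd_image > tau).astype(float)  # zone of action from the CCD image
        cap = ccd_image.max()                   # illustrative intensity ceiling
        x = adjoint(meas)
        for _ in range(n_iter):
            x = x - step * adjoint(forward(x) - meas)  # grad of 0.5*||Ax - y||^2
            x = sic_project(x, mask[..., None], cap)   # broadcast over time axis
        return x
    ```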

  5. A coincidence detection algorithm for improving detection rates in coulomb explosion imaging

    NASA Astrophysics Data System (ADS)

    Wales, Benji; Bisson, Eric; Karimi, Reza; Kieffer, Jean-Claude; Légaré, Francois; Sanderson, Joseph

    2012-03-01

    A scheme for determining true coincidence events in Coulomb explosion imaging experiments is reported and compared with a simple design used in recently published work. The new scheme is able to identify any possible coincidence without a priori knowledge of the fragmentation mechanism. Using experimental data from the triatomic molecule OCS, the advanced algorithm is shown to improve acquisition yield by a factor of between 2 and 6, depending on the amount of a priori knowledge included in the simple design search. Monte Carlo simulations for both systems suggest that the detection yield can be improved by increasing the number of molecules in the laser focus from the standard ≤1 up to 3.5 and employing the advanced algorithm. Count rates for larger molecules would be preferentially improved, with the rate for six-atom molecules improved by a factor of up to five.
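
    The core of such a scheme can be sketched as a momentum-conservation filter (Python): for each laser shot, any grouping of the assumed fragment count whose summed momentum is near zero is kept as a candidate coincidence. The tolerance `p_tol` is an illustrative placeholder that must be tuned to the apparatus.

    ```python
    import numpy as np
    from itertools import combinations

    def find_coincidences(momenta, n_frag=3, p_tol=1e-23):
        """Candidate n-fold coincidences from one laser shot.

        `momenta` is an (N, 3) array of fragment momenta (computed from
        time-of-flight and detector position). A true Coulomb explosion of
        a single molecule conserves momentum, so a correct grouping has a
        near-zero vector sum; no knowledge of the fragmentation channel is
        needed beyond the fragment count. `p_tol` is in SI units.
        """
        hits = []
        for idx in combinations(range(len(momenta)), n_frag):
            if np.linalg.norm(momenta[list(idx)].sum(axis=0)) < p_tol:
                hits.append(idx)
        return hits
    ```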

  6. An improved bi-level algorithm for partitioning dynamic grid hierarchies.

    SciTech Connect

    Deiterding, Ralf (California Institute of Technology, Pasadena, CA); Johansson, Henrik (Uppsala University, Uppsala, Sweden); Steensland, Johan; Ray, Jaideep

    2006-05-01

    Structured adaptive mesh refinement (SAMR) methods are widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications, but achieving good scalability for large-scale applications is non-trivial: performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. The original bi-level partitioning approach in Nature+Fable is insufficient, however: for realistic applications it regards frequently occurring bi-levels as "impossible" and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementary approach, and switches automatically between the two by means of a new, customized classification method. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to handle realistic applications and to generate smaller imbalances and a similar box count, but more communication, compared with the native, domain-based partitioner in the SAMR framework AMROC.

  7. Integrating soil information into canopy sensor algorithms for improved corn nitrogen rate recommendation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop canopy sensors have proven effective at determining site-specific nitrogen (N) needs, but several Midwest states use different algorithms to predict site-specific N need. The objective of this research was to determine if soil information can be used to improve the Missouri canopy sensor algori...

  8. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoqian; Guo, Qinghua; Su, Yanjun; Xue, Baolin

    2016-07-01

    Filtering of light detection and ranging (LiDAR) data into the ground and non-ground points is a fundamental step in processing raw airborne LiDAR data. This paper proposes an improved progressive triangulated irregular network (TIN) densification (IPTD) filtering algorithm that can cope with a variety of forested landscapes, particularly both topographically and environmentally complex regions. The IPTD filtering algorithm consists of three steps: (1) acquiring potential ground seed points using the morphological method; (2) obtaining accurate ground seed points; and (3) building a TIN-based model and iteratively densifying TIN. The IPTD filtering algorithm was tested in 15 forested sites with various terrains (i.e., elevation and slope) and vegetation conditions (i.e., canopy cover and tree height), and was compared with seven other commonly used filtering algorithms (including morphology-based, slope-based, and interpolation-based filtering algorithms). Results show that the IPTD achieves the highest filtering accuracy for nine of the 15 sites. In general, it outperforms the other filtering algorithms, yielding the lowest average total error of 3.15% and the highest average kappa coefficient of 89.53%.
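
    Step (3) can be sketched as follows (Python), with only the distance-to-facet test retained; the angle criterion of the full algorithm is omitted and the threshold `d_max` is an assumed value.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def densify_once(ground, candidates, d_max=0.3):
        """One progressive-TIN-densification pass (simplified).

        `ground` (seeds) and `candidates` are (N, 3) xyz arrays; a candidate
        joins the ground set if its vertical distance to the TIN facet
        below it is under `d_max` (m)."""
        tri = Delaunay(ground[:, :2])             # needs >= 3 non-collinear seeds
        simp = tri.find_simplex(candidates[:, :2])
        keep = np.zeros(len(candidates), dtype=bool)
        for i, s in enumerate(simp):
            if s < 0:
                continue                           # candidate outside the TIN
            pa, pb, pc = ground[tri.simplices[s]]
            nrm = np.cross(pb - pa, pc - pa)       # facet normal
            if abs(nrm[2]) < 1e-12:
                continue                           # degenerate (vertical) facet
            dx, dy = candidates[i, 0] - pa[0], candidates[i, 1] - pa[1]
            z = pa[2] - (nrm[0] * dx + nrm[1] * dy) / nrm[2]
            keep[i] = abs(candidates[i, 2] - z) < d_max
        return np.vstack([ground, candidates[keep]]), candidates[~keep]
    ```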

  9. Combining spatial and spectral information to improve crop/weed discrimination algorithms

    NASA Astrophysics Data System (ADS)

    Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.

    2012-01-01

    Reducing herbicide spraying is an important key to environmentally and economically improved weed management. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information is able to detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both algorithms: the classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a database of virtual images generated with the SimAField model was combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method in this paper, and shows an important enhancement, from 86% weed detection to more than 95%.
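
    A minimal sketch (Python, using scikit-learn) of the bootstrapping step: labels produced by the spatial (row-detection) algorithm on reliable inter-row pixels train a spectral classifier, which is then applied everywhere. The choice of linear discriminant analysis is illustrative, not taken from the paper.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_spectral_from_spatial(spectra, spatial_labels, confident):
        """Train a spectral crop/weed classifier from spatial-algorithm output.

        `spectra` is (n_pixels, n_bands); `spatial_labels` holds 0 = crop,
        1 = weed from row detection; `confident` flags inter-row pixels
        where the spatial classification is reliable."""
        clf = LinearDiscriminantAnalysis()
        clf.fit(spectra[confident], spatial_labels[confident])
        return clf

    # The trained classifier then covers the whole field, intra-row included:
    # labels_full = clf.predict(spectra)
    ```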

  10. Improving the Interpretability of Classification Rules Discovered by an Ant Colony Algorithm: Extended Results.

    PubMed

    Otero, Fernando E B; Freitas, Alex A

    2016-01-01

    Most ant colony optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-Miner[Formula: see text] algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-Miner[Formula: see text] algorithm to discover a set of rules (unordered rules). The main motivations for this work are to improve the interpretability of individual rules by discovering a set of rules and to evaluate the impact on the predictive accuracy of the algorithm. We also propose a new measure of the interpretability of the discovered rules, to mitigate the fact that the commonly used model size measure ignores how the rules are used to make a class prediction. Comparisons with state-of-the-art rule induction algorithms, support vector machines, and the cAnt-Miner[Formula: see text] producing ordered rules are also presented.

  11. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts.

    PubMed

    Jiang, Shouyong; Yang, Shengxiang

    2016-02-01

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics; for example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) divides the whole optimization procedure into two phases: based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on existing benchmark problems and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.

  12. An Improved Performance Frequency Estimation Algorithm for Passive Wireless SAW Resonant Sensors

    PubMed Central

    Liu, Boquan; Zhang, Chenrui; Ji, Xiaojun; Chen, Jing; Han, Tao

    2014-01-01

    Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. However, the traditional SAW resonant sensor system relies on the Fourier transform (FT), whose limited resolution restricts the measurement accuracy. In order to improve the accuracy and resolution of the measurement, a singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of an undamped and a damped single-tone sinusoid at the same frequency. Compared with the FT algorithm, the accuracy and resolution of the method are validated in a self-developed wireless SAW resonant sensor system. PMID:25429410
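
    The following generic SVD-subspace (ESPRIT-style) estimator (Python) illustrates how a signal-subspace method sidesteps the FFT bin-resolution limit for such a mixed damped/undamped tone; it is not the paper's exact formulation, and the window length and model order are assumptions.

    ```python
    import numpy as np

    def subspace_freqs(x, fs, order=4, win=40):
        """ESPRIT-style frequency estimation from an SVD signal subspace.

        `order` is the assumed number of complex poles: a real undamped
        plus a damped tone at one frequency gives two conjugate pole
        pairs, hence 4. Resolution is not tied to the FFT bin width."""
        n = len(x)
        hankel = np.array([x[i:i + win] for i in range(n - win + 1)])
        _, _, vh = np.linalg.svd(hankel, full_matrices=False)
        us = vh[:order].T.conj()                   # signal subspace basis
        phi, *_ = np.linalg.lstsq(us[:-1], us[1:], rcond=None)
        poles = np.linalg.eigvals(phi)             # z = exp((-d + j*w)/fs)
        return np.angle(poles) * fs / (2.0 * np.pi)

    fs = 1.0e6
    t = np.arange(2000) / fs
    x = np.cos(2 * np.pi * 123456.7 * t) \
        + np.exp(-3.0e3 * t) * np.cos(2 * np.pi * 123456.7 * t)
    print(subspace_freqs(x, fs))   # two pole pairs near +/-123.46 kHz
    ```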

  13. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile ad hoc networks play an increasingly important part in disaster relief, military battlefields, and scientific exploration; however, routing difficulties remain prominent because of their inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It elaborates the optimal-route calculation used by the protocol and the transmission mechanism for communication packets. By adding QoS constraints to the cuckoo search, the routes found conform to specified bandwidth and time-delay requirements, and a balance is obtained among computation cost, bandwidth, and time delay. NS2 simulations of the protocol in three scenarios validate the feasibility and effectiveness of CSAODV. The results show that CSAODV adapts better to changes in network topology than AODV, effectively improving the packet delivery fraction, reducing network transmission delay, reducing the extra burden that control information places on the network, and improving routing efficiency.

  14. An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.

    2000-01-01

    A neutron transport algorithm including both elastic and nonelastic particle interaction processes, for use in space radiation protection with arbitrary shield materials, is developed. The algorithm is based upon multiple energy grouping and analysis of the straight-ahead Boltzmann equation using a mean value theorem for integrals, and is coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. The neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum-water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results near the front of the water target, where diffusion out of the front surface is important. The effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.

  15. Operationality Improvement Control of Electric Power Assisted Wheelchair by Fuzzy Algorithm Considering Posture Angle

    NASA Astrophysics Data System (ADS)

    Murakami, Hiroki; Seki, Hirokazu; Minakata, Hideaki; Tadakuma, Susumu

    This paper describes a novel operationality-improvement control for electric power assisted wheelchairs. The electric power assisted wheelchair, which augments the driving force with electric motors, is expected to be widely used as a mobility support system for elderly and disabled people; however, performance in straight and circular driving must be further improved because the two wheels are driven independently. This paper proposes a novel operationality-improvement control based on a fuzzy algorithm to realize stable driving on straight and circular roads. The appropriate assist torque for the right and left wheels is determined by the fuzzy algorithm from the posture angular velocity and posture angle of the wheelchair, the proportion of human input torque between the wheels, and the total human torque of the right and left wheels. Experiments on practical roads show the effectiveness of the proposed control system.

  16. Improved location algorithm for multiple intrusions in distributed Sagnac fiber sensing system.

    PubMed

    Wang, He; Sun, Qizhen; Li, Xiaolei; Wo, Jianghai; Shum, Perry Ping; Liu, Deming

    2014-04-01

    An improved algorithm named "twice-FFT" for multi-point intrusion location in a distributed Sagnac sensing system is proposed and demonstrated. To find the null frequencies more accurately and efficiently, a second FFT is applied to the frequency spectrum of the phase signal caused by an intrusion. After Gaussian fitting and searching for the peak response frequency in the twice-FFT curve, the intrusion position can be calculated stably. The twice-FFT algorithm also solves the problem of multi-point intrusion location. In experiments with the twice-FFT algorithm, a location error of less than 100 m for a single intrusion is achieved at any position along the total length of 41 km, and the ability to locate two or three simultaneous intrusions is also demonstrated. PMID:24718133
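
    A minimal sketch (Python) of the twice-FFT idea: the magnitude spectrum of the intrusion phase signal contains regularly spaced nulls, and a second FFT converts that periodicity into a single peak. Mapping the recovered null spacing to a position depends on the loop geometry, and the Gaussian peak refinement of the paper is omitted here.

    ```python
    import numpy as np

    def null_spacing_twice_fft(phase_signal, fs):
        """Recover the null-frequency spacing of the intrusion phase signal.

        Bin q of the second FFT corresponds to a spectral periodicity of
        (m * df) / q Hz, where m is the spectrum length and df = fs / n
        is the frequency resolution of the first FFT."""
        n = len(phase_signal)
        spec = np.abs(np.fft.rfft(phase_signal))
        spec -= spec.mean()                  # suppress the DC term
        cep = np.abs(np.fft.rfft(spec))
        q = np.argmax(cep[1:]) + 1           # dominant periodicity, skipping DC
        m = len(spec)
        return m * (fs / n) / q              # null spacing in Hz
    ```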

  17. Improved particle swarm optimization algorithm for android medical care IOT using modified parameters.

    PubMed

    Sung, Wen-Tsai; Chiang, Yen-Chun

    2012-12-01

    This study examines a wireless sensor network with real-time remote identification using the Android study of things (HCIOT) platform in community healthcare. An improved particle swarm optimization (PSO) method is proposed to efficiently enhance the precision of physiological multi-sensor data fusion measurement in the Internet of Things (IOT) system. The improved PSO (IPSO) includes inertia weight factor design and shrinkage factor adjustment to improve the data fusion performance of the PSO algorithm. The Android platform is employed to build multi-physiological signal processing and timely medical care analysis. Wireless sensor network signal transmission and Internet links allow community or family members to have timely medical care network services.
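
    A single IPSO velocity/position update with the two named modifications, a linearly decreasing inertia weight and a constriction (shrinkage) factor, can be sketched as follows (Python); the parameter values are common defaults, not necessarily those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ipso_step(x, v, pbest, gbest, it, max_it,
                  w_max=0.9, w_min=0.4, c1=2.05, c2=2.05):
        """One IPSO update: inertia-weight schedule plus Clerc's
        constriction factor applied to the velocity."""
        w = w_max - (w_max - w_min) * it / max_it        # inertia weight design
        phi = c1 + c2                                    # must exceed 4
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        return x + v, v
    ```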

  19. Experimental verification of an interpolation algorithm for improved estimates of animal position

    NASA Astrophysics Data System (ADS)

    Schell, Chad; Jaffe, Jules S.

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.
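
    A minimal sketch (Python) of the underlying estimator: for each candidate angle, the modeled beam responses are optimally scaled to the measured amplitudes, and the angle with the smallest least-squares residual is kept, the scale doubling as a target-strength estimate. The `beam_patterns` model is an assumed callable, not the sonar's actual response.

    ```python
    import numpy as np

    def interpolate_position(measured, beam_patterns, angles):
        """Estimate target angle by least-squares fit to modeled beam responses.

        `measured` is the vector of beam amplitudes for one ping;
        `beam_patterns(theta)` returns the modeled relative response of
        every beam to a target at angle theta."""
        best_angle, best_res, best_gain = None, np.inf, 0.0
        for theta in angles:
            b = beam_patterns(theta)               # modeled response vector
            s = measured @ b / (b @ b)             # optimal target strength
            res = np.sum((measured - s * b) ** 2)  # least-squares residual
            if res < best_res:
                best_angle, best_res, best_gain = theta, res, s
        return best_angle, best_gain
    ```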

  20. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
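
    A minimal sketch (Python) of single-atom matching pursuit with an attenuation-coefficient stopping rule, the first two ingredients described above; the dictionary is assumed to hold unit-norm atoms (e.g., Fourier plus modulation atoms for gear signals), and the tolerance value is illustrative.

    ```python
    import numpy as np

    def sa_matching_pursuit(signal, dictionary, atten_tol=0.02, max_iter=200):
        """Single-atom matching pursuit with an attenuation-based stop.

        `dictionary` is an (n_atoms, n_samples) array of unit-norm atoms.
        Each iteration subtracts the single best-matching atom; iteration
        stops when the relative drop of residual energy (the attenuation
        coefficient) falls below `atten_tol`, so noise is not pursued."""
        residual = signal.astype(float).copy()
        recon = np.zeros_like(residual)
        prev_e = residual @ residual
        for _ in range(max_iter):
            corr = dictionary @ residual
            k = np.argmax(np.abs(corr))            # single best atom
            recon += corr[k] * dictionary[k]
            residual -= corr[k] * dictionary[k]
            e = residual @ residual
            if (prev_e - e) / prev_e < atten_tol:  # attenuation coefficient test
                break
            prev_e = e
        return recon, residual
    ```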

  3. Improvements to a five-phase ABS algorithm for experimental validation

    NASA Astrophysics Data System (ADS)

    Gerard, Mathieu; Pasillas-Lépine, William; de Vries, Edwin; Verhaegen, Michel

    2012-10-01

    The anti-lock braking system (ABS) is the most important active safety system for passenger cars. Unfortunately, the literature is not very precise about its description, stability, and performance. This research improves a five-phase hybrid ABS control algorithm based on wheel deceleration [W. Pasillas-Lépine, Hybrid modeling and limit cycle analysis for a class of five-phase anti-lock brake algorithms, Veh. Syst. Dyn. 44 (2006), pp. 173-188] and validates it on a tyre-in-the-loop laboratory facility. Five relevant effects are modelled so that the simulation matches reality: oscillations in measurements, wheel acceleration reconstruction, brake pressure dynamics, brake efficiency changes, and tyre relaxation. The time delays in measurement and actuation have been identified as the main obstacle to making the initial algorithm work in practice. Three methods are proposed in order to deal with these delays. It is verified that the ABS limit cycles encircle the optimal braking point, without assuming any tyre parameter to be known a priori. The ABS algorithm is compared with the commercial algorithm developed by Bosch.

  4. Reasons why current speech-enhancement algorithms do not improve speech intelligibility and suggested solutions

    PubMed Central

    Loizou, Philipos C.; Kim, Gibak

    2011-01-01

    Existing speech enhancement algorithms can improve speech quality but not speech intelligibility, and the reasons for that are unclear. In the present paper, we present a theoretical framework that can be used to analyze potential factors that can influence the intelligibility of processed speech. More specifically, this framework focuses on the fine-grain analysis of the distortions introduced by speech enhancement algorithms. It is hypothesized that if these distortions are properly controlled, then large gains in intelligibility can be achieved. To test this hypothesis, intelligibility tests were conducted with human listeners in which we presented processed speech with controlled speech distortions. The aim of these tests was to assess the perceptual effect on speech intelligibility of the various distortions that can be introduced by speech enhancement algorithms. Results with three different enhancement algorithms indicated that certain distortions are more detrimental to speech intelligibility than others. When these distortions were properly controlled, however, large gains in intelligibility were obtained by human listeners, even with spectral-subtractive algorithms, which are known to degrade speech quality and intelligibility. PMID:21909285

  5. Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas

    NASA Technical Reports Server (NTRS)

    Smith, Barbara M.; Bennett, Sean

    1992-01-01

    A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.

  6. Improvements in the DOAS Based Total Ozone Column Algorithm for OMI

    NASA Astrophysics Data System (ADS)

    de Haan, J. F.; Veefkind, J. P.; Valks, P.; Brinksma, E.; Levelt, P. F.

    2003-12-01

    The Ozone Monitoring Instrument is a nadir-pointing imaging spectrometer with a wide swath (about 2600 km) that records reflected radiance spectra in the wavelength range 270-500 nm with a spectral resolution of about 0.5 nm. The high spatial resolution (13x24 km at nadir) makes it possible to obtain information on tropospheric ozone, as problems due to (partly) cloudy pixels are reduced compared with instruments like GOME and SCIAMACHY. OMI is scheduled for launch early 2004 as part of the NASA EOS-Aura mission. To obtain accurate total ozone columns from OMI spectra, an improved DOAS-based algorithm is used, compared with the algorithm used in the operational processor for GOME and SCIAMACHY data. The following improvements have been implemented. First, the DOAS fit window is changed from 325-335 nm to 331.6-336.6 nm, making the retrieved columns less sensitive to the temperature and ozone profile. To further improve the accuracy, we adopt a so-called empirical procedure to calculate air mass factors, in which the DOAS method is applied to simulated spectra; these air mass factors are exact if the atmospheric model used in the calculations corresponds to the actual atmosphere. The third improvement is that the air mass factor is regarded as a function of the slant column density that results from the DOAS fit of a measured spectrum. These improvements have been implemented in the operational algorithm for OMI. We are currently investigating further improvements by handling rotational Raman scattering in a more advanced manner. In this poster presentation the improvements are discussed and some results based on GOME spectra are presented.

  7. Validation and Improvement of CERES Surface Radiation Budget Algorithms: Extension of Dusty and Cloudy Scenes

    NASA Technical Reports Server (NTRS)

    Ramanathan, V.; Inamdar, Anand K.

    2005-01-01

    Our main task was to validate and improve the generation of surface longwave fluxes from the CERES TOA window-channel flux measurements. We completed this task successfully for clear-sky fluxes in the presence of aerosols, including dust, during the first year of the project. The algorithm we developed for CERES was remarkably successful for clear-sky fluxes, and we have no further tasks that need to be performed past the requested termination date of December 31, 2004. We found that the information contained in the TOA fluxes was not sufficient to improve upon the current CERES algorithm for cloudy-sky fluxes. Given this development, and given our success with clear-sky fluxes, we do not see any reason to continue our validation work beyond what we have completed. Specific details are given.

  8. An improved real-time endovascular guidewire position simulation using shortest path algorithm.

    PubMed

    Qiu, Jianpeng; Qu, Zhiyi; Qiu, Haiquan; Zhang, Xiaomin

    2016-09-01

    In this study, we propose a new graph-theoretical method to simulate guidewire paths inside the carotid artery. The minimum-energy guidewire path can be obtained by applying a shortest path algorithm, such as Dijkstra's algorithm for graphs, based on the principle of minimal total energy. Experiments on three phantoms were validated against previous results, revealing that for the first and second phantoms the simulated and real guidewires overlap completely; for the third phantom, 95% overlaps completely and the remaining 5% closely coincides. The results demonstrate that our method achieves 87% and 80% improvements for the first and third phantoms under the same conditions, respectively, and a 91% improvement for the second phantom under conditions with reduced graph-construction complexity.
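
    The minimum-energy path search itself is plain Dijkstra; a self-contained sketch (Python) follows. Building the graph, i.e., discretizing the vessel lumen and assigning bending/deformation energies to the edges, is the modeling step and is assumed here.

    ```python
    import heapq

    def dijkstra(adj, src, dst):
        """Shortest (minimum-energy) path on a weighted graph.

        `adj` maps node -> list of (neighbor, energy) edges; in the
        guidewire setting, nodes would discretize the vessel lumen and
        edge weights the deformation energy."""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue                       # stale queue entry
            for v, w in adj.get(u, ()):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], dst
        while node in prev or node == src:
            path.append(node)
            if node == src:
                break
            node = prev[node]
        return path[::-1], dist.get(dst, float("inf"))
    ```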

  10. Improved sampling and validation of frozen Gaussian approximation with surface hopping algorithm for nonadiabatic dynamics

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Zhou, Zhennan

    2016-09-01

    In the spirit of the fewest switches surface hopping, the frozen Gaussian approximation with surface hopping (FGA-SH) method samples a path integral representation of the non-adiabatic dynamics in the semiclassical regime. An improved sampling scheme is developed in this work for FGA-SH based on birth and death branching processes. The algorithm is validated for the standard test examples of non-adiabatic dynamics.

  11. An improved atmospheric correction algorithm for applying MERIS data to very turbid inland waters

    NASA Astrophysics Data System (ADS)

    Jaelani, Lalu Muhamad; Matsushita, Bunkei; Yang, Wei; Fukushima, Takehiko

    2015-07-01

    Atmospheric correction (AC) is a necessary process when quantitatively monitoring water quality parameters from satellite data. However, it remains a major challenge to carry out AC for turbid coastal and inland waters. In this study, we propose an improved AC algorithm named N-GWI (new standard Gordon and Wang's algorithm with an iterative process and a bio-optical model) for applying MERIS data to very turbid inland waters (i.e., waters with a water-leaving reflectance at 864.8 nm between 0.001 and 0.01). The N-GWI algorithm incorporates three improvements to avoid certain invalid assumptions that limit the applicability of existing algorithms in very turbid inland waters. First, the N-GWI uses a fixed aerosol type (coastal aerosol) but permits the aerosol concentration to vary at each pixel; this omits a complicated aerosol model selection that would otherwise rely only on satellite data. Second, it shifts the reference band from 670 nm to 754 nm so that the assumption that the total absorption coefficient at the reference band can be replaced by that of pure water holds, thereby avoiding incorrect estimation of the total absorption coefficient at the reference band in very turbid waters. Third, the N-GWI generates a semi-analytical relationship instead of an empirical one for estimating the spectral slope of particle backscattering. Our analysis showed that the N-GWI improved the accuracy of atmospheric correction in two very turbid Asian lakes (Lake Kasumigaura, Japan, and Lake Dianchi, China), with a normalized mean absolute error (NMAE) of less than 22% for wavelengths longer than 620 nm. However, the N-GWI exhibited poor performance in moderately turbid waters (NMAE values larger than 83.6% in four American coastal waters). The applicability of the N-GWI, including both its advantages and limitations, is discussed.

  12. Improved estimates of boreal Fire Radiative Energy using high temporal resolution data and a modified active fire detection algorithm

    NASA Astrophysics Data System (ADS)

    Barrett, Kirsten

    2016-04-01

    Reliable estimates of the biomass combusted during wildfires can be obtained from satellite observations of fire radiative power (FRP). Total fire radiative energy (FRE) is typically estimated by integrating instantaneous measurements of FRP at the times of orbital satellite overpass or geostationary observation. Remotely sensed FRP products from orbital satellites are usually global in extent, requiring several thresholding and filtering operations to reduce the number of false fire detections; some filters required for a global product may not be appropriate for fire detection in the boreal forest, resulting in errors of omission and increased data processing times. We evaluate the effect of a boreal-specific active fire detection algorithm on estimates of FRP/FRE. Boreal fires are more likely to escape detection owing to lower-intensity smouldering combustion and sub-canopy fires, so improvements in boreal fire detection could substantially reduce the uncertainty of emissions from biomass combustion in the region. High-temporal-resolution data from geostationary satellites have led to improvements in FRE estimation in tropical and temperate forests, but such a perspective is not possible for high-latitude ecosystems given the equatorial orbit of geostationary observation. The increased density of overpasses at high latitudes from polar-orbiting satellites, however, may provide adequate temporal sampling for estimating FRE.
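
    The FRE estimate itself is a time integration of sampled FRP; a minimal sketch (Python) with invented, illustrative numbers follows, showing why denser high-latitude overpass sampling directly tightens the estimate.

    ```python
    import numpy as np

    def fre_from_frp(times_s, frp_mw):
        """Trapezoidal integration of FRP samples (MW) over time (s) -> FRE (MJ)."""
        return np.trapz(frp_mw, times_s)

    # Four hypothetical polar-orbiter overpasses across six hours:
    times = np.array([0.0, 5400.0, 12600.0, 21600.0])
    frp = np.array([12.0, 30.0, 22.0, 8.0])      # MW, illustrative values
    print(fre_from_frp(times, frp))              # ~4.4e5 MJ
    ```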

  13. An improved scheduling algorithm for 3D cluster rendering with platform LSF

    NASA Astrophysics Data System (ADS)

    Xu, Wenli; Zhu, Yi; Zhang, Liping

    2013-10-01

    High-quality photorealistic rendering of 3D models needs powerful computing systems, and highly efficient management of cluster resources is developing quickly to exploit them. This paper addresses how to improve the efficiency of 3D rendering tasks in a cluster. It focuses on a dynamic feedback load balance (DFLB) algorithm, the working principles of the Load Sharing Facility (LSF), and the optimization of an external scheduler plug-in. The algorithm is applied in the match and allocation phases of a scheduling cycle: candidate hosts are prepared in sequence in the match phase, and the scheduler makes an allocation decision for each job in the allocation phase. With the dynamic mechanism, a new weight is assigned to each candidate host for re-ranking, and the most suitable host is dispatched for rendering. A new plug-in module implementing this algorithm has been designed and integrated into the internal scheduler. Simulation experiments demonstrate that the improved plug-in module is superior to the default one for rendering tasks: it helps avoid load imbalance among servers, increases system throughput, and improves system utilization.
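
    A minimal sketch (Python) of the re-weighting idea: the latest load feedback re-ranks the candidate hosts produced by the match phase before allocation. The metric names and weights are illustrative, not the plug-in's actual scoring.

    ```python
    def rank_hosts(hosts, load, mem_free, w_load=0.7, w_mem=0.3):
        """Re-rank candidate render hosts using dynamic load feedback.

        `hosts` come from LSF's match phase; `load` and `mem_free` map
        host names to metrics normalized to [0, 1]."""
        score = {h: w_load * (1.0 - load[h]) + w_mem * mem_free[h] for h in hosts}
        return sorted(hosts, key=score.get, reverse=True)   # best host first

    # The allocation phase then dispatches each job to rank_hosts(...)[0].
    ```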

  14. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce the signal distortion caused by pseudo-Gibbs artificial fluctuations. The algorithm was applied to a segmented gamma scanning system for large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method generated significantly enhanced performance in the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise, and that a smoothed spectrum is appropriate for straightforward automated quantitative analysis.
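
    A minimal sketch (Python, using the pywt package) of shift-invariant de-noising by cycle spinning: every circular shift is denoised and the unshifted results are averaged, which suppresses the pseudo-Gibbs oscillations a single-shift transform leaves near peaks. The soft-threshold rule shown is a simple illustrative choice, not the paper's improved threshold function.

    ```python
    import numpy as np
    import pywt

    def shift_invariant_denoise(spectrum, wavelet="sym8", level=5,
                                n_shifts=16, thr_scale=3.0):
        """Cycle-spinning wavelet de-noising of a 1D gamma spectrum."""
        n = len(spectrum)
        out = np.zeros(n, dtype=float)
        for s in range(n_shifts):
            x = np.roll(spectrum, s)
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
            thr = thr_scale * sigma
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            rec = pywt.waverec(coeffs, wavelet)[:n]
            out += np.roll(rec, -s)
        return out / n_shifts
    ```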

  15. Improving warm rain estimation in the PERSIANN-CCS satellite-based retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.

    2015-12-01

    The Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) is one of the algorithms being integrated into IMERG (Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement mission, GPM) to estimate precipitation at 0.04° lat-long scale every 30 minutes. PERSIANN-CCS extracts features from infrared cloud-image segmentation at three brightness temperature thresholds (220 K, 235 K, and 253 K). Warm raining clouds with brightness temperatures higher than 253 K are not covered by the current algorithm. To improve detection of warm rain, in this study the cloud-image segmentation threshold is extended from 253 K to 300 K; several other temperature thresholds between 253 K and 300 K were also examined. The K-means clustering algorithm was used to classify the extracted image features into 400 groups, and the rainfall rates for each cluster were retrained using radar rainfall measurements. Case studies were carried out over CONUS to investigate the ability to improve detection of warm rainfall through segmentation and image classification using warmer temperature thresholds. Satellite imagery and radar rainfall data from the summer and winter seasons of 2012 were used as training data. Overall, the results show that rain detection from warm clouds is significantly improved; however, false rain detection also increases as the segmentation temperature threshold is raised.
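
    A minimal sketch (Python, using scikit-learn) of the classify-then-retrain step: image features from the (extended) segmentation are clustered into 400 groups and each cluster is assigned a radar-trained rain rate. The feature construction itself is assumed, and the variable names are hypothetical.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def train_cluster_rain_rates(ir_features, radar_rain, n_clusters=400, seed=0):
        """Cluster IR cloud-patch features and assign radar-trained rain rates.

        `ir_features` is (n_patches, n_features) from the segmentation
        extended to 253-300 K clouds; `radar_rain` holds matched radar
        rain rates for the same patches."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        labels = km.fit_predict(ir_features)
        rates = np.array([radar_rain[labels == k].mean()
                          for k in range(n_clusters)])
        return km, rates

    # Retrieval time: rate = rates[km.predict(new_features)]
    ```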

  16. GRISOTTO: A greedy approach to improve combinatorial algorithms for motif discovery with prior knowledge

    PubMed Central

    2011-01-01

    Background: Position-specific priors (PSPs) have been used with success to boost EM and Gibbs-sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been used in the context of combinatorial algorithms; moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results: We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions: The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505

  17. Improved CICA algorithm used for single channel compound fault diagnosis of rolling bearings

    NASA Astrophysics Data System (ADS)

    Chen, Guohua; Qie, Longfei; Zhang, Aijun; Han, Jin

    2016-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals with conventional methods such as FFT-based envelope detection, the wavelet transform, or empirical mode decomposition used individually. In order to realize single-channel compound fault diagnosis of bearings and improve the diagnosis accuracy, an improved CICA algorithm named constrained independent component analysis based on the energy method (E-CICA) is proposed. With this approach, the single-channel vibration signal is first decomposed into several wavelet coefficients by the discrete wavelet transform (DWT) to obtain multichannel signals. The envelope signals of the reconstructed wavelet coefficients are then selected as the input of the E-CICA algorithm, which fulfills the requirement that the number of sensors be greater than or equal to that of the source signals and makes the data more suitable for the CICA strategy. The frequency energy ratio (ER) of each wavelet-reconstructed signal to the total energy of the given synchronous signal is calculated, and the synchronous signal with the maximum ER value is set as the reference signal. In this way, the reference signal contains a priori knowledge of the fault source signal, and the influence of the initial phase angle and duty ratio of the reference signal on extraction accuracy, which affects the traditional CICA algorithm, is avoided. Experimental results show that the E-CICA algorithm can effectively separate the outer-race defect and the roller defect from a single-channel compound fault and fulfills the needs of compound fault diagnosis of rolling bearings; its running time is 0.12% of that of the traditional CICA algorithm, and its extraction accuracy is 1.4 times that of CICA. The proposed research provides a new method to separate single-channel compound fault signals.
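
    A minimal sketch (Python) of how the multichannel input and the ER-based reference could be built from one channel; the wavelet, decomposition level, sampling rate, and fault-frequency band are illustrative assumptions, and the CICA separation step itself is omitted.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import hilbert

    def build_ecica_inputs(x, wavelet="db4", level=4,
                           band=(100.0, 120.0), fs=12000.0):
        """Build envelope channels from a single signal and pick a reference.

        DWT turns one channel into several reconstructed sub-signals,
        their envelopes form the multichannel input, and the sub-signal
        with the largest energy ratio (ER) inside `band` is the reference."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        channels = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            rec = pywt.waverec(kept, wavelet)[:len(x)]
            channels.append(np.abs(hilbert(rec)))        # envelope signal
        channels = np.array(channels)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        spec = np.abs(np.fft.rfft(channels, axis=1)) ** 2
        er = spec[:, mask].sum(axis=1) / spec.sum(axis=1)
        return channels, channels[np.argmax(er)]         # inputs and reference
    ```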

  18. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443, and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This

  19. An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Song; Nakamura, Satoshi

    An efficient way to develop large-scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected by a greedy search algorithm from a large mother text corpus. With the inclusion of more and more phonetic contextual effects, the number of different phonetic contextual units increases dramatically, making the search a non-trivial issue. In order to improve the search efficiency, we previously proposed a so-called least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluates these algorithms in order to show their different characteristics. The experimental results showed that the least-to-most-ordered methods successfully achieved smaller objective sets in significantly less computation time compared with the conventional ones. This algorithm has already been applied to the development of a number of speech corpora, including a large-scale phonetically rich Chinese speech corpus, ATRPTH, which played an important role in developing our multi-language translation system.
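
    The baseline greedy selection that the least-to-most ordering refines can be sketched as a set-cover loop. The data layout below (sentence-to-unit sets) is an assumption for illustration.

      def greedy_minimum_set(corpus_units, target_units):
          # corpus_units: {sentence_id: set of phonetic contextual units}.
          # Repeatedly pick the sentence covering the most uncovered units.
          uncovered = set(target_units)
          selected = []
          while uncovered:
              best = max(corpus_units,
                         key=lambda s: len(corpus_units[s] & uncovered))
              gain = corpus_units[best] & uncovered
              if not gain:
                  break            # remaining units do not occur in the corpus
              selected.append(best)
              uncovered -= gain
          return selected

      units = {"s1": {"a+b", "b+c"},
               "s2": {"b+c", "c+d", "d+e"},
               "s3": {"a+b"}}
      print(greedy_minimum_set(units, {"a+b", "b+c", "c+d", "d+e"}))
      # -> ['s2', 's1']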

  20. An adaptive displacement estimation algorithm for improved reconstruction of thermal strain.

    PubMed

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M; Tillman, Bryan; Leers, Steven A; Kim, Kang

    2015-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas' estimator and time-shift estimators such as normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas' estimator is limited by phase-wrapping and NXcorr performs poorly when the SNR is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas' estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas' estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas' estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI showed that the adaptive displacement estimator was less biased than either Loupas' estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7 to 350% and the spatial accuracy by 1.2 to 23.0% (P < 0.001). An ex vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction.
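
    A minimal sketch of the switching logic follows, assuming a simplified baseband phase-shift estimator in place of the full Loupas estimator and an integer-lag NXcorr peak; the thresholds follow the values reported above, and the wavelength is expressed in samples so both estimators return displacements in the same unit.

      import numpy as np
      from scipy.signal import hilbert

      def nxcorr_shift(pre, post):
          # Integer-lag normalized cross-correlation peak (subsample
          # refinement omitted for brevity).
          a = (pre - pre.mean()) / (pre.std() + 1e-12)
          b = (post - post.mean()) / (post.std() + 1e-12)
          xc = np.correlate(b, a, mode="full") / len(a)
          return np.argmax(xc) - (len(a) - 1)

      def phase_shift(pre, post, wavelength):
          # Simplified baseband phase-shift estimate (stand-in for Loupas'
          # estimator); wavelength in samples, pulse-echo scaling.
          phi = np.angle(np.vdot(hilbert(pre), hilbert(post)))
          return phi / (2 * np.pi) * wavelength / 2

      def adaptive_shift(pre, post, snr_db, wavelength):
          # Switch estimators using the thresholds reported above.
          d = phase_shift(pre, post, wavelength)
          if abs(d) > wavelength / 8 and snr_db > 25.5:
              return nxcorr_shift(pre, post)  # less biased for large shifts
          return d                            # lower variance otherwise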

  1. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas’ estimator and time-shift estimators like normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas’ estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas’ estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex-vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas’ estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas’ estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas’ estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7–350% and the spatial accuracy by 1.2–23.0% (p < 0.001). An ex-vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction. PMID:25585398

  2. Substantially Oxygen-Free Contact Tube

    NASA Technical Reports Server (NTRS)

    Pike, James F. (Inventor)

    1991-01-01

    A device for arc welding is provided in which a continuously-fed electrode wire is in electrical contact with a contact tube. The contact tube is improved by using a substantially oxygen-free conductive alloy in order to reduce the amount of electrical erosion.

  3. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm

    PubMed Central

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly utilized to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of classification and thus reduce its efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve population diversity and prevent convergence to local optima. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10 and that the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods. PMID:26649071

  4. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    PubMed

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, which is the major cause of vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to image segmentation after optimization by the Pillar algorithm; the initial centroids are placed the way pillars are positioned so that they can withstand pressure. The improved Pillar algorithm can optimize K-means clustering for image segmentation in terms of precision and computation time. The proposed approach is evaluated by comparison with K-means and Fuzzy C-means on medical images. Using this method, identification of dark spots in the retina becomes easier, and the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing Pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors identify the problem at an early stage and can suggest a better drug for preventing further retinal damage.
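
    The role of the Pillar algorithm as a K-means seeding step can be illustrated with a farthest-point placement sketch; the exact Pillar procedure includes additional weighting and outlier handling not shown here.

      import numpy as np

      def pillar_init(X, k):
          # Spread initial centroids far apart, like pillars placed to
          # withstand pressure (simplified farthest-point placement).
          first = X[np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1))]
          centroids = [first]
          while len(centroids) < k:
              d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids],
                         axis=0)
              centroids.append(X[np.argmax(d)])
          return np.array(centroids)

      # e.g., seed k-means for retinal-image pixels (random stand-ins here)
      X = np.random.rand(500, 3)
      seeds = pillar_init(X, k=4)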

  5. Improving lesion detectability in PET imaging with a penalized likelihood reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Wangerin, Kristen A.; Ahn, Sangtae; Ross, Steven G.; Kinahan, Paul E.; Manjeshwar, Ravindra M.

    2015-03-01

    Ordered Subset Expectation Maximization (OSEM) is currently the most widely used image reconstruction algorithm for clinical PET. However, OSEM does not necessarily provide optimal image quality, and a number of alternative algorithms have been explored. We have recently shown that a penalized likelihood image reconstruction algorithm using the relative difference penalty, block sequential regularized expectation maximization (BSREM), achieves more accurate lesion quantitation than OSEM and, importantly, maintains acceptable visual image quality in clinical whole-body PET. The goal of this work was to evaluate lesion detectability with BSREM versus OSEM. We performed a two-alternative forced-choice study using 81 patient datasets with lesions of varying contrast inserted into the liver and lung. At matched imaging noise, BSREM and OSEM showed equivalent detectability in the lungs, and BSREM outperformed OSEM in the liver. These results suggest that BSREM provides not only improved quantitation and clinically acceptable visual image quality, as previously shown, but also improved lesion detectability compared to OSEM. We then modeled this detectability study, applying both non-prewhitening (NPW) and channelized Hotelling (CHO) model observers to the reconstructed images. The CHO model observer showed good agreement with the human observers, suggesting that we can apply this model to future studies with varying simulation and reconstruction parameters.
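
    For reference, the relative difference penalty that BSREM adds to the Poisson log-likelihood has the well-known form ψ(f_j, f_k) = (f_j − f_k)² / (f_j + f_k + γ|f_j − f_k|). The sketch below evaluates it over horizontal neighbors only, a simplification of the full neighborhood system.

      import numpy as np

      def relative_difference_penalty(img, gamma=2.0, beta=1.0):
          # Relative difference penalty over horizontal neighbors of a 2-D
          # image; BSREM subtracts beta times this from the log-likelihood.
          f, g = img[:, :-1], img[:, 1:]
          diff = f - g
          psi = diff ** 2 / (f + g + gamma * np.abs(diff) + 1e-12)
          return beta * psi.sum()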

  6. Dimensionality Reduction in Complex Medical Data: Improved Self-Adaptive Niche Genetic Algorithm.

    PubMed

    Zhu, Min; Xia, Jing; Yan, Molei; Cai, Guolong; Yan, Jing; Ning, Gangmin

    2015-01-01

    With the development of medical technology, more and more parameters are produced to describe the human physiological condition, forming high-dimensional clinical datasets. In clinical analysis, data are commonly utilized to establish mathematical models and carry out classification. High-dimensional clinical data increase the complexity of classification and thus reduce its efficiency. The Niche Genetic Algorithm (NGA) is an excellent algorithm for dimensionality reduction. However, in the conventional NGA, the niche distance parameter is set in advance, which prevents it from adjusting to the environment. In this paper, an Improved Niche Genetic Algorithm (INGA) is introduced. It employs a self-adaptive niche-culling operation in the construction of the niche environment to improve population diversity and prevent convergence to local optima. The INGA was verified in a stratification model for sepsis patients. The results show that, by applying INGA, the feature dimensionality of the datasets was reduced from 77 to 10 and that the model achieved an accuracy of 92% in predicting 28-day death in sepsis patients, which is significantly higher than other methods.

  7. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents’ positions in the search process. In this way, the optimal trajectories found so far are retained and the search restarts from them, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the search process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904
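
    A single update step of the underlying gravitational search algorithm is sketched below in its standard form; ECGSA's modifications (restarting agents from the best stored positions and making the damping coefficient dynamic) would sit on top of this loop, and the parameter values shown are assumptions.

      import numpy as np

      def gsa_step(X, fit, V, t, T, G0=100.0, alpha=20.0, eps=1e-9):
          # One standard GSA update for minimization. X: (n, d) positions,
          # fit: (n,) fitness values, V: (n, d) velocities, iteration t of T.
          G = G0 * np.exp(-alpha * t / T)        # gravity decays over time
          worst, best = fit.max(), fit.min()
          m = (worst - fit + eps) / (worst - best + eps)
          M = m / m.sum()                        # normalized masses
          A = np.zeros_like(X)
          for i in range(len(X)):
              R = np.linalg.norm(X - X[i], axis=1)
              w = np.random.rand(len(X)) * G * M / (R + eps)
              A[i] = ((X - X[i]) * w[:, None]).sum(axis=0)
          V = np.random.rand(*X.shape) * V + A   # stochastic velocity update
          return X + V, V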

  8. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the search process. In this way, the optimal trajectories found so far are retained and the search restarts from them, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the search process, and they can converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904

  9. Production of substantially pure fructose

    SciTech Connect

    Hatcher, H.J.; Gallian, J.J.; Leeper, S.A.

    1990-05-22

    This patent describes a process for the production of a substantially pure product containing greater than 60% fructose. It comprises: combining a sucrose-containing substrate with effective amounts of a levansucrase enzyme preparation to form levan and glucose; purifying the levan by at least one of the following purification methods: ultrafiltration, diafiltration, hyperfiltration, reverse osmosis, liquid-liquid partition, solvent extraction, chromatography, and precipitation; hydrolyzing the levan to form fructose substantially free of glucose and sucrose; and recovering the fructose by at least one of the following recovery methods: hyperfiltration, reverse osmosis, evaporation, drying, crystallization, and chromatography.

  10. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-01

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed. It combines the advantages of the three existing NIR ocean reflectance correction algorithms, i.e., those of Bailey et al. [Opt. Express 18, 7521 (2010)], Ruddick et al. [Appl. Opt. 39, 897 (2000)], and Wang et al. [Opt. Express 20, 741 (2012)], and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms, based on several case studies. The scenes selected for the case studies come from two different satellite ocean color sensors, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions in the world. The new approach has been shown to produce nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validations against in situ measurements also show that in less turbid water regions the new approach produces reasonable results comparable to those of the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the new NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system.

  11. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-01

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed. It combines the advantages of the three existing NIR ocean reflectance correction algorithms, i.e., those of Bailey et al. [Opt. Express 18, 7521 (2010)], Ruddick et al. [Appl. Opt. 39, 897 (2000)], and Wang et al. [Opt. Express 20, 741 (2012)], and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms, based on several case studies. The scenes selected for the case studies come from two different satellite ocean color sensors, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions in the world. The new approach has been shown to produce nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validations against in situ measurements also show that in less turbid water regions the new approach produces reasonable results comparable to those of the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the new NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system. PMID:25321543

  12. Branch-pipe-routing approach for ships using improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sui, Haiteng; Niu, Wentie

    2016-09-01

    Branch-pipe routing plays a fundamental and critical role in ship pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve when depending only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, in particular a complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.
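
    The two-point routing step can be illustrated with a plain breadth-first (Lee-style) maze search on a 3D occupancy grid; the paper's improved maze algorithm adds heuristics (e.g., for bends and pipe cost) that are omitted here.

      from collections import deque

      def maze_route(grid, start, goal):
          # grid[x][y][z] == 0 means the cell is free; returns a shortest
          # axis-aligned route from start to goal as a list of cells.
          X, Y, Z = len(grid), len(grid[0]), len(grid[0][0])
          prev, q = {start: None}, deque([start])
          while q:
              cur = q.popleft()
              if cur == goal:
                  path = []
                  while cur is not None:
                      path.append(cur)
                      cur = prev[cur]
                  return path[::-1]
              x, y, z = cur
              for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                 (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                  nxt = (x + dx, y + dy, z + dz)
                  if (0 <= nxt[0] < X and 0 <= nxt[1] < Y and 0 <= nxt[2] < Z
                          and grid[nxt[0]][nxt[1]][nxt[2]] == 0
                          and nxt not in prev):
                      prev[nxt] = cur
                      q.append(nxt)
          return None  # no route exists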

  13. Use of a genetic algorithm to improve the rail profile on Stockholm underground

    NASA Astrophysics Data System (ADS)

    Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon

    2010-12-01

    In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
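
    Roulette-wheel (fitness-proportionate) selection, used above to select among the candidate profiles, can be sketched as follows; the profile names and fitness values are illustrative only.

      import random

      def roulette_wheel(population, fitness):
          # Fitness-proportionate selection; fitness values must be positive
          # (e.g., the inverted penalty index described above).
          total = sum(fitness)
          pick = random.uniform(0.0, total)
          acc = 0.0
          for individual, f in zip(population, fitness):
              acc += f
              if acc >= pick:
                  return individual
          return population[-1]   # numerical safety

      profiles = ["BV50-variant-A", "BV50-variant-B", "UIC60-variant-A"]
      print(roulette_wheel(profiles, [3.2, 1.1, 5.7]))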

  14. Improved algorithm for processing grating-based phase contrast interferometry image sets

    SciTech Connect

    Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W.; Butler, Leslie G.

    2014-01-15

    Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters needed to calculate transmission, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets. This is demonstrated by processing a 16-step grating data set using only part of the original data, chosen without any restriction. We have also calculated the reduced chi-squared of the fit, noted the effect of grating support structural elements on the differential phase contrast image, and explored expanded basis-set representations to mitigate this impact.
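
    A vectorized fit of the phase-stepping model can be expressed as one least-squares solve over all pixels at once, which is the essence of the speedup described above; the sinusoidal model form and the output definitions below are standard assumptions rather than the authors' exact formulation.

      import numpy as np

      def fit_phase_steps(stack):
          # Fit I_k = a + b*cos(2*pi*k/N) + c*sin(2*pi*k/N) over an
          # (N, H, W) stack with a single matrix solve (no per-pixel loop).
          N, H, W = stack.shape
          k = np.arange(N)
          D = np.column_stack([np.ones(N),
                               np.cos(2 * np.pi * k / N),
                               np.sin(2 * np.pi * k / N)])
          coef, *_ = np.linalg.lstsq(D, stack.reshape(N, -1), rcond=None)
          a, b, c = (coef[i].reshape(H, W) for i in range(3))
          # transmission, differential phase, visibility (dark-field basis)
          return a, np.arctan2(c, b), np.sqrt(b**2 + c**2) / (a + 1e-12)

      a, phi, vis = fit_phase_steps(np.random.rand(16, 64, 64))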

  15. Improved neural network algorithm: application in the compensation of wavefront distortion

    NASA Astrophysics Data System (ADS)

    Zhou, Zhou; Yuan, Xiuhua; Wang, Jin

    2008-12-01

    A free-space optical communication (FSO) system transmits modulated light through atmospheric media. Because of the uneven distribution of refractive index resulting from atmospheric turbulence, the phase distribution of the light is changed, leading to wavefront distortion and requiring reconstruction at the receiver. However, current wavefront compensation relies on channel modeling, which has difficulty extracting channel information from the highly random turbulent atmosphere. In this paper, a wavefront reconstruction system based on a neural network algorithm is constructed. The neural network requires little channel information but predicts distortion from past experience. The distorted phase distribution is then adaptively corrected as the light passes through a piezoelectric-ceramic deformable mirror controlled by the neural network. As an improvement, dynamic learning factors are added to the neural network algorithm to adjust the learning speed according to turbulence intensity, providing the best trade-off between response time and reconstruction accuracy. In addition, light transmission in the atmospheric channel is studied.

  16. Branch-pipe-routing approach for ships using improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sui, Haiteng; Niu, Wentie

    2016-05-01

    Branch-pipe routing plays a fundamental and critical role in ship pipe design. The branch-pipe-routing problem is a complex combinatorial optimization problem and is thus difficult to solve when depending only on human experts. A modified genetic-algorithm-based approach is proposed in this paper to solve this problem. The simplified layout space is first divided into three-dimensional (3D) grids to build its mathematical model. Branch pipes in the layout space are regarded as a combination of several two-point pipes, and the pipe route between two connection points is generated using an improved maze algorithm. The coding of branch pipes is then defined, and the genetic operators are devised, in particular a complete crossover strategy that greatly accelerates the convergence speed. Finally, simulation tests demonstrate the performance of the proposed method.

  17. Improvement of relief algorithm to prevent inpatient's downfall accident with night-vision CCD camera

    NASA Astrophysics Data System (ADS)

    Matsuda, Noriyuki; Yamamoto, Takeshi; Miwa, Masafumi; Nukumi, Shinobu; Mori, Kumiko; Kuinose, Yuko; Maeda, Etuko; Miura, Hirokazu; Taki, Hirokazu; Hori, Satoshi; Abe, Norihiro

    2005-12-01

    "ROSAI" hospital, Wakayama City in Japan, reported that inpatient's bed-downfall is one of the most serious accidents in hospital at night. Many inpatients have been having serious damages from downfall accidents from a bed. To prevent accidents, the hospital tested several sensors in a sickroom to send warning-signal of inpatient's downfall accidents to a nurse. However, it sent too much inadequate wrong warning about inpatients' sleeping situation. To send a nurse useful information, precise automatic detection for an inpatient's sleeping situation is necessary. In this paper, we focus on a clustering-algorithm which evaluates inpatient's situation from multiple angles by several kinds of sensor including night-vision CCD camera. This paper indicates new relief algorithm to improve the weakness about exceptional cases.

  18. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, providing physicians with reliable objective data they can use in their daily practice.
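
    A minimal sketch of the idea follows, assuming a band-power measure of motion intensity and a linear mapping from intensity to the noise variances; the paper derives its own intensity measure and mapping, so both functions here are illustrative.

      import numpy as np

      def motion_intensity(accel_window, fs, band=(0.5, 15.0)):
          # Share of spectral power inside a motion band as an intensity proxy.
          spec = np.abs(np.fft.rfft(accel_window - accel_window.mean())) ** 2
          freqs = np.fft.rfftfreq(len(accel_window), d=1.0 / fs)
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          return spec[in_band].sum() / (spec.sum() + 1e-12)

      def adapt_noise_variances(intensity,
                                q_range=(1e-5, 1e-2), r_range=(1e-1, 1e-3)):
          # Hypothetical linear interpolation of the process (Q) and
          # observation (R) noise variances with motion intensity.
          q = q_range[0] + intensity * (q_range[1] - q_range[0])
          r = r_range[0] + intensity * (r_range[1] - r_range[0])
          return q, r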

  19. Production of substantially pure fructose

    DOEpatents

    Hatcher, Herbert J.; Gallian, John J.; Leeper, Stephen A.

    1990-01-01

    A process is disclosed for the production of substantially pure fructose from sucrose-containing substrates. The process comprises converting the sucrose to levan and glucose, purifying the levan by membrane technology, hydrolyzing the levan to form fructose monomers, and recovering the fructose.

  20. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    In 1977 and 1978, general-purpose centrally space-differenced implicit finite difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since the introduction of the ARC2D and ARC3D codes, overall computational efficiency has been improved by a number of algorithmic changes. These changes are related to the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, the implementation of various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The present investigation describes these improvements and quantifies their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained which is competitive with specialized codes.

  1. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications

    PubMed Central

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441
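
    A toy version of the cell-division-and-correlation idea is sketched below; the real CMC method additionally checks the congruence of cell positions and a common registration angle across the image pair, which is omitted here.

      import numpy as np

      def cmc_score(img_a, img_b, cell=64, ccf_min=0.5):
          # Split image A into cells, correlate each against the same
          # location in image B, and count cells whose normalized
          # correlation exceeds a threshold (simplified CMC counting).
          matches = 0
          for i in range(0, img_a.shape[0] - cell + 1, cell):
              for j in range(0, img_a.shape[1] - cell + 1, cell):
                  a = img_a[i:i + cell, j:j + cell].ravel()
                  b = img_b[i:i + cell, j:j + cell].ravel()
                  a = (a - a.mean()) / (a.std() + 1e-12)
                  b = (b - b.mean()) / (b.std() + 1e-12)
                  if np.dot(a, b) / a.size >= ccf_min:
                      matches += 1
          return matches  # compared against a CMC-count threshold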

  2. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications.

    PubMed

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441

  3. Improved genetic algorithm for the protein folding problem by use of a Cartesian combination operator.

    PubMed Central

    Rabow, A. A.; Scheraga, H. A.

    1996-01-01

    We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C alpha Cartesian coordinates of the protein chain. The recombination of the genes of the parents is accomplished by: (1) a rigid superposition of one parent chain on the other, to make the relation of Cartesian coordinates meaningful, then, (2) the chains of the children are formed through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator scheme have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than the standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints. PMID:8880904
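
    The two-step recombination described above (rigid superposition, then linear combination) can be sketched with a standard Kabsch alignment; bond-length repair and the dynamic-programming lattice fitting are omitted, and the interpolation weight is an assumption.

      import numpy as np

      def cartesian_crossover(parent_a, parent_b, w=0.5):
          # Rigidly superpose one C-alpha trace on the other (Kabsch),
          # then take a linear combination of the coordinates.
          ca = parent_a - parent_a.mean(axis=0)
          cb = parent_b - parent_b.mean(axis=0)
          U, _, Vt = np.linalg.svd(cb.T @ ca)
          d = np.sign(np.linalg.det(U @ Vt))
          R = U @ np.diag([1.0, 1.0, d]) @ Vt   # proper rotation, no reflection
          cb_aligned = cb @ R
          return w * ca + (1.0 - w) * cb_aligned  # child chain

      child = cartesian_crossover(np.random.rand(50, 3), np.random.rand(50, 3))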

  4. An improved phase shift reconstruction algorithm of fringe scanning technique for X-ray microscopy

    SciTech Connect

    Lian, S.; Yang, H.; Kudo, H.; Momose, A.; Yashiro, W.

    2015-02-15

    The X-ray phase imaging method has been applied to observe soft biological tissues, and it is possible to image soft tissues by exploiting the so-called “Talbot effect” produced by an X-ray grating. One type of X-ray phase imaging method was reported that combines an X-ray imaging microscope equipped with a Fresnel zone plate and a phase grating. Using the fringe scanning technique, a high-precision phase shift image can be obtained by displacing the grating step by step and measuring dozens of sample images. The number of images is selected to reduce the error caused by the non-sinusoidal component of the Talbot self-image at the imaging plane. A larger number suppresses the error more but increases radiation exposure and requires higher mechanical stability of the equipment. In this paper, we analyze the approximation error of the fringe scanning technique for X-ray microscopy, which uses just one grating, and propose an improved algorithm. We compute the approximation error by iteration and substitute it into the reconstruction of the phase shift. This procedure suppresses the error even with few sample images. The results of simulation experiments show that the precision of the phase shift image reconstructed by the proposed algorithm with 4 sample images is almost the same as that reconstructed by the conventional algorithm with 40 sample images. We have also succeeded in an experiment with real data.
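
    For context, the conventional M-step fringe-scanning retrieval extracts the phase from the first Fourier component of the intensity curve at each pixel, as sketched below; the improved algorithm refines this baseline when M is small and the self-image is non-sinusoidal.

      import numpy as np

      def fringe_phase(stack):
          # With intensities I_k = a + b*cos(phi + 2*pi*k/M), the phase
          # follows from the first Fourier component of the step curve.
          M = stack.shape[0]
          k = np.arange(M).reshape(-1, 1, 1)
          s = np.sum(stack * np.sin(2 * np.pi * k / M), axis=0)
          c = np.sum(stack * np.cos(2 * np.pi * k / M), axis=0)
          return np.arctan2(-s, c)   # phase map; sign convention dependent

      phi = fringe_phase(np.random.rand(4, 32, 32))   # 4-step example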

  5. [Research of Feedback Algorithm and Deformable Model Based on Improved Spring-mass Model].

    PubMed

    Chen, Weidong; Chen, Panpan; Zhu, Qiguang

    2015-10-01

    A new diamond-based variable spring-mass model is proposed in this study. It can simulate the deformation of different organs by changing the lengths of the springs, the spring coefficient, and the initial angle. A virtual spring added to the model provides constraints and avoids the hyperelastic phenomenon when excessive force appears. It is also used to calculate force feedback during the deformation process. With the deformation force feedback algorithm, we calculate the deformation area of each layer by screening the effective particles and relate the deformation area to the force. This simplifies the force feedback algorithm of the traditional spring-particle model. The deformation simulation was realized with PHANTOM haptic interaction devices based on this model. The experimental results showed that the model has the advantages of a simple structure and easy implementation. The deformation force feedback algorithm reduces the number of deformation calculations, improves real-time performance, and produces a more realistic deformation effect.

  6. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping

    2014-10-01

    Migration through resolution cells (MTRC) occurs in high-resolution inverse synthetic aperture radar (ISAR) imaging. An MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. First, in the situation where a rigid-body target flies stably, initial values of the rotation angle and center of the target are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution based on a minimum entropy criterion. The procedure starts with the estimated initial rotation angle and center and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches, one along the rotation angle and the other along the rotation center. Each 1-D search is realized using the golden-section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied to compensate the MTRC using the optimized rotation angle and center. After MTRC compensation, the ISAR image can be well focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
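
    Each 1-D search can be carried out with a standard golden-section minimization of the image entropy, sketched here; the entropy evaluation itself (reapplying the PFA for each trial value) is abstracted into the objective function f.

      import math

      def golden_section_min(f, a, b, tol=1e-6):
          # 1-D golden-section minimization over [a, b] for a unimodal f,
          # e.g., entropy as a function of the assumed rotation angle.
          invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~= 0.618
          c, d = b - invphi * (b - a), a + invphi * (b - a)
          while abs(b - a) > tol:
              if f(c) < f(d):
                  b, d = d, c
                  c = b - invphi * (b - a)
              else:
                  a, c = c, d
                  d = a + invphi * (b - a)
          return (a + b) / 2.0

      print(golden_section_min(lambda x: (x - 0.3) ** 2, 0.0, 1.0))  # ~0.3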

  7. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, S. M.; Slaughter, C. D.

    1991-01-01

    The observed morphology of cometary comae is determined by ejection circumstances and the interaction of the ejected material with the local environment. Anisotropic emission can provide useful information on such things as the orientation of the nucleus, the location of active areas on the nucleus, and the formation of ion structure near the nucleus. However, discrete coma features are usually diffuse, of low amplitude, and superimposed on a steep intensity gradient radial to the nucleus. To improve the visibility of these features, a variety of digital enhancement algorithms have been employed with varying degrees of success. They usually produce some degree of spatial filtering and are chosen to optimize the visibility of certain detail. Since information in the image is altered, it is important to understand the effects that parameter selection and processing artifacts can have on subsequent interpretation. Using the criterion that the ideal algorithm must enhance low-contrast features while not introducing misleading artifacts (or features that cannot be seen in the stretched, unprocessed image), the suitability of various algorithms that aid cometary studies was assessed. The strong and weak points of each are identified in the context of maintaining the positional integrity of features at the expense of photometric information.

  8. Effective application of improved profit-mining algorithm for the interday trading model.

    PubMed

    Hsieh, Yu-Lung; Yang, Don-Lin; Wu, Jungpin

    2014-01-01

    Many real-world applications of association rule mining from large databases help users make better decisions. However, they do not work well in financial markets at this time. In addition to a high profit, an investor also looks for low-risk trading with a better rate of winning. The traditional approach of using minimum confidence and support thresholds needs to be changed. Based on an interday trading model, we propose effective profit-mining algorithms which provide investors with profit rules including information about profit, risk, and winning rate. Since profit-mining in the financial market is still in its infancy, it is important to detail the inner workings of mining algorithms and illustrate the best way to apply them. In this paper we go into the details of our improved profit-mining algorithm and showcase effective applications in experiments using real-world trading data. The results show that our approach is practical and effective, with good performance for various datasets.

  9. Measuring Substantial Reductions in Activity

    PubMed Central

    Schafer, Charles; Evans, Meredyth; Jason, Leonard A.; So, Suzanna; Brown, Abigail

    2015-01-01

    The case definitions for Myalgic Encephalomyelitis/chronic fatigue syndrome (ME/CFS), Myalgic Encephalomyelitis (ME), and chronic fatigue syndrome (CFS) each include a disability criterion requiring substantial reductions in activity in order to meet diagnostic criteria. Difficulties have been encountered in defining and operationalizing the substantial reduction disability criterion within these various illness definitions. The present study sought to relate measures of past and current activities in several domains including the SF-36, an objective measure of activity (e.g. actigraphy), a self-reported quality of life scale, and measures of symptom severity. Results of the study revealed that current work activities had the highest number of significant associations with domains such as the SF-36 subscales, actigraphy, and symptom scores. As an example, higher self-reported levels of current work activity were associated with better health. This suggests that current work related activities may provide a useful domain for helping operationalize the construct of substantial reductions in activity. PMID:25584524

  10. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    SciTech Connect

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-11-15

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of the breast surface, the duct system and terminal ductal lobular units, Cooper's ligaments, lymphatic and blood vessel systems, the pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database with regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, judging the quality of the simulated mammograms obtained with the new algorithm compared to mammograms obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of the fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results of the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Parameters calculated on simulated images, such as the β exponent deduced from the power-law spectral analysis and the fractal dimension, are similar to those calculated on real mammograms. The results for the kurtosis and skewness also agree well with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. Conclusions: The

  11. 3D resistivity inversion using an improved Genetic Algorithm based on control method of mutation direction

    NASA Astrophysics Data System (ADS)

    Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.

    2012-12-01

    The traditional inversion method is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it usually linearizes the problem and solves it by iterations. However, its accuracy often depends on the initial model, which can trap the inversion in local optima and even produce a bad result. Non-linear methods are a feasible way to eliminate the dependence on the initial model. However, for large problems such as 3D resistivity inversion, with inversion parameters exceeding a thousand, the main challenges of non-linear methods are premature convergence and quite low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, a smoothness constraint and an inequality constraint are both applied to the objective function, by which the degree of non-uniqueness and ill-conditioning is decreased. Several established measures are adopted to maintain the diversity and stability of the GA, e.g., real coding and adaptive adjustment of the crossover and mutation probabilities. A method for generating an approximately uniform initial population is then proposed, with which a uniformly distributed initial generation can be produced and the dependence on the initial model eliminated. Further, a mutation direction control method is presented based on the joint algorithm, in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment to maintain a better search direction compared with a traditional GA with an uncontrolled mutation operation. By this method, the mutation direction is optimized and the search efficiency is greatly improved. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate

  12. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm

    NASA Astrophysics Data System (ADS)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is presented in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersecting, and prunes candidate frequent itemsets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
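
    The vertical layout represents each item by its set of transaction IDs (tidset), so candidate support is computed by intersecting tidsets rather than rescanning the database. The sketch below shows this, with toy crash-test attributes as stand-ins for the real C-NCAP variables.

      from itertools import combinations

      def vertical_frequent_itemsets(transactions, min_support):
          # Build tidsets for single items, then grow levels by
          # intersecting tidsets of frequent itemsets (Apriori-style:
          # candidates are built only from frequent sets).
          tidsets = {}
          for tid, items in enumerate(transactions):
              for it in items:
                  tidsets.setdefault(frozenset([it]), set()).add(tid)
          frequent = {k: v for k, v in tidsets.items() if len(v) >= min_support}
          result, level = dict(frequent), frequent
          while level:
              size = len(next(iter(level)))
              nxt = {}
              for a, b in combinations(level, 2):
                  cand = a | b
                  if len(cand) == size + 1 and cand not in nxt:
                      tids = level[a] & level[b]   # exact support by intersection
                      if len(tids) >= min_support:
                          nxt[cand] = tids
              result.update(nxt)
              level = nxt
          return {tuple(sorted(k)): len(v) for k, v in result.items()}

      crashes = [{"speed_high", "no_belt"}, {"speed_high", "belt"},
                 {"speed_high", "no_belt"}]
      print(vertical_frequent_itemsets(crashes, min_support=2))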

  13. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    PubMed Central

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user

  14. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage.

    PubMed

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user

  15. Improvements in dark water, low light-level AOD retrievals in MISR operational algorithm

    NASA Astrophysics Data System (ADS)

    Witek, M. L.; Diner, D. J.; Garay, M. J.; Xu, F.

    2015-12-01

    Satellite remote sensing of aerosols is taking bold steps towards higher spatial resolutions, as evidenced by the newly released MODIS 3 km product and the soon to be released MISR 4.4 km product. Finer horizontal resolution allows for a better aerosol characterization in proximity to clouds—which is important for studying indirect aerosol effects—but also poses additional challenges due to various cloud artifact effects. It is therefore imperative to refine satellite algorithms to correctly interpret aerosol behavior in the proximity of clouds. For instance, MISR aerosol optical depth (AOD) retrievals frequently overestimate AODs in pristine oceanic areas, in particular close to Antarctica, as evidenced by comparison with Maritime Aerosol Network (MAN) observations. We trace the origin of this overestimation to stray light, or veiling light, being scattered more or less uniformly over the camera's field of view and reducing the contrast of the primary image. We found that the MISR-MODIS radiance difference in dark areas correlates with average scene brightness within the whole MISR camera field of view. A simple, single parameter model is proposed to effect the corrections. Collocated MISR/MODIS pixels are used to fit the parameter in the MISR nadir camera. For the off-nadir cameras two alternative approaches are employed that are based on MISR radiances and radiative transfer model calculations. These two methods are prone to higher uncertainties, but suggest somewhat increasing correction values for the longer focal length cameras. Finally, the empirical corrections applied in the operational MISR retrieval algorithm substantially decrease AODs in analyzed cases, and lead to closer agreement with MAN and MODIS, proving the efficacy of the developed procedure.

  16. FRESCO+: an improved O2 A-band cloud retrieval algorithm for tropospheric trace gas retrievals

    NASA Astrophysics Data System (ADS)

    Wang, P.; Stammes, P.; van der A, R.; Pinardi, G.; van Roozendael, M.

    2008-11-01

    The FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A-band) algorithm has been used to retrieve cloud information from measurements of the O2 A-band around 760 nm by GOME, SCIAMACHY and GOME-2. The cloud parameters retrieved by FRESCO are the effective cloud fraction and cloud pressure, which are used for cloud correction in the retrieval of trace gases like O3 and NO2. To improve the cloud pressure retrieval for partly cloudy scenes, single Rayleigh scattering has been included in an improved version of the algorithm, called FRESCO+. We compared FRESCO+ and FRESCO effective cloud fractions and cloud pressures using simulated spectra and one month of GOME measured spectra. As expected, FRESCO+ gives more reliable cloud pressures over partly cloudy pixels. Simulations and comparisons with ground-based radar/lidar measurements of clouds show that the FRESCO+ cloud pressure is about the optical midlevel of the cloud. Globally averaged, the FRESCO+ cloud pressure is about 50 hPa higher than the FRESCO cloud pressure, while the FRESCO+ effective cloud fraction is about 0.01 larger. The effect of FRESCO+ cloud parameters on O3 and NO2 vertical column density (VCD) retrievals is studied using SCIAMACHY data and ground-based DOAS measurements. We find that the FRESCO+ algorithm has a significant effect on tropospheric NO2 retrievals but a minor effect on total O3 retrievals. The retrieved SCIAMACHY tropospheric NO2 VCDs using FRESCO+ cloud parameters (v1.1) are lower than the tropospheric NO2 VCDs which used FRESCO cloud parameters (v1.04), in particular over heavily polluted areas with low clouds. The difference between SCIAMACHY tropospheric NO2 VCDs v1.1 and ground-based MAXDOAS measurements performed in Cabauw, The Netherlands, during the DANDELIONS campaign is about −2.12×10¹⁴ molec cm⁻².

  17. FRESCO+: an improved O2 A-band cloud retrieval algorithm for tropospheric trace gas retrievals

    NASA Astrophysics Data System (ADS)

    Wang, P.; Stammes, P.; van der A, R.; Pinardi, G.; van Roozendael, M.

    2008-05-01

    The FRESCO (Fast Retrieval Scheme for Clouds from the Oxygen A-band) algorithm has been used to retrieve cloud information from measurements of the O2 A-band around 760 nm by GOME, SCIAMACHY and GOME-2. The cloud parameters retrieved by FRESCO are the effective cloud fraction and cloud pressure, which are used for cloud correction in the retrieval of trace gases like O3 and NO2. To improve the cloud pressure retrieval for partly cloudy scenes, single Rayleigh scattering has been included in an improved version of the algorithm, called FRESCO+. We compared FRESCO+ and FRESCO effective cloud fractions and cloud pressures using simulated spectra and one month of GOME measured spectra. As expected, FRESCO+ gives more reliable cloud pressures over partly cloudy pixels. Simulations and comparisons with ground-based radar/lidar measurements of clouds show that the FRESCO+ cloud pressure is about the optical midlevel of the cloud. Globally averaged, the FRESCO+ cloud pressure is about 50 hPa higher than the FRESCO cloud pressure, while the FRESCO+ effective cloud fraction is about 0.01 larger. The effect of FRESCO+ cloud parameters on O3 and NO2 vertical column densities (VCDs) is studied using SCIAMACHY data and ground-based DOAS measurements. We find that the FRESCO+ algorithm has a significant effect on tropospheric NO2 retrievals but a minor effect on total O3 retrievals. The retrieved SCIAMACHY tropospheric NO2 VCDs using FRESCO+ cloud parameters (v1.1) are lower than the tropospheric NO2 VCDs which used FRESCO cloud parameters (v1.04), in particular over heavily polluted areas with low clouds. The difference between SCIAMACHY tropospheric NO2 VCDs v1.1 and ground-based MAXDOAS measurements performed in Cabauw, The Netherlands, during the DANDELIONS campaign is about -2.12×10^14 molec cm^-2.

  18. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. In order to deal with the weaknesses associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance the optimization performance; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision under a certain level of noise is also satisfactory. It can be concluded that the IGSA algorithm is effective and superior.
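
    The hybridization idea, as the abstract describes it, is to keep the GA's population-level operators but let a simulated-annealing (Metropolis) test decide whether an offspring replaces its parent. The sketch below applies that scheme to a generic numerical minimization; the operators, cooling schedule, and constants are our assumptions, not the paper's exact IGSA design.

      import math, random

      def igsa_minimize(cost, pop, n_gen=300, t0=1.0, alpha=0.97, sigma=0.1):
          """Minimal genetic-simulated-annealing hybrid: binary-tournament
          mating and blend crossover from the GA, plus a Metropolis acceptance
          test from SA so worse offspring are occasionally kept, preserving
          diversity early and turning greedy as the temperature cools."""
          temp = t0
          for _ in range(n_gen):
              new_pop = []
              for parent in pop:
                  mate = min(random.sample(pop, 2), key=cost)  # tournament
                  w = random.random()
                  child = [w * p + (1 - w) * m + random.gauss(0, sigma)
                           for p, m in zip(parent, mate)]      # crossover+mutation
                  delta = cost(child) - cost(parent)
                  if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
                      new_pop.append(child)                    # Metropolis accept
                  else:
                      new_pop.append(parent)
              pop, temp = new_pop, temp * alpha                # cool down
          return min(pop, key=cost)

      # Demo on a toy 2-D quadratic (illustrative only)
      best = igsa_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
                           [[random.uniform(-5, 5) for _ in range(2)] for _ in range(30)])
      print([round(v, 2) for v in best])   # approaches [3, -1]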

  19. An improved fusion algorithm for infrared and visible images based on multi-scale transform

    NASA Astrophysics Data System (ADS)

    Li, He; Liu, Lei; Huang, Wei; Yue, Chao

    2016-01-01

    In this paper, an improved fusion algorithm for infrared and visible images based on multi-scale transform is proposed. First of all, the Morphology-Hat transform is applied to the infrared image and the visible image separately. Then the two images are decomposed into high-frequency and low-frequency components by the contourlet transform (CT). The fusion strategy for the high-frequency components is based on the mean gradient, and the fusion strategy for the low-frequency components is based on Principal Component Analysis (PCA). Finally, the fused image is obtained by the inverse contourlet transform (ICT). The experiments and results demonstrate that the proposed method can significantly improve image fusion performance, retaining notable target information and high contrast while preserving rich detail information.

  20. Effective followership: A standardized algorithm to resolve clinical conflicts and improve teamwork.

    PubMed

    Sculli, Gary L; Fore, Amanda M; Sine, David M; Paull, Douglas E; Tschannen, Dana; Aebersold, Michelle; Seagull, F Jacob; Bagian, James P

    2015-01-01

    In healthcare, the sustained presence of hierarchy between team members has been cited as a common contributor to communication breakdowns. Hierarchy serves to accentuate either actual or perceived chains of command, which may result in team members failing to challenge decisions made by leaders, despite concerns about adverse patient outcomes. While other tools suggest improved communication, none focus specifically on communication skills for team followers, nor do they provide techniques to immediately challenge authority and escalate assertiveness at a given moment in real time. This article presents data that show one such strategy, called the Effective Followership Algorithm, offering statistically significant improvements in team communication across the professional continuum from students and residents to experienced clinicians. PMID:26227290

  1. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  2. Improving Markov Chain Monte Carlo algorithms in LISA Pathfinder Data Analysis

    NASA Astrophysics Data System (ADS)

    Karnesis, N.; Nofrarias, M.; Sopuerta, C. F.; Lobo, A.

    2012-06-01

    The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on board LPF will consist of an exhaustive suite of experiments, and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need to have an understanding of every instrument on board and parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system. A very promising one implemented for LISA Pathfinder data analysis is Markov Chain Monte Carlo. A series of experiments are going to take place during flight operations, and each experiment is going to provide us with essential information for the next in the sequence. Therefore, it is a priority to optimize and improve the tools available for data analysis during the mission. Using a Bayesian framework for the analysis allows us to apply prior knowledge for each experiment, which means that we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal, which is none other than creating a robust and reliable tool for parameter estimation during the LPF mission.
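
    A minimal illustration of the point about priors: in a random-walk Metropolis sampler, folding the previous experiment's estimate into a Gaussian prior concentrates the posterior and shortens the exploration. Everything below (the 1-D toy problem, the step size, the prior values) is assumed for the demonstration.

      import numpy as np

      def metropolis_with_prior(log_like, prior_mean, prior_sd, n_steps=20000,
                                step=0.5, x0=0.0, rng=None):
          """Random-walk Metropolis sampling of a 1-D posterior; the Gaussian
          prior encodes the estimate carried over from an earlier experiment."""
          rng = rng or np.random.default_rng(0)
          def log_post(x):
              return log_like(x) - 0.5 * ((x - prior_mean) / prior_sd) ** 2
          x, lp = x0, log_post(x0)
          chain = np.empty(n_steps)
          for i in range(n_steps):
              prop = x + step * rng.normal()
              lp_prop = log_post(prop)
              if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
                  x, lp = prop, lp_prop
              chain[i] = x
          return chain

      # Toy example: noisy measurements of one parameter (values illustrative)
      data = np.random.default_rng(1).normal(2.0, 1.0, 50)
      loglike = lambda x: -0.5 * np.sum((data - x) ** 2)
      chain = metropolis_with_prior(loglike, prior_mean=1.8, prior_sd=0.5)
      print(chain[5000:].mean())   # posterior mean near 2.0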

  3. Algorithms to Improve the Prediction of Postprandial Insulinaemia in Response to Common Foods

    PubMed Central

    Bell, Kirstine J.; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C.

    2016-01-01

    Dietary patterns that induce excessive insulin secretion may contribute to worsening insulin resistance and beta-cell dysfunction. Our aim was to generate mathematical algorithms to improve the prediction of postprandial glycaemia and insulinaemia for foods of known nutrient composition, glycemic index (GI) and glycemic load (GL). We used an expanded database of food insulin index (FII) values generated by testing 1000 kJ portions of 147 common foods relative to a reference food in lean, young, healthy volunteers. Simple and multiple linear regression analyses were applied to validate previously generated equations for predicting insulinaemia, and develop improved predictive models. Large differences in insulinaemic responses within and between food groups were evident. GL, GI and available carbohydrate content were the strongest predictors of the FII, explaining 55%, 51% and 47% of variation respectively. Fat, protein and sugar were significant but relatively weak predictors, accounting for only 31%, 7% and 13% of the variation respectively. Nutritional composition alone explained only 50% of variability. The best algorithm included a measure of glycemic response, sugar and protein content and explained 78% of variation. Knowledge of the GI or glycaemic response to 1000 kJ portions together with nutrient composition therefore provides a good approximation for ranking of foods according to their “insulin demand”. PMID:27070641
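
    The described model fitting amounts to ordinary multiple linear regression of the food insulin index on a few predictors. The sketch below shows the form with invented data; the published coefficients and the 147-food database are not reproduced here.

      import numpy as np

      # Hypothetical rows: [GI, sugar (g), protein (g)] per 1000 kJ portion,
      # with the measured food insulin index (FII) as target; numbers invented.
      X = np.array([[70, 10,  4],
                    [55, 25,  3],
                    [40,  2, 20],
                    [85, 30,  2],
                    [30,  5, 10]], dtype=float)
      y = np.array([74, 61, 45, 95, 33], dtype=float)

      # Least-squares fit of FII ~ b0 + b1*GI + b2*sugar + b3*protein, i.e. the
      # "glycemic response, sugar and protein" form named in the abstract.
      A = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      pred = A @ coef
      r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
      print("coefficients:", np.round(coef, 2), " R^2 =", round(r2, 3))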

  4. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    SciTech Connect

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
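
    The optimization problem is a search over task-to-node orderings that minimizes a communication cost given the application and interconnect topologies. A deliberately stripped-down genetic search over permutations is sketched below (swap mutation plus truncation selection); the paper's parallel GA and Titan's real topology are far richer.

      import random

      def comm_cost(order, comm_pairs, node_dist):
          """Total communication cost: for each communicating task pair, the
          distance between the nodes they land on under the given ordering."""
          place = {task: node for node, task in enumerate(order)}
          return sum(node_dist(place[a], place[b]) for a, b in comm_pairs)

      def evolve_task_order(n_tasks, comm_pairs, node_dist, pop=40, gens=500):
          """Stripped-down genetic search over task orderings."""
          popn = [random.sample(range(n_tasks), n_tasks) for _ in range(pop)]
          for _ in range(gens):
              popn.sort(key=lambda o: comm_cost(o, comm_pairs, node_dist))
              popn = popn[: pop // 2]                      # keep the fitter half
              children = []
              for parent in popn:
                  child = parent[:]
                  i, j = random.sample(range(n_tasks), 2)  # swap mutation
                  child[i], child[j] = child[j], child[i]
                  children.append(child)
              popn += children
          return min(popn, key=lambda o: comm_cost(o, comm_pairs, node_dist))

      # Toy instance: 8 tasks on a 1-D chain of nodes, ring communication pattern
      pairs = [(t, (t + 1) % 8) for t in range(8)]
      best = evolve_task_order(8, pairs, node_dist=lambda a, b: abs(a - b))
      print(best, comm_cost(best, pairs, lambda a, b: abs(a - b)))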

  6. Algorithms to Improve the Prediction of Postprandial Insulinaemia in Response to Common Foods.

    PubMed

    Bell, Kirstine J; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C

    2016-01-01

    Dietary patterns that induce excessive insulin secretion may contribute to worsening insulin resistance and beta-cell dysfunction. Our aim was to generate mathematical algorithms to improve the prediction of postprandial glycaemia and insulinaemia for foods of known nutrient composition, glycemic index (GI) and glycemic load (GL). We used an expanded database of food insulin index (FII) values generated by testing 1000 kJ portions of 147 common foods relative to a reference food in lean, young, healthy volunteers. Simple and multiple linear regression analyses were applied to validate previously generated equations for predicting insulinaemia, and develop improved predictive models. Large differences in insulinaemic responses within and between food groups were evident. GL, GI and available carbohydrate content were the strongest predictors of the FII, explaining 55%, 51% and 47% of variation respectively. Fat, protein and sugar were significant but relatively weak predictors, accounting for only 31%, 7% and 13% of the variation respectively. Nutritional composition alone explained only 50% of variability. The best algorithm included a measure of glycemic response, sugar and protein content and explained 78% of variation. Knowledge of the GI or glycaemic response to 1000 kJ portions together with nutrient composition therefore provides a good approximation for ranking of foods according to their "insulin demand".

  7. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to a faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing it from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, reaching the accuracy levels of Monte Carlo based methods with performance of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
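
    Our reading of the key HCS ingredient is a recursive (IIR-style) filtering of density along the beam direction, so that dose deposition near an interface reflects upstream material. The one-dimensional sketch below is a loose analogue under that assumption; the actual filter is multivariate, position- and direction-sensitive, and its coefficients are not public here.

      import numpy as np

      def effective_density(rho, alpha=0.3):
          """1-D analogue of a first-order recursive density filter: blend the
          local density with the running filtered value along the ray so dose
          'remembers' upstream material, mimicking electron transport across
          interfaces. alpha is an invented smoothing constant."""
          out = np.empty_like(rho, dtype=float)
          out[0] = rho[0]
          for i in range(1, len(rho)):
              out[i] = alpha * rho[i] + (1 - alpha) * out[i - 1]
          return out

      # Water -> lung -> water slab phantom (densities in g/cm^3, illustrative)
      rho = np.array([1.0] * 10 + [0.25] * 10 + [1.0] * 10)
      print(np.round(effective_density(rho), 2))  # smoothed interface transitions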

  8. A Method for Streamlining and Assessing Sound Velocity Profiles Based on Improved D-P Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, D.; WU, Z. Y.; Zhou, J.

    2015-12-01

    A multi-beam system transmits sound waves and receives the round-trip time of their reflection or scattering, and thus it is possible to determine the depth and coordinates of the detected targets using the sound velocity profile (SVP) based on Snell's Law. The SVP is measured by a dedicated profiling device. Because of the high sampling rate of modern devices, the operational time of ray tracing and beam footprint reduction will increase, lowering the overall efficiency. To improve the timeliness of multi-beam surveys and data processing, redundant points in the original SVP must be screened out and, at the same time, the errors introduced by streamlining the SVP must be evaluated and controlled. We present a new streamlining and evaluation method based on the Maximum Offset of sound Velocity (MOV) algorithm. Based on measured SVP data, this method selects sound velocity data points by calculating the maximum offset in the sound-velocity dimension using an improved Douglas-Peucker algorithm to streamline the SVP (Fig. 1). To evaluate whether the streamlined SVP meets the desired accuracy requirements, the method is divided into two parts: SVP streamlining, and an accuracy analysis of the multi-beam sounding data processed using the streamlined SVP. The method therefore comprises two modules: the streamlining module and the evaluation module (Fig. 2). The streamlining module is used for streamlining the SVP; its core is the MOV algorithm. To assess the accuracy of the streamlined SVP, we use ray tracing and a percentage-error analysis to evaluate the accuracy of the sounding data both before and after streamlining the SVP (Fig. 3). By automatically optimizing the threshold, the reduction rate of the sound velocity profile data can reach over 90% and the standard deviation of the percentage error of the sounding data can be controlled to within 0.1% (Fig. 4). The optimized sound velocity profile data improved the operational efficiency of the multi-beam survey and data post-processing.
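
    The MOV idea, as we read it, is a Douglas-Peucker recursion in which the point-to-line offset is measured purely in the sound-velocity dimension. A compact sketch under that assumption:

      def streamline_svp(profile, tol):
          """Douglas-Peucker-style thinning of a sound velocity profile, with
          the offset measured only in the velocity dimension: points are
          (depth, velocity) sorted by depth; tol is in m/s."""
          def recurse(lo, hi, keep):
              (d0, v0), (d1, v1) = profile[lo], profile[hi]
              worst, worst_off = None, tol
              for i in range(lo + 1, hi):
                  d, v = profile[i]
                  v_line = v0 + (v1 - v0) * (d - d0) / (d1 - d0)  # interpolated
                  off = abs(v - v_line)
                  if off > worst_off:
                      worst, worst_off = i, off
              if worst is not None:
                  recurse(lo, worst, keep)
                  keep.add(worst)
                  recurse(worst, hi, keep)
          keep = {0, len(profile) - 1}
          recurse(0, len(profile) - 1, keep)
          return [profile[i] for i in sorted(keep)]

      # Invented profile: (depth m, velocity m/s); redundant points get dropped
      svp = [(0, 1500.0), (5, 1502.1), (10, 1502.2), (20, 1498.0), (40, 1490.5),
             (60, 1488.9), (80, 1488.7), (100, 1488.6)]
      print(streamline_svp(svp, tol=0.5))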

  9. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    In order to improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established within a Kalman filter framework, and the gyro output signals are repeatedly filtered with the Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: the feed-forward link makes the output follow the input closely, while the lead-compensation link shortens the response to input signals so as to reduce errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) observe the servo motor state in real time; the module gathers video signals and sends them to the host computer, which displays the motor running state in the Visual Basic 6.0 window. A detailed analysis of the main error sources is also given: the quantitative analysis of the errors contributed by the bandwidth and the gyro sensor makes the proportion of each error in the whole error budget more intuitive and, consequently, helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics, making it valuable for engineering applications.
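
    The gyro-drift portion of the scheme can be illustrated with a scalar Kalman filter whose process model is an identified AR(1) sequence. The sketch below uses invented AR and noise parameters; in practice they would come from time-series analysis of the actual gyro output.

      import numpy as np

      def kalman_ar1(z, phi=0.95, q=1e-4, r=1e-2):
          """Scalar Kalman filter with an AR(1) drift model
          b_k = phi*b_{k-1} + w_k, observed as z_k = b_k + v_k."""
          x, p = 0.0, 1.0
          est = np.empty(len(z))
          for k, zk in enumerate(z):
              x, p = phi * x, phi * p * phi + q              # predict
              kgain = p / (p + r)                            # Kalman gain
              x, p = x + kgain * (zk - x), (1 - kgain) * p   # update
              est[k] = x
          return est

      rng = np.random.default_rng(2)
      drift = np.zeros(500)
      for k in range(1, 500):                                # simulate AR(1) drift
          drift[k] = 0.95 * drift[k - 1] + rng.normal(0, 0.01)
      z = drift + rng.normal(0, 0.1, 500)                    # noisy gyro output
      print(np.std(z - drift), np.std(kalman_ar1(z) - drift))  # error shrinks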

  10. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient, thus improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods. PMID:26955656
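
    A minimal sketch of a capability-aware weighted-round-robin placement consistent with the abstract's three inputs (VM capability, task length, task interdependency); the data layout and tie-breaking rule are our assumptions, not the paper's exact algorithm.

      def assign_tasks(tasks, vms):
          """Each arriving task goes to the VM with the smallest projected
          completion time (load / capacity); dependent tasks stay on the VM of
          their predecessor. tasks: list of (task_id, length, depends_on);
          vms: {name: capacity}."""
          load = {vm: 0.0 for vm in vms}
          placed = {}
          for tid, length, dep in tasks:
              if dep is not None and dep in placed:
                  vm = placed[dep]                 # keep dependent tasks together
              else:
                  vm = min(vms, key=lambda v: (load[v] + length) / vms[v])
              load[vm] += length
              placed[tid] = vm
          return placed

      vms = {"vm1": 1000.0, "vm2": 2000.0, "vm3": 500.0}   # invented capacities
      jobs = [("t1", 400, None), ("t2", 800, None),
              ("t3", 300, "t1"), ("t4", 1200, None)]
      print(assign_tasks(jobs, vms))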

  11. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient, thus improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.

  13. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a new robust swarm intelligence method that is based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with greedy strategy is put forward for solving 0-1 knapsack problems. First of all, for solving binary optimization problem with ICS, based on the idea of individual hybrid encoding, the cuckoo search over a continuous space is transformed into the synchronous evolution search over discrete space. Subsequently, the concept of confidence interval (CI) is introduced; hence, the new position updating is designed and genetic mutation with a small probability is introduced. The former enables the population to move towards the global best solution rapidly in every generation, and the latter can effectively prevent the ICS from trapping into the local optimum. Furthermore, the greedy transform method is used to repair the infeasible solution and optimize the feasible solution. Experiments with a large number of KP instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions. PMID:24527026
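
    The greedy transform is the most self-contained piece: it repairs an infeasible bit string by dropping items of low value density and then re-adds any item that still fits. A sketch under our reading of that operator follows; in the full ICS, the raw bit string would come from thresholding the real-valued cuckoo position.

      def greedy_repair(x, values, weights, capacity):
          """Repair-and-optimize for 0-1 knapsack: drop items in increasing
          value/weight order until feasible, then re-add in decreasing order."""
          x = list(x)
          load = sum(w for b, w in zip(x, weights) if b)
          by_density = sorted(range(len(x)), key=lambda i: values[i] / weights[i])
          for i in by_density:                      # repair: drop low-density items
              if load <= capacity:
                  break
              if x[i]:
                  x[i], load = 0, load - weights[i]
          for i in reversed(by_density):            # optimize: re-add what fits
              if not x[i] and load + weights[i] <= capacity:
                  x[i], load = 1, load + weights[i]
          return x

      # Tiny invented instance; start from an infeasible all-ones string
      vals, wts, cap = [10, 7, 12, 4], [6, 4, 8, 2], 10
      print(greedy_repair([1, 1, 1, 1], vals, wts, cap))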

  14. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a new robust swarm intelligence method that is based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with greedy strategy is put forward for solving 0-1 knapsack problems. First of all, for solving binary optimization problem with ICS, based on the idea of individual hybrid encoding, the cuckoo search over a continuous space is transformed into the synchronous evolution search over discrete space. Subsequently, the concept of confidence interval (CI) is introduced; hence, the new position updating is designed and genetic mutation with a small probability is introduced. The former enables the population to move towards the global best solution rapidly in every generation, and the latter can effectively prevent the ICS from trapping into the local optimum. Furthermore, the greedy transform method is used to repair the infeasible solution and optimize the feasible solution. Experiments with a large number of KP instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions.

  15. Range image segmentation into planar and quadric surfaces using an improved robust estimator and genetic algorithm.

    PubMed

    Gotardo, Paulo Fabiano Urnau; Bellon, Olga Regina Pereira; Boyer, Kim L; Silva, Luciano

    2004-12-01

    This paper presents a novel range image segmentation method employing an improved robust estimator to iteratively detect and extract distinct planar and quadric surfaces. Our robust estimator extends M-estimator Sample Consensus/Random Sample Consensus (MSAC/RANSAC) to use local surface orientation information, enhancing the accuracy of inlier/outlier classification when processing noisy range data describing multiple structures. An efficient approximation to the true geometric distance between a point and a quadric surface also contributes to effectively reject weak surface hypotheses and avoid the extraction of false surface components. Additionally, a genetic algorithm was specifically designed to accelerate the optimization process of surface extraction, while avoiding premature convergence. We present thorough experimental results with quantitative evaluation against ground truth. The segmentation algorithm was applied to three real range image databases and competes favorably against eleven other segmenters using the most popular evaluation framework in the literature. Our approach lends itself naturally to parallel implementation and application in real-time tasks. The method fits well into several of today's applications in man-made environments, such as target detection and autonomous navigation, for which obstacle detection, but not description or reconstruction, is required. It can also be extended to process point clouds resulting from range image registration.
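
    The backbone of such estimators is RANSAC-style hypothesize-and-verify plane fitting. Below is a plain RANSAC plane extractor for orientation; the paper's improved estimator additionally uses MSAC scoring, local surface normals, and a quadric-distance approximation, none of which are reproduced here.

      import numpy as np

      def ransac_plane(points, n_iter=500, tol=0.02, rng=None):
          """Plain RANSAC plane extraction. Returns (unit normal, d) for the
          model n.p + d = 0 and the boolean inlier mask."""
          rng = rng or np.random.default_rng(0)
          best_inliers, best_model = None, None
          for _ in range(n_iter):
              p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
              n = np.cross(p1 - p0, p2 - p0)
              if np.linalg.norm(n) < 1e-9:
                  continue                               # degenerate sample
              n = n / np.linalg.norm(n)
              d = -n @ p0
              inliers = np.abs(points @ n + d) < tol     # point-to-plane distance
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (n, d)
          return best_model, best_inliers

      # Synthetic scene: 200 points on the plane z = 0 plus 50 outliers
      rng = np.random.default_rng(1)
      plane = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.zeros(200)])
      noise = rng.uniform(-1, 1, (50, 3))
      model, inl = ransac_plane(np.vstack([plane, noise]))
      print(np.round(model[0], 2), int(inl.sum()))   # normal near [0, 0, +/-1]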

  16. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation, which is awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using a bounding-box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding-box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. Results of the proposed method are compared with the actual position of the simulated 3D tumor both qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut with manual interference, while providing fully automatic segmentation.

  17. An Improved Artificial Bee Colony Algorithm Based on Balance-Evolution Strategy for Unmanned Combat Aerial Vehicle Path Planning

    PubMed Central

    Gong, Li-gang; Yang, Wen-lun

    2014-01-01

    Unmanned combat aerial vehicles (UCAVs) have been of great interest to military organizations throughout the world due to their outstanding capabilities to operate in dangerous or hazardous environments. UCAV path planning aims to obtain an optimal flight route with the threats and constraints in the combat field well considered. In this work, a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme. In this new algorithm, convergence information during the iteration is fully utilized to manipulate the exploration/exploitation accuracy and to pursue a balance between local exploitation and global exploration capabilities. Simulation results confirm that BE-ABC algorithm is more competent for the UCAV path planning scheme than the conventional ABC algorithm and two other state-of-the-art modified ABC algorithms. PMID:24790555

  18. An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.

    PubMed

    Xu, Hongming; Lu, Cheng; Mandal, Mrinal

    2014-09-01

    In this paper, we propose an efficient method for segmenting cell nuclei in the skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides a superior performance in nuclei detection and segmentation.

  19. Exponential H ∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm

    PubMed Central

    Hsiao, Feng-Hsiag

    2015-01-01

    This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H ∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to have better performance than that of a traditional GA, a model-based fuzzy controller is then synthesized to stabilize the MTDC systems. A fuzzy controller is synthesized to not only realize the exponential synchronization, but also achieve optimal H ∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is stated by using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach. PMID:26366432

  20. Improvement for detection of microcalcifications through clustering algorithms and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego

    2011-12-01

    A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector in a classifier. The classifier is based on an artificial neural network to identify patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
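
    The contrast-enhancement step can be reproduced directly with a grey-scale white top-hat, which keeps bright structures smaller than the structuring element. The footprint size and the synthetic ROI below are assumptions for the demonstration.

      import numpy as np
      from scipy import ndimage

      def enhance_microcalcifications(roi, size=7):
          """White top-hat: subtract the grey-scale opening from the image,
          retaining only bright structures smaller than the footprint."""
          footprint = np.ones((size, size), bool)   # assumed structuring element
          return ndimage.white_tophat(roi, footprint=footprint)

      # Synthetic ROI: smooth background plus two bright spots (illustrative)
      y, x = np.mgrid[0:64, 0:64]
      roi = 100 + 0.5 * x + 0.3 * y
      roi[20, 20] += 40
      roi[40, 45] += 35
      enhanced = enhance_microcalcifications(roi)
      print(enhanced.max(), enhanced[20, 20] > 30)  # spots stand out, background ~0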

  2. Develop algorithms to improve detectability of defects in Sonic IR imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2016-02-01

    Sonic Infrared (IR) technology is relatively new in the NDE family. It is a fast, wide-area imaging method that combines ultrasound excitation with infrared imaging: the former applies ultrasound energy to induce frictional heating in defects, while the latter captures the resulting IR emission from the target. The technology can detect both surface and subsurface defects, such as cracks and disbonds/delaminations, in various materials, from metals and metal alloys to composites. However, certain defects may produce only a very small IR signature that is buried in noise or background heating patterns. In such cases, effectively extracting the defect signals becomes critical to identifying the defects. In this paper, we present algorithms developed to improve the detectability of defects in Sonic IR.

  3. High-performance lossless and progressive image compression based on an improved integer lifting-scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7% and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%; the decoder is about 12.3 times faster than SPIHT's, raising its time efficiency by about 148%. Rather than requiring the largest number of wavelet transform levels, the algorithm achieves high coding efficiency whenever more than 3 levels are used. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission coding and decoding.
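
    For reference, Rice coding of a non-negative integer n with parameter k writes the quotient n >> k in unary followed by the k low-order remainder bits. A minimal encoder/decoder pair (not the paper's modified variant) is sketched below.

      def rice_encode(values, k):
          """Rice-code each non-negative integer: unary quotient, k-bit remainder."""
          bits = []
          for n in values:
              q, r = n >> k, n & ((1 << k) - 1)
              bits += [1] * q + [0]                     # unary quotient, 0 ends it
              bits += [(r >> b) & 1 for b in range(k - 1, -1, -1)]  # MSB first
          return bits

      def rice_decode(bits, k, count):
          out, i = [], 0
          for _ in range(count):
              q = 0
              while bits[i]:                            # read unary part
                  q, i = q + 1, i + 1
              i += 1                                    # skip the 0 terminator
              r = 0
              for _ in range(k):
                  r, i = (r << 1) | bits[i], i + 1
              out.append((q << k) | r)
          return out

      data = [3, 18, 0, 7, 42, 5]                       # e.g. mapped wavelet residuals
      code = rice_encode(data, k=3)
      assert rice_decode(code, 3, len(data)) == data
      print(len(code), "bits for", data)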

  4. Improvements of COMS Land Surface Temperature Retrieval Algorithm by considering diurnal variations of boundary layer temperature

    NASA Astrophysics Data System (ADS)

    Choi, Y. Y.; Suh, M. S.

    2015-12-01

    The National Meteorological Satellite Centre of the Republic of Korea operationally retrieves land surface temperature (LST) by applying a split-window LST algorithm (CSW_v1.0) to Communication, Ocean, and Meteorological Satellite (COMS) data. In order to improve COMS LST accuracy, Cho et al. (2015) developed six types of LST retrieval equations (CSW_v2.0) that consider the temperature lapse rate and water vapor/aerosol effects. Similar to CSW_v1.0, the LST retrieved by CSW_v2.0 had a correlation coefficient of 0.99 with the prescribed LST, and the root mean square error (RMSE) improved from 1.41 K to 1.39 K. However, CSW_v2.0 showed relatively poor performance when the temperature lapse rate is particularly large (superadiabatic cases during daytime or strong inversion cases during early morning). In this study, we upgraded CSW_v2.0 by considering the diurnal variations of boundary layer temperature, to reduce the relatively large errors under large-lapse-rate conditions. To achieve this, the diurnal variations of air temperature, along with the land surface temperature, are included in the radiative transfer simulations used to generate the pseudo-match-up database. Preliminary analysis shows that the RMSE and bias are reduced from 1.39 K to 1.14 K and from -0.03 K to -0.01 K, respectively. In this presentation, we will show detailed LST retrieval results with the new algorithms according to viewing geometry, temperature-lapse-rate condition, and water vapour amount, along with intercomparison results with MODIS LST data.
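
    Split-window algorithms of the CSW family estimate LST from the two thermal brightness temperatures, exploiting their differential water-vapour absorption. The generic form below uses placeholder coefficients; the operational values are regressed from the pseudo-match-up database per viewing geometry and lapse-rate class and are not reproduced here.

      def split_window_lst(t11, t12, coeffs=(1.274, 1.0, 2.82, 0.0)):
          """One common split-window form,
          LST = c0 + c1*T11 + c2*(T11 - T12) + c3*(T11 - T12)**2.
          The coefficients here are placeholders, not the COMS CSW values."""
          c0, c1, c2, c3 = coeffs
          dt = t11 - t12
          return c0 + c1 * t11 + c2 * dt + c3 * dt * dt

      # Brightness temperatures in kelvin (illustrative)
      print(round(split_window_lst(295.4, 293.1), 2))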

  5. Improving the Response of a Wheel Speed Sensor by Using a RLS Lattice Algorithm

    PubMed Central

    Hernandez, Wilmar

    2006-01-01

    Among the complete family of sensors for automotive safety, consumer and industrial application, speed sensors stand out as one of the most important. Actually, speed sensors have the diversity to be used in a broad range of applications. In today's automotive industry, such sensors are used in the antilock braking system, the traction control system and the electronic stability program. Also, typical applications are cam and crank shaft position/speed and wheel and turbo shaft speed measurement. In addition, they are used to control a variety of functions, including fuel injection, ignition timing in engines, and so on. However, some types of speed sensors cannot respond to very low speeds for different reasons. What is more, the main reason why such sensors are not good at detecting very low speeds is that they are more susceptible to noise when the speed of the target is low. In short, they suffer from noise and generally only work at medium to high speeds. This is one of the drawbacks of the inductive (magnetic reluctance) speed sensors and is the case under study. Furthermore, there are other speed sensors like the differential Hall Effect sensors that are relatively immune to interference and noise, but they cannot detect static fields. This limits their operations to speeds which give a switching frequency greater than a minimum operating frequency. In short, this research is focused on improving the performance of a variable reluctance speed sensor placed in a car under performance tests by using a recursive least-squares (RLS) lattice algorithm. Such an algorithm is situated in an adaptive noise canceller and carries out an optimal estimation of the relevant signal coming from the sensor, which is buried in a broad-band noise background where we have little knowledge of the noise characteristics. The experimental results are satisfactory and show a significant improvement in the signal-to-noise ratio at the system output.

  6. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images

    NASA Astrophysics Data System (ADS)

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory

    2011-10-01

    Past and current optical Earth observation systems designed by CNES use a fixed-rate data compression processing performed at a high rate in a pushbroom mode (also called scan-based mode). This process generates fixed-length data to the mass memory, and data downlink is performed at a fixed rate too. Because of on-board memory limitations and high data rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncation of the bit-plane description is the same for the whole segment, some parts of the segment have a poor image quality. These artefacts generally occur in low-energy areas within a segment of higher level of energy. In order to locally correct these areas, CNES has studied "exceptional processing" targeted for DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual Region-of-Interest handling, these amplified coefficients are processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.

  7. Synthesis of substantially monodispersed colloids

    NASA Technical Reports Server (NTRS)

    Klabunde, Kenneth J. (Inventor); Stoeva, Savka (Inventor); Sorensen, Christopher (Inventor)

    2003-01-01

    A method of forming ligated nanoparticles of the formula Y(Z).sub.x where Y is a nanoparticle selected from the group consisting of elemental metals having atomic numbers ranging from 21-34, 39-52, 57-83 and 89-102, all inclusive, the halides, oxides and sulfides of such metals, and the alkali metal and alkaline earth metal halides, and Z represents ligand moieties such as the alkyl thiols. In the method, a first colloidal dispersion is formed made up of nanoparticles solvated in a molar excess of a first solvent (preferably a ketone such as acetone), a second solvent different than the first solvent (preferably an organic aryl solvent such as toluene) and a quantity of ligand moieties; the first solvent is then removed under vacuum and the ligand moieties ligate to the nanoparticles to give a second colloidal dispersion of the ligated nanoparticles solvated in the second solvent. If substantially monodispersed nanoparticles are desired, the second dispersion is subjected to a digestive ripening process. Upon drying, the ligated nanoparticles may form a three-dimensional superlattice structure.

  8. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-01-01

    In order to improve the tracking accuracy, model estimation accuracy and quick response of multiple model maneuvering target tracking, the interacting multiple models five degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) algorithm processes all the models through a Markov Chain to simultaneously enhance the model tracking accuracy of target tracking. Then a five degree cubature Kalman filter (5CKF) evaluates the surface integral by a higher but deterministic odd ordered spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple models cubature Kalman filter (IMMCKF), interacting multiple models unscented Kalman filter (IMMUKF), 5CKF and the optimal mode transition matrix IMM (OMTM-IMM). PMID:27258285

  10. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-06-01

    In order to improve the tracking accuracy, model estimation accuracy and quick response of multiple model maneuvering target tracking, the interacting multiple models five degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple models (IMM) algorithm processes all the models through a Markov Chain to simultaneously enhance the model tracking accuracy of target tracking. Then a five degree cubature Kalman filter (5CKF) evaluates the surface integral by a higher but deterministic odd ordered spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple models cubature Kalman filter (IMMCKF), interacting multiple models unscented Kalman filter (IMMUKF), 5CKF and the optimal mode transition matrix IMM (OMTM-IMM).

  11. Improving word recognition in noise among hearing-impaired subjects with a single-channel cochlear noise-reduction algorithm.

    PubMed

    Fink, Nir; Furst, Miriam; Muchnik, Chava

    2012-09-01

    A common complaint of the hearing impaired is the inability to understand speech in noisy environments even with their hearing assistive devices. Only a few single-channel algorithms have significantly improved speech intelligibility in noise for hearing-impaired listeners. The current study introduces a cochlear noise reduction algorithm. It is based on a cochlear representation of acoustic signals and real-time derivation of a binary speech mask. The contribution of the algorithm to enhancing word recognition in noise was evaluated on a group of 42 normal-hearing subjects, 35 hearing-aid users, 8 cochlear implant recipients, and 14 participants with bimodal devices. Recognition scores of Hebrew monosyllabic words embedded in Gaussian noise at several signal-to-noise ratios (SNRs) were obtained with processed and unprocessed signals. The algorithm was not effective among the normal-hearing participants. However, it yielded a significant improvement in some of the hearing-impaired subjects under different listening conditions. Its most impressive benefit appeared among cochlear implant recipients: more than 20% improvement in the recognition score of noisy words was obtained by 12, 16, and 26 hearing-impaired subjects at SNRs of 30, 24, and 18 dB, respectively. The algorithm has the potential to improve speech intelligibility in background noise, yet further research is required to improve its performance.

  12. High Resolution Direction of Arrival (DOA) Estimation Based on Improved Orthogonal Matching Pursuit (OMP) Algorithm by Iterative Local Searching

    PubMed Central

    Wang, Wenyi; Wu, Renbiao

    2013-01-01

    DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm. PMID:23974150
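
    The two stages map onto a few lines each: standard OMP for the initial support, then a local search that tries neighbouring grid indices for each estimated DOA and keeps any swap that lowers the residual. The steering-matrix construction and all constants below are assumptions for the demonstration.

      import numpy as np

      def _resid(A, y, support):
          x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
          return np.linalg.norm(y - A[:, support] @ x)

      def omp(A, y, k):
          """Standard OMP: pick the column most correlated with the residual,
          then re-fit least squares on the enlarged support."""
          support, r = [], y.copy()
          for _ in range(k):
              j = int(np.argmax(np.abs(A.conj().T @ r)))
              if j not in support:
                  support.append(j)
              x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              r = y - A[:, support] @ x
          return support

      def ils_refine(A, y, support, radius=2, sweeps=5):
          """Iterative local searching (our reading of ILS-OMP): test nearby
          grid indices for each estimated DOA, keep swaps that cut the residual."""
          support = list(support)
          for _ in range(sweeps):
              improved = False
              for i in range(len(support)):
                  s = support[i]
                  for cand in range(max(0, s - radius), min(A.shape[1], s + radius + 1)):
                      if cand in support:
                          continue
                      trial = support[:i] + [cand] + support[i + 1:]
                      if _resid(A, y, trial) < _resid(A, y, support):
                          support, improved = trial, True
              if not improved:
                  break
          return sorted(support)

      # Invented scenario: 16-sensor array, 1-degree grid, two close sources
      M, grid = 16, np.deg2rad(np.arange(0, 90, 1.0))
      A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid))) / np.sqrt(M)
      rng = np.random.default_rng(3)
      x = np.zeros(len(grid), complex); x[[40, 44]] = [1.0, 0.8]
      y = A @ x + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))
      print(ils_refine(A, y, omp(A, y, 2)))          # ideally [40, 44]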

  14. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    PubMed

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284
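
    The projection step, as we understand it from the abstract, maps per-satellite pseudorange corrections through the least-squares geometry matrix built from the line-of-sight unit vectors (plus a receiver-clock column) into a position-domain correction. A sketch with an invented four-satellite geometry:

      import numpy as np

      def position_domain_correction(los_vectors, prc):
          """Project pseudorange corrections (PRC, meters) into a 3-D position
          correction via the least-squares geometry matrix; the fourth column
          absorbs the common receiver-clock term."""
          H = np.hstack([los_vectors, np.ones((len(prc), 1))])  # [unit LOS | clock]
          dx, *_ = np.linalg.lstsq(H, prc, rcond=None)
          return dx[:3]

      # Invented geometry: four satellites, ECEF line-of-sight unit vectors
      los = np.array([[0.3, 0.4, 0.866], [-0.5, 0.2, 0.843],
                      [0.1, -0.7, 0.707], [0.6, 0.6, 0.529]])
      los /= np.linalg.norm(los, axis=1, keepdims=True)
      prc = np.array([2.1, 1.7, 2.4, 1.9])
      print(np.round(position_domain_correction(los, prc), 2))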

  15. Delineating complex spatiotemporal distribution of earthquake aftershocks: an improved Source-Scanning Algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yen-Che; Kao, Honn; Rosenberger, Andreas; Hsu, Shu-Kun; Huang, Bor-Shouh

    2012-06-01

    Conventional earthquake location methods depend critically on the correct identification of seismic phases and their arrival times from seismograms. Accurate phase picking is particularly difficult for aftershocks that occur closely in time and space, mostly because of the ambiguity of correlating the same phase at different stations. In this study, we introduce an improved Source-Scanning Algorithm (ISSA) for the purpose of delineating the complex distribution of aftershocks without time-consuming and labour-intensive phase-picking procedures. The improvements include the application of a ground motion analyser to separate P and S waves, the automatic adjustment of time windows for 'brightness' calculation based on the scanning resolution and a modified brightness function to combine constraints from multiple phases. Synthetic experiments simulating a challenging scenario are conducted to demonstrate the robustness of the ISSA. The method is applied to a field data set selected from the ocean-bottom-seismograph records of an offshore aftershock sequence southwest of Taiwan. Although visual inspection of the seismograms is ambiguous, our ISSA analysis clearly delineates two events that can best explain the observed waveform pattern.
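
    A toy sketch of the source-scanning idea follows (generic SSA brightness stacking, not the authors' ISSA with ground-motion analysis and adaptive windows): for each trial hypocentre and origin time, normalized absolute amplitudes are stacked at the predicted arrival samples, and the brightest trial point is taken as the event location. The sampling interval, window half-width and travel-time inputs are assumptions.

```python
import numpy as np

def brightness(traces, travel_times, origin_idx, dt, half_win=5):
    """Stack each station's normalized |amplitude| around its predicted
    arrival sample for one trial (hypocentre, origin-time) pair."""
    b = 0.0
    for trace, tt in zip(traces, travel_times):
        arrival = origin_idx + int(round(tt / dt))
        lo = max(0, arrival - half_win)
        hi = min(len(trace), arrival + half_win + 1)
        if lo < hi:
            b += np.max(np.abs(trace[lo:hi])) / np.max(np.abs(trace))
    return b / len(traces)

def scan(traces, tt_table, n_samples, dt):
    """Return the (grid point, origin sample, brightness) with maximal
    brightness; tt_table[g] holds the predicted travel times from grid
    point g to every station."""
    best = (None, None, -np.inf)
    for g, travel_times in enumerate(tt_table):
        for t0 in range(n_samples):
            b = brightness(traces, travel_times, t0, dt)
            if b > best[2]:
                best = (g, t0, b)
    return best
```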

  16. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    PubMed

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-06-18

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angles provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed (ECEF) coordinates. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be corrected directly in software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings.

  17. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angles provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed (ECEF) coordinates. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be corrected directly in software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284

  18. An improved algorithm for tracking multiple, freely moving particles in a Positron Emission Particle Tracking system

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Fryer, P. J.; Bakalis, S.; Fan, X.; Parker, D. J.; Seville, J. P. K.

    2007-07-01

    Positron Emission Particle Tracking (PEPT) is a powerful technique capable of following a single tracer accurately and non-invasively in flow and mixing processes. It has recently been extended to observe the rotation of a large particle by tracking three small positron-emitting tracers mounted, with fixed separation distances, on its surface. The Multiple-Positron Emission Particle Tracking technique has been successfully used to study the rotational and translational behaviours of a large particle in a multiphase flow; however, it was not capable of following multiple freely moving particles. This paper presents an improved Multiple-Positron Emission Particle Tracking technique that is able to track more than one particle without constraints on the separation distance between the particles. It consists of an improved algorithm for location calculation, particle identification and time reconstruction. The information obtained can be used to understand the interactions and relative motions of particles with different sizes, densities and material textures in multiphase systems, and is particularly useful in pharmaceutical, chemical and metallurgical engineering studies.

  19. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes, and studies the performance of the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system in the quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can greatly enlarge the frequency and phase offset estimation ranges and enhance the accuracy of the system, and that the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  20. An improved hurricane wind vector retrieval algorithm using SeaWinds scatterometer

    NASA Astrophysics Data System (ADS)

    Laupattarakasem, Peth

    Over the last three decades, microwave remote sensing has played a significant role in ocean surface wind measurement, and several scatterometer missions have flown in space since the early 1990s. Although they have been extremely successful for measuring ocean surface winds with high accuracy for the vast majority of marine weather conditions, the conventional scatterometer unfortunately cannot measure extreme wind conditions such as hurricanes. The SeaWinds scatterometer, onboard the QuikSCAT satellite, is NASA's only operating scatterometer at present. Like its predecessors, it measures global ocean vector winds; however, for a number of reasons, the quality of the measurements in hurricanes is significantly degraded. The most pressing issues are associated with the presence of precipitation and Ku-band saturation effects, especially in extreme wind speed regimes such as tropical cyclones (hurricanes and typhoons). In this dissertation, an improved hurricane ocean vector wind retrieval approach, named Q-Winds, was developed using existing SeaWinds scatterometer data. This unique data processing algorithm uses combined SeaWinds active and passive measurements to extend the use of SeaWinds for tropical cyclones up to approximately 50 m/s (Hurricane Category 3). Results show that Q-Winds wind speeds are consistently superior to the standard SeaWinds Project Level 2B wind speeds for hurricane wind speed measurement, and that Q-Winds provides a more reliable rain-flagging algorithm for quality assurance purposes. Compared to H*Wind, Q-Winds achieves an error of ~9%, while L2B-12.5km exhibits wind speed saturation at ~30 m/s with an error of ~31% for high wind speeds (>40 m/s).

  1. Improved Limb Atmospheric Spectrometer (ILAS) data retrieval algorithm for Version 5.20 gas profile products

    NASA Astrophysics Data System (ADS)

    Yokota, T.; Nakajima, H.; Sugita, T.; Tsubaki, H.; Itou, Y.; Kaji, M.; Suzuki, M.; Kanzawa, H.; Park, J. H.; Sasano, Y.

    2002-12-01

    The Improved Limb Atmospheric Spectrometer (ILAS), a sensor for stratospheric ozone layer observation using a solar occultation technique, was mounted on the Advanced Earth Observing Satellite (ADEOS), which was put into a Sun-synchronous polar orbit in August 1996. Operational measurements were recorded over high-latitude regions from November 1996 to June 1997. This paper describes the Version 5.20 data processing algorithm used to retrieve vertical profiles of gases such as ozone, nitric acid, nitrogen dioxide, nitrous oxide, methane, and water vapor from the infrared spectral measurements of ILAS. To simultaneously derive mixing ratios of individual gas species as a function of altitude, the nonlinear least squares method was utilized for spectral fitting, and the onion peeling method was applied to perform vertical profiling. This paper also discusses in detail the estimation of errors (internal and external) associated with the derived gas profiles and compares these errors with the repeatability. The internal error estimated from residuals in spectral fitting was generally larger than the repeatability, which suggests either that some unknown factors have not been incorporated into the forward model for simulating observed transmittance data or that some parameters in the model are inaccurate. The external error was almost comparable in magnitude to the repeatability. Numerical simulations were carried out to investigate the performance of the nongaseous correction technique. The results showed that the background level of sulfuric acid aerosols has little effect on the retrieved profiles, while polar stratospheric clouds (PSCs) with extinction coefficients of the order of 10⁻³ km⁻¹ at a wavelength of 780 nm have nonnegligible effects on the profiles of some gas species. Despite the problems that require further investigation, it is shown that the ILAS Version 5.20 algorithm generates scientifically useful products.
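
    The onion peeling step can be illustrated with a simplified linear, single-absorber version (an assumption for clarity; ILAS actually performs nonlinear least squares fitting at each layer): because the ray tangent at a given layer only traverses that layer and the layers above it, the path-length matrix is triangular and the profile can be solved from the top of the atmosphere downward by forward substitution.

```python
import numpy as np

def onion_peel(slant_depths, path_lengths):
    """Retrieve layer quantities x from slant optical depths tau, where
    tau = L @ x and L[i, j] is the path length of ray i (tangent at
    layer i) through layer j. Layer 0 is the topmost tangent height,
    so L is lower triangular and solvable top-down."""
    n = len(slant_depths)
    x = np.zeros(n)
    for i in range(n):
        contribution_above = path_lengths[i, :i] @ x[:i]
        x[i] = (slant_depths[i] - contribution_above) / path_lengths[i, i]
    return x
```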

  2. An improved method of early diagnosis of smoking-induced respiratory changes using machine learning algorithms.

    PubMed

    Amaral, Jorge L M; Lopes, Agnaldo J; Jansen, José M; Faria, Alvaro C D; Melo, Pedro L

    2013-12-01

    The purpose of this study was to develop an automatic classifier to increase the accuracy of the forced oscillation technique (FOT) for diagnosing early respiratory abnormalities in smoking patients. The data consisted of FOT parameters obtained from 56 volunteers, 28 healthy and 28 smokers with low tobacco consumption. Several supervised learning techniques were investigated, including logistic linear classifiers, k-nearest neighbors (KNN), neural networks and support vector machines (SVM). To evaluate performance, the ROC curve of the most accurate parameter was established as the baseline. To determine the best input features and classifier parameters, we used genetic algorithms and 10-fold cross-validation with the average area under the ROC curve (AUC) as the performance measure. In the first experiment, the original FOT parameters were used as input. We observed a significant improvement in accuracy (KNN=0.89 and SVM=0.87) compared with the baseline (0.77). The second experiment performed a feature selection on the original FOT parameters. This selection did not cause any significant improvement in accuracy, but it was useful in identifying the most adequate FOT parameters. In the third experiment, we performed a feature selection on the cross products of the FOT parameters. This selection resulted in a further increase in AUC (KNN=SVM=0.91), which allows for high diagnostic accuracy. In conclusion, machine learning classifiers can help identify early smoking-induced respiratory alterations. The use of FOT cross products and the search for the best features and classifier parameters can markedly improve the performance of machine learning classifiers. PMID:24001924
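
    A minimal scikit-learn sketch of the evaluation protocol described above (10-fold cross-validated AUC for KNN and SVM classifiers); the feature matrix here is random placeholder data standing in for the FOT parameters, and the genetic-algorithm feature search is omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 6))   # placeholder: 56 volunteers, 6 FOT parameters
y = np.repeat([0, 1], 28)      # 28 healthy controls, 28 smokers

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    model = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean 10-fold AUC = {auc:.2f}")
```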

  3. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    PubMed

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
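
    The event-positioning loop lends itself to a compact sketch (a generic reading of the iterative position-weighted centre-of-gravity scheme; the channel pitch, Gaussian width and stopping tolerance are illustrative): the plain centre of gravity seeds the estimate, and each iteration re-weights the channel signals with a Gaussian centred at the current estimate until the position converges.

```python
import numpy as np

def iterative_weighted_cog(signals, positions, sigma=1.6, max_iters=20, tol=1e-4):
    """signals: (N,) SiPM channel amplitudes; positions: (N, 2) channel
    centres in mm. Returns the estimated (x, y) event position."""
    signals = np.asarray(signals, dtype=float)
    positions = np.asarray(positions, dtype=float)
    est = signals @ positions / signals.sum()       # plain centre of gravity
    for _ in range(max_iters):
        d2 = np.sum((positions - est) ** 2, axis=1)
        w = signals * np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian re-weighting
        new_est = w @ positions / w.sum()
        if np.linalg.norm(new_est - est) < tol:     # converged
            return new_est
        est = new_est
    return est
```

    Consistent with the abstract, a loop of this shape typically converges in well under 20 iterations, which is what makes a front-end FPGA implementation plausible.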

  4. Temperature drift modeling and compensation of fiber optical gyroscope based on improved support vector machine and particle swarm optimization algorithms.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2016-08-10

    Modeling and compensation of temperature drift is an important method for improving the precision of fiber-optic gyroscopes (FOGs). In this paper, a new method of modeling and compensation for FOGs based on improved particle swarm optimization (PSO) and support vector machine (SVM) algorithms is proposed. The convergence speed and reliability of PSO are improved by introducing a dynamic inertia factor. The regression accuracy of SVM is improved by introducing a combined kernel function with four parameters and piecewise regression with fixed steps. The steps are as follows. First, the parameters of the combined kernel function are optimized by the improved PSO algorithm. Second, the proposed kernel function of SVM is used to carry out piecewise regression, yielding the regression model. Third, the temperature drift is compensated for using the regression data. In terms of the mean square percentage error, the regression accuracy of the proposed method increased by 83.81% compared to the traditional SVM.
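
    The following sketch shows the general shape of the first step, hedged heavily: a PSO loop with a linearly decaying (dynamic) inertia factor tuning SVM regression hyperparameters by cross-validated error. For brevity it optimizes the C and gamma of a standard RBF kernel rather than the paper's four-parameter combined kernel, and the bounds, swarm size and acceleration constants are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def pso_tune_svr(X, y, n_particles=10, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([1e-2, 1e-4])            # lower bounds on (C, gamma)
    hi = np.array([1e3, 1e1])              # upper bounds on (C, gamma)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):                        # higher is better
        model = SVR(kernel="rbf", C=p[0], gamma=p[1])
        return cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)]
    for t in range(n_iters):
        w = 0.9 - 0.5 * t / n_iters        # dynamic inertia factor
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest                           # best (C, gamma) found
```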

  5. Multi-objective optimization in spatial planning: Improving the effectiveness of multi-objective evolutionary algorithms (non-dominated sorting genetic algorithm II)

    NASA Astrophysics Data System (ADS)

    Karakostas, Spiros

    2015-05-01

    The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.

  6. Evaluating some computer enhancement algorithms that improve the visibility of cometary morphology

    NASA Technical Reports Server (NTRS)

    Larson, Stephen M.; Slaughter, Charles D.

    1992-01-01

    Digital enhancement of cometary images is a necessary tool in studying cometary morphology. Many image processing algorithms, some developed specifically for comets, have been used to enhance the subtle, low contrast coma and tail features. We compare some of the most commonly used algorithms on two different images to evaluate their strong and weak points, and conclude that there currently exists no single 'ideal' algorithm, although the radial gradient spatial filter gives the best overall result. This comparison should aid users in selecting the best algorithm to enhance particular features of interest.

  7. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology for processing C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of the current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated against a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. It is clearly evident that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time.
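
    For reference, converting corrected reflectivity to rain rate through an R-Z power law takes only a few lines; the sketch below uses the classic Marshall-Palmer coefficients as placeholders, whereas the paper derives event-specific coefficients.

```python
def rain_rate_from_reflectivity(z_dbz, a=200.0, b=1.6):
    """Invert the power law Z = a * R**b for rain rate R in mm/h.
    z_dbz is reflectivity in dBZ; a and b default to Marshall-Palmer
    values and would be replaced by event-specific estimates."""
    z_linear = 10.0 ** (z_dbz / 10.0)   # dBZ -> linear Z (mm^6 / m^3)
    return (z_linear / a) ** (1.0 / b)

print(rain_rate_from_reflectivity(40.0))  # ~11.5 mm/h for 40 dBZ
```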

  8. Improvement of phase diversity algorithm for non-common path calibration in extreme AO context

    NASA Astrophysics Data System (ADS)

    Robert, Clélia; Fusco, Thierry; Sauvage, Jean-François; Mugnier, Laurent

    2008-07-01

    Exoplanet direct imaging with a ground-based telescope needs a very high performance adaptive optics (AO) system, so-called eXtreme AO (XAO), a coronagraph device, and a smart imaging process. One limitation of AO systems in operation remains the Non-Common Path Aberrations (NCPA). To achieve the ultimate XAO performance, these aberrations have to be measured with a dedicated wavefront sensor placed in the imaging camera focal plane, and then pre-compensated using the AO closed-loop process. In any event, the pre-compensation should minimize the aberrations at the coronagraph focal plane mask. An efficient way to measure the NCPA is the phase diversity technique. A pixel-wise approach is well suited to estimating NCPA on large pupils and to subsequent projection onto a deformable mirror with Cartesian geometry. However, it calls for careful regularization for optimal results. The weight of the regularization is written in closed form for unsupervised tuning. The accuracy of NCPA pre-compensation is below 8 nm for a wide range of conditions. Point-by-point phase estimation improves the accuracy of the phase diversity method. The algorithm is validated in simulation and experimentally. It will be implemented in SAXO, the XAO system of the second-generation VLT instrument SPHERE.

  9. Improving chemical mapping algorithm and visualization in full-field hard x-ray spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong

    2013-12-01

    X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-dimensional (2D) chemical mapping has been successfully applied to study many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with the weighted sum of standard spectra of the individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute force enumeration method, and (ii) a constrained least squares minimization algorithm proposed by us. Next, since 2D spectral fitting can be conducted pixel by pixel, both methods can in principle be implemented in parallel. In order to demonstrate the feasibility of parallel computing in the chemical mapping problem and to investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the real data.
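
    A per-pixel fit of the kind described above can be sketched with SciPy's non-negative least squares as a stand-in for the authors' constrained minimization (the non-negativity constraint plus a normalization step approximates the requirement that the weights be valid fractions):

```python
import numpy as np
from scipy.optimize import nnls

def fit_pixel(spectrum, standards):
    """Fit one pixel's attenuation spectrum as a non-negative weighted sum
    of standard spectra. standards: (n_energies, n_components); returns
    component fractions normalized to sum to 1 where possible."""
    weights, _residual = nnls(standards, spectrum)
    total = weights.sum()
    return weights / total if total > 0 else weights
```

    Because each pixel is fitted independently, the loop over pixels parallelizes trivially (e.g. with multiprocessing.Pool), which is the efficiency angle the paper investigates on a multi-core cluster.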

  10. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization

    PubMed Central

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate. PMID:26064085

  11. A procedure for the reliability improvement of the oblique ionograms automatic scaling algorithm

    NASA Astrophysics Data System (ADS)

    Ippolito, Alessandro; Scotto, Carlo; Sabbagh, Dario; Sgrigna, Vittorio; Maher, Phillip

    2016-05-01

    A procedure based on the combined use of the Oblique Ionogram Automatic Scaling Algorithm (OIASA) and the Autoscala program is presented. Using Martyn's equivalent path theorem, 384 oblique soundings from a high-quality data set have been converted into vertical ionograms and analyzed by the Autoscala program. The ionograms pertain to the radio link between Curtin W.A. (CUR) and Alice Springs N.T. (MTE), Australia, at geographical coordinates (17.60°S; 123.82°E) and (23.52°S; 133.68°E), respectively. The critical frequency foF2 values extracted from the converted vertical ionograms by Autoscala were then compared with the foF2 values derived from the maximum usable frequencies (MUFs) provided by OIASA. A quality factor Q for the MUF values autoscaled by OIASA has been identified. Q represents the difference between the foF2 value scaled by Autoscala from the converted vertical ionogram and the foF2 value obtained by applying the secant law to the MUF provided by OIASA. Using the receiver operating characteristic curve, an appropriate threshold level Qt was chosen for Q to improve the performance of OIASA.

  12. A neural-network-based exponential H∞ synchronisation for chaotic secure communication via improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hsiao, Feng-Hsiag

    2016-10-01

    In this study, a novel approach via an improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to the GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. The IGA, which exhibits better performance than the traditional GA, is used to synthesise a fuzzy observer that not only realises the exponential synchronisation, but also achieves optimal H∞ performance by minimising the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.

  13. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates of body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and a low mean square error (r²=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques, as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions and do not require manual separation of body segments from skeletons. PMID:26361559
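
    The final regression step has a simple shape; here is a sketch with invented placeholder calibration data (the paper's actual specimen volumes and masses are not given in the abstract), fitting the usual log-log allometric form:

```python
import numpy as np

# Hypothetical calibration data: alpha-shape volumes (m^3) and known body
# masses (kg) for extant mammals; the values are placeholders.
volumes = np.array([0.08, 0.35, 1.2, 2.9, 5.1])
masses = np.array([70.0, 320.0, 1100.0, 2700.0, 4900.0])

# Fit log10(mass) = a * log10(volume) + b.
a, b = np.polyfit(np.log10(volumes), np.log10(masses), 1)

def predict_mass(volume):
    """Predict body mass (kg) from a fitted alpha-shape volume (m^3)."""
    return 10.0 ** (a * np.log10(volume) + b)
```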

  14. IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D

    SciTech Connect

    Cumberland, R.; Mesina, G.

    2009-01-01

    The RELAP5-3D time step method is used to perform thermo-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of the previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps, to improve execution speed and to control error. The new RELAP5-3D time step method being studied involves making the time step proportional to the material courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine the performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
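
    The step-size rule as described reduces to a few lines; the sketch below is a literal reading of it (MCL-proportional target, growth capped at a factor of two, halving on failure or excess mass error), with the reported best constant m = 0.9 as the default.

```python
def next_time_step(dt_prev, mcl, m=0.9, step_failed=False):
    """Return the next advancement's time step.

    dt_prev:     previous time step (s)
    mcl:         current material courant limit (s)
    m:           MCL proportionality constant (0.9 reported as best)
    step_failed: True if the step failed or mass error was excessive
    """
    if step_failed:
        return 0.5 * dt_prev           # cut the step in half, as before
    # Track m * MCL, but never more than double between advancements.
    return min(m * mcl, 2.0 * dt_prev)
```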

  15. SU-E-I-82: Improving CT Image Quality for Radiation Therapy Using Iterative Reconstruction Algorithms and Slightly Increasing Imaging Doses

    SciTech Connect

    Noid, G; Chen, G; Tai, A; Li, X

    2014-06-01

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. For CT in radiation therapy (RT), slightly increasing the imaging dose to improve IQ may be justified if it can substantially enhance structure delineation. The purpose of this study is to investigate and quantify the IQ enhancement resulting from increased imaging doses and the use of IR algorithms. Methods: CT images were acquired for phantoms, built to evaluate IQ metrics including spatial resolution, contrast and noise, with a variety of imaging protocols using a CT scanner (Definition AS Open, Siemens) installed inside a Linac room. Representative patients were scanned once the protocols were optimized. Both phantom and patient scans were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and Filtered Back Projection (FBP) methods, and the IQ metrics of the obtained CTs were compared. Results: IR techniques are demonstrated to preserve spatial resolution, as measured by the point spread function, and to reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is doubled by adopting the highest SAFIRE strength. As expected, increasing the imaging dose reduces noise for both SAFIRE and FBP reconstructions. The contrast-to-noise ratio increases from 3 to 5 when the dose is increased by a factor of 4. Similar IQ improvement was observed in the CTs of selected patients with pancreatic and prostate cancers. Conclusion: The IR techniques produce a measurable enhancement of CT IQ by reducing noise. Increasing the imaging dose further reduces noise independent of the IR techniques. The improved CT enables more accurate delineation of tumors and/or organs at risk during RT planning and delivery guidance.

  16. Improve the ranking algorithm of the GEO Discovery and Access Broker through resource accessibility assessment

    NASA Astrophysics Data System (ADS)

    Santoro, M.; Sorichetta, A.; Roglia, E.; Quaglia, A.; Craglia, M.; Nativi, S.

    2013-12-01

    The vision of the Global Earth Observation System of Systems (GEOSS) is the achievement of societal benefits through the voluntary contribution and sharing of resources to better understand the relationships between society and the environment in which we live. Addressing complex issues in the field of geosciences requires a combined effort from many disciplines, ranging from the physical to the social sciences and including the humanities. The introduction of the Discovery and Access Broker (DAB) in the GEOSS Common Infrastructure (GCI) significantly lowered the entry barriers for data users and producers, and thus increased the order of magnitude of discoverable resources in the GCI from hundreds of thousands to millions. This is a major step forward, but from discovery to access the road is still long! Missing accessibility information in the metadata and broken links represent the major issues preventing the real exploitation of GCI resources. This is a considerable problem for users attempting to exploit services and datasets obtained through a DAB query. The issue can be minimized by providing the user with a ranked list of results that takes into account the real availability and accessibility of resources. In this work we present a methodology that overcomes the problem described above by improving the ranking algorithm currently applied to the result set of a query to the DAB. The proposed methodology is based on the following steps: 1) verify whether information related to the accessibility of resources is described in the metadata provided by GEOSS contributors; 2) if accessibility information is provided, identify the type of resource (e.g. service, dataset) and produce modified and standardized accessibility information in a consistent manner; 3) use the standardized information to test the accessibility and availability of resources using a probing approach; 4) use the returned results in the ranking algorithm to assign the correct weight to each resource.
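
    One way to realize the probing of step 3 is sketched below with Python's standard library only; this is an illustration of the general approach, not the DAB's actual implementation, and the weighting function is a hypothetical example.

```python
import urllib.request

def probe(url, timeout=5.0):
    """Probe a resource URL with an HTTP HEAD request; return True when the
    resource responds with a success status, False on errors or timeouts."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

def accessibility_weight(urls):
    """Fraction of a resource's access links that actually respond; feeding
    this into the ranking score demotes records with broken links."""
    if not urls:
        return 0.0
    return sum(probe(u) for u in urls) / len(urls)
```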

  17. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm suited to such situations was proposed. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on methods of searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910

  18. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm

    PubMed Central

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm suited to such situations was proposed. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on methods of searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910

  19. A new algorithm for improving the low contrast of computed tomography images using tuned brightness controlled single-scale Retinex.

    PubMed

    Al-Ameen, Zohair; Sulong, Ghazali

    2015-01-01

    Contrast is a distinctive visual attribute that indicates the quality of an image. Computed Tomography (CT) images are often characterized as poor quality due to their low-contrast nature. Although many innovative ideas have been proposed to overcome this problem, the outcomes, especially in terms of accuracy, visual quality and speed, fall short, and there remains considerable room for improvement. Therefore, an improved version of the single-scale Retinex algorithm is proposed to enhance the contrast while preserving the standard brightness and natural appearance, with low implementation time and without accentuating the noise in CT images. The novelties of the proposed algorithm consist of tuning the standard single-scale Retinex, adding a normalized-ameliorated sigmoid function and adapting some parameters to improve its enhancement ability. The proposed algorithm is tested with synthetically and naturally degraded low-contrast CT images, and its performance is also verified against contemporary enhancement techniques using two prevalent quality evaluation metrics: SSIM and UIQI. The results obtained from intensive experiments exhibited significant improvement not only in enhancing the contrast but also in increasing the visual quality of the processed images. Finally, the proposed low-complexity algorithm provided satisfactory results with no apparent errors and outperformed all the comparative methods.
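
    A bare-bones sketch of the underlying pipeline follows (classic single-scale Retinex followed by a normalized sigmoid remap; the paper's specific tuning and parameter adaptation are not detailed in the abstract, so the sigma and gain values below are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tuned_ssr(image, sigma=30.0, gain=5.0):
    """Single-scale Retinex (log image minus log of its Gaussian-blurred
    illumination estimate), then a sigmoid remap to stretch mid-tones."""
    img = image.astype(float) + 1.0                 # avoid log(0)
    retinex = np.log(img) - np.log(gaussian_filter(img, sigma))
    r = (retinex - retinex.min()) / (np.ptp(retinex) + 1e-12)
    out = 1.0 / (1.0 + np.exp(-gain * (r - 0.5)))   # normalized sigmoid
    return (out - out.min()) / (np.ptp(out) + 1e-12)
```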

  20. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-08-05

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm suited to such situations was proposed. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on methods of searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms.

  1. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  2. Vibration control of a flexible clamped-clamped plate based on an improved FULMS algorithm and laser displacement measurement

    NASA Astrophysics Data System (ADS)

    Xie, Lingbo; Qiu, Zhi-cheng; Zhang, Xian-min

    2016-06-01

    This paper presents a novel active resonant vibration control experiment on a flexible clamped-clamped plate using an improved filtered-U least mean square (FULMS) algorithm and laser displacement measurement. Unlike the widely used PZT sensors or acceleration transducers, the vibration of the flexible clamped-clamped plate is measured by a non-contact laser displacement sensor, which offers higher measurement accuracy and adds no load to the plate. The conventional FULMS algorithm often uses a fixed step size and needs a reference signal related to the external disturbance. However, a fixed step size cannot deliver both fast convergence and a low residual error, so a variable step size method is investigated. In addition, it is difficult to extract a reference signal related to the vibration source directly in practical applications, so it is practically useful to construct the reference signal from the controller parameters and the residual vibration signal. The experimental results demonstrate that the improved FULMS algorithm gives a better vibration control effect than both proportional-derivative (PD) feedback control and the fixed step-size algorithm.

  3. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope, together with an error budget for the discharge calculations. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics, and is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. The algorithm's discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River in the AirSWOT case with the expected observation error; the discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter for correcting errors in the discharge estimates was shown to improve algorithm performance. The error budget shows that the primary source of uncertainty was the a priori uncertainty in the bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics: for example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
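
    Since the uncertainty analysis above is anchored to Manning's equation, it is worth writing it out; in SI units the discharge follows directly from the flow area, hydraulic radius, slope and roughness coefficient (the sample values below are placeholders):

```python
def manning_discharge(n, area, hydraulic_radius, slope):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2),
    with n the roughness coefficient, A the flow area (m^2), R the
    hydraulic radius (m) and S the slope (dimensionless)."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Illustrative values only: n = 0.035, A = 300 m^2, R = 3 m, S = 1e-4.
print(manning_discharge(0.035, 300.0, 3.0, 1e-4))  # ~178 m^3/s
```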

  4. Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators

    NASA Astrophysics Data System (ADS)

    Helmich-Paris, Benjamin; Visscher, Lucas

    2016-09-01

    We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps in finding the minimax approximation. If pre-tabulation of initial guesses is to be avoided, strategies for a sufficiently robust algorithm have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that three times fewer function evaluations are required altogether when applying it to typical non-relativistic and relativistic quantum chemical systems.

  5. Exponentially improved classical and quantum algorithms for three-body Ising models

    NASA Astrophysics Data System (ADS)

    Van den Nest, M.; Dür, W.

    2014-01-01

    We present an algorithm to approximate partition functions of three-body classical Ising models on two-dimensional lattices of arbitrary genus, in the real-temperature regime. Even though our algorithm is purely classical, it is designed by exploiting a connection to topological quantum systems, namely, the color codes. The algorithm performance (in achievable accuracy) is exponentially better than other approaches that employ mappings between partition functions and quantum state overlaps. In addition, our approach gives rise to a protocol for quantum simulation of such Ising models by simply measuring local observables on color codes.

  6. AsteroidZoo: A New Zooniverse project to detect asteroids and improve asteroid detection algorithms

    NASA Astrophysics Data System (ADS)

    Beasley, M.; Lewicki, C. A.; Smith, A.; Lintott, C.; Christensen, E.

    2013-12-01

    We present a new citizen science project: AsteroidZoo. A collaboration between Planetary Resources, Inc., the Zooniverse team, and the Catalina Sky Survey, it will bring the science of asteroid identification to the citizen scientist. Volunteer astronomers have proved to be a critical asset in the identification and characterization of asteroids, especially potentially hazardous objects. These contributions, to date, have required that the volunteer possess a moderate telescope and the ability and willingness to be responsive to observing requests. Our new project will use data collected by the Catalina Sky Survey (CSS), currently the most productive asteroid survey, and make them available to anyone with sufficient interest and an internet connection. As previous work by the Zooniverse has demonstrated, citizen scientists are superb at classifying objects. Even the best automated searches require human intervention to identify new objects. These searches are optimized to reduce false positive rates and to prevent a single operator from being overloaded with requests. With access to the large number of people in the Zooniverse, we will be able to avoid that problem and instead work to produce a complete detection list. Each frame from CSS will be searched in detail, generating a large number of new detections. We will be able to evaluate the completeness of the CSS data set and potentially provide improvements to the automated pipeline. The data corpus produced by AsteroidZoo will be used as a training environment for machine learning challenges in the future. Our goals include a more complete asteroid detection algorithm and a minimum-computation program that skims the cream of the data, suitable for implementation on small spacecraft. We aim to have the site go live in fall 2013.

  7. Use of genetic algorithms to improve the solid waste collection service in an urban area.

    PubMed

    Buenrostro-Delgado, Otoniel; Ortega-Rodriguez, Juan Manuel; Clemitshaw, Kevin C; González-Razo, Carlos; Hernández-Paniagua, Iván Y

    2015-07-01

    Increasing generation of Urban Solid Waste (USW) has become a significant issue in developing countries due to unprecedented population growth and high rates of urbanisation. This issue has exceeded current plans and programs of local governments to manage and dispose of USW. In this study, a Genetic Algorithm for Rule-set Production (GARP) integrated into a Geographic Information System (GIS) was used to find areas with socio-economic conditions that are representative of the generation of USW constituents in such areas. Socio-economic data of selected variables categorised by Basic Geostatistical Areas (BGAs) were taken from the 2000 National Population Census (NPC). USW and additional socio-economic data were collected during two survey campaigns in 1998 and 2004. Areas for sampling of USW were stratified into lower, middle and upper economic strata according to income. Data on USW constituents were analysed using descriptive statistics and Multivariate Analysis. ARC View 3.2 was used to convert the USW data and socio-economic variables to spatial data. Desk-top GARP software was run to generate a spatial model to identify areas with similar socio-economic conditions to those sampled. Results showed that socio-economic variables such as monthly income and education are positively correlated with waste constituents generated. The GARP used in this study revealed BGAs with similar socio-economic conditions to those sampled, where a similar composition of waste constituents generated is expected. Our results may be useful to decrease USW management costs by improving the collection services. PMID:25869842

  8. Improved blood velocity measurements with a hybrid image filtering and iterative Radon transform algorithm

    PubMed Central

    Chhatbar, Pratik Y.; Kara, Prakash

    2013-01-01

    Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
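
    The processing chain described above (Sobel pre-filtering followed by Radon transforms whose angular sampling is iteratively narrowed around the best estimate) might be sketched as follows; the coarse step, refinement factor and variance criterion are common choices rather than the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import sobel
from skimage.transform import radon

def streak_angle(spacetime_image, coarse_step=5.0, n_refine=3):
    """Estimate the RBC streak angle (degrees) in a space-time line-scan
    image. The variance over the Radon sinogram peaks when the projection
    direction aligns with the streaks."""
    img = sobel(spacetime_image.astype(float), axis=0)  # sharpen streak edges
    lo, hi, step = 0.0, 180.0, coarse_step
    best = 90.0
    for _ in range(n_refine):
        angles = np.arange(lo, hi, step)
        sinogram = radon(img, theta=angles, circle=False)
        best = angles[np.argmax(np.var(sinogram, axis=0))]
        # Zoom the angular search window in around the current best angle.
        lo, hi, step = best - step, best + step, step / 10.0
    return best  # blood velocity then follows from tan(best) and pixel scales
```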

  9. 29 CFR 1990.145 - Consideration of substantial new issues or substantial new evidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... substance any substantial new issues upon which the Secretary did not reach a conclusion in the rulemaking ... Consideration of substantial new issues or substantial new evidence. (a) Substantial new issues. ...

  10. 29 CFR 1990.145 - Consideration of substantial new issues or substantial new evidence.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... substance any substantial new issues upon which the Secretary did not reach a conclusion in the rulemaking ... Consideration of substantial new issues or substantial new evidence. (a) Substantial new issues. ...

  11. A Novel Optimization Technique to Improve Gas Recognition by Electronic Noses Based on the Enhanced Krill Herd Algorithm.

    PubMed

    Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan

    2016-01-01

    An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C₆H₆), toluene (C₇H₈), formaldehyde (CH₂O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, which is mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish the indoor pollutant gases, and two of its parameters need to be optimized. To improve the performance of the SVM, in other words, to obtain a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision weighting factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, on occasion it cannot escape the influence of local best solutions, so it cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues, we propose an enhanced KH (EKH) that improves the global searching and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach. This guarantees that the krill group is diverse in the early iterations and performs well in local searching in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and the genetic algorithm (GA)), and EKH is found to be better than the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of other improved krill algorithms' applications in all E-nose application areas. PMID

  12. Improved motion contrast and processing efficiency in OCT angiography using complex-correlation algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Li; Li, Pei; Pan, Cong; Liao, Rujia; Cheng, Yuxuan; Hu, Weiwei; Chen, Zhong; Ding, Zhihua; Li, Peng

    2016-02-01

    Complex-based OCT angiography (Angio-OCT) offers high motion contrast by combining both intensity and phase information. However, due to involuntary bulk tissue motion, complex-valued OCT raw data are conventionally processed sequentially with different algorithms for correcting bulk image shifts (BISs), compensating global phase fluctuations (GPFs), and extracting flow signals. Such a complicated procedure results in a massive computational load. To mitigate this problem, we present an inter-frame complex-correlation (CC) algorithm. The CC algorithm is suitable for parallel processing of both flow signal extraction and BIS correction, and it does not need GPF compensation. This method provides high processing efficiency and shows superior motion contrast. The feasibility and performance of the proposed CC algorithm are demonstrated using both flow phantom and live animal experiments.
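
    The core of an inter-frame complex correlation can be written compactly. The sketch below assumes two registered complex-valued B-frames and a small spatial averaging window; the window size and the 1 - |CC| contrast definition are illustrative choices, not necessarily those of the paper:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def motion_contrast(f1, f2, win=3, eps=1e-12):
          # local complex correlation between two complex OCT frames;
          # static tissue gives |CC| near 1, flow decorrelates toward 0
          prod = f1 * np.conj(f2)
          num = uniform_filter(prod.real, win) + 1j * uniform_filter(prod.imag, win)
          den = np.sqrt(uniform_filter(np.abs(f1) ** 2, win)
                        * uniform_filter(np.abs(f2) ** 2, win)) + eps
          return 1.0 - np.abs(num / den)

    Note that a phase factor common to the whole frame multiplies the numerator by a unit-magnitude constant and so cancels in |CC|, which is consistent with the claim that a magnitude-based complex correlation needs no separate GPF compensation.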

  13. A hybrid heuristic algorithm to improve known-plaintext attack on Fourier plane encryption.

    PubMed

    Liu, Wensi; Yang, Guanglin; Xie, Haiyan

    2009-08-01

    A hybrid heuristic attack scheme that combines the hill climbing algorithm and the simulated annealing algorithm is proposed to speed up the search procedure and to obtain a more accurate solution to the original key in the Fourier plane encryption algorithm. A unit cycle is adopted to analyze the value space of the random phase. The experimental results show that our scheme obtains a more accurate solution to the key, achieving better decryption results both for the selected encrypted image and for another, unseen ciphertext image. The search time is significantly reduced, with no exceptional cases arising in the search procedure. For an image of 64x64 pixels, our algorithm requires a comparatively short computing time, about 1 minute, to retrieve the approximate key with a normalized root mean squared error of 0.1. Our scheme therefore makes the known-plaintext attack on Fourier plane image encryption more practical, stable, and effective.
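
    The hybrid strategy, simulated annealing that cools down into plain hill climbing, can be sketched generically. The cost and neighbor arguments are assumed callables standing in for the paper's decryption-error metric and random phase-key perturbation; the temperature schedule is an illustrative choice:

      import math
      import random

      def hybrid_search(key0, cost, neighbor, t0=1.0, cooling=0.95, steps=2000):
          # accept all improvements; accept worsenings with probability
          # exp(-dc/T), which vanishes as T cools (pure hill climbing)
          key, c = key0, cost(key0)
          t = t0
          for _ in range(steps):
              cand = neighbor(key)
              dc = cost(cand) - c
              if dc < 0 or random.random() < math.exp(-dc / max(t, 1e-9)):
                  key, c = cand, c + dc
              t *= cooling
          return key, c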

  14. An Evolutionary Algorithm for Improved Diversity in DSL Spectrum Balancing Solutions

    NASA Astrophysics Data System (ADS)

    Bezerra, Johelden; Klautau, Aldebaro; Monteiro, Marcio; Pelaes, Evaldo; Medeiros, Eduardo; Dortschy, Boris

    2010-12-01

    Many spectrum balancing algorithms exist to combat the deleterious impact of crosstalk interference in digital subscriber line (DSL) networks. These algorithms aim to find a unique operating point by optimizing the power spectral densities (PSDs) of the modems, typically with bit rate, power consumption, or margin as the figure of merit. This work poses and solves a different problem: instead of providing the solution for one specific operating point, it finds a set of operating points, each corresponding to a distinct matrix of PSDs. This solution is useful for planning DSL deployment, for example, helping operators conveniently evaluate their network capabilities and better plan their usage. The proposed method is based on a multiobjective formulation and implemented as an evolutionary genetic algorithm. Simulation results show that this algorithm achieves better diversity among the operating points with lower computational cost when compared to an alternative approach.

  15. A new algorithm for evaluating 3D curvature and curvature gradient for improved fracture detection

    NASA Astrophysics Data System (ADS)

    Di, Haibin; Gao, Dengliang

    2014-09-01

    In 3D seismic interpretation, both curvature and curvature gradient are useful seismic attributes for structure characterization and fault detection in the subsurface. However, the existing algorithms are computationally intensive and limited in lateral resolution for steeply dipping formations. This study presents new and robust volume-based algorithms that evaluate both curvature and curvature gradient attributes more accurately and efficiently. The algorithms first fit a local surface to the seismic data at each sample and then compute the attributes from the spatial derivatives of the fitted surface. Specifically, the curvature algorithm constructs a quadratic surface over a rectangular 9-node grid cell, whereas the curvature gradient algorithm builds a cubic surface over a diamond 13-node grid cell. A dip-steering approach based on 3D complex seismic trace analysis is implemented to enhance the accuracy of surface construction and to reduce computational time. Applications to two 3D seismic surveys demonstrate the accuracy and efficiency of the new curvature and curvature gradient algorithms for characterizing faults and fractures in fractured reservoirs.
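
    The quadratic-surface step can be made concrete: fit z = ax² + by² + cxy + dx + ey + f to a 3 x 3 (9-node) window by least squares, then read curvatures off the fitted derivatives. The grid-spacing arguments and the choice of Gaussian/mean curvature as outputs are illustrative assumptions, not the paper's full attribute set:

      import numpy as np

      def quadratic_curvatures(z3x3, dx=1.0, dy=1.0):
          # build the 9-node design matrix for z = a x^2 + b y^2 + c xy + d x + e y + f
          ys, xs = np.mgrid[-1:2, -1:2]
          xs, ys = xs.ravel() * dx, ys.ravel() * dy
          A = np.column_stack([xs**2, ys**2, xs * ys, xs, ys, np.ones(9)])
          a, b, c, d, e, _ = np.linalg.lstsq(A, z3x3.ravel(), rcond=None)[0]
          # derivatives of the fitted surface at the center node
          zx, zy, zxx, zyy, zxy = d, e, 2 * a, 2 * b, c
          g = 1.0 + zx**2 + zy**2
          K = (zxx * zyy - zxy**2) / g**2                               # Gaussian
          H = (zxx * (1 + zy**2) - 2 * zxy * zx * zy
               + zyy * (1 + zx**2)) / (2.0 * g**1.5)                    # mean
          return K, H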

  16. Asymptotic analysis of online algorithms and improved scheme for the flow shop scheduling problem with release dates

    NASA Astrophysics Data System (ADS)

    Bai, Danyu

    2015-08-01

    This paper discusses the flow shop scheduling problem of minimising the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation focuses on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale grows sufficiently large. To further enhance the quality of the original solutions, an improvement scheme is provided for these algorithms. A new lower bound with a performance guarantee is derived, and computational experiments show the effectiveness of these heuristics. Moreover, several results for the single-machine TQCT problem with release dates are obtained in the course of proving the main theorem.
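
    The underlying dispatch rule is easy to state in code. The sketch below applies Shortest Processing Time among Available jobs on a single machine with release dates and returns the total quadratic completion time; the paper's flow shop setting and improvement scheme add machinery not shown here:

      import heapq

      def spta_single_machine(jobs):
          # jobs: list of (release_date, processing_time) pairs
          jobs = sorted(jobs)                    # order by release date
          avail, i, t, tqct = [], 0, 0.0, 0.0
          n = len(jobs)
          while i < n or avail:
              if not avail and t < jobs[i][0]:
                  t = jobs[i][0]                 # idle until the next release
              while i < n and jobs[i][0] <= t:
                  heapq.heappush(avail, jobs[i][1])  # keyed by processing time
                  i += 1
              p = heapq.heappop(avail)           # shortest available job first
              t += p
              tqct += t ** 2                     # quadratic completion-time cost
          return tqct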

  17. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  18. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  19. BRAIN 2.0: Time and Memory Complexity Improvements in the Algorithm for Calculating the Isotope Distribution

    NASA Astrophysics Data System (ADS)

    Dittwald, Piotr; Valkenborg, Dirk

    2014-04-01

    Recently, an elegant iterative algorithm called BRAIN (Baffling Recursive Algorithm for Isotopic distributioN calculations) was presented. The algorithm is based on the classic polynomial method for calculating aggregated isotope distributions, and it introduces algebraic identities using Newton-Girard and Viète's formulae to solve the problem of polynomial expansion. Due to the iterative nature of the BRAIN method, it is a requirement that the calculations start from the lightest isotope variant. As such, the complexity of BRAIN scales quadratically with the mass of the putative molecule, since it depends on the number of aggregated peaks that need to be calculated. In this manuscript, we suggest two improvements of the algorithm to decrease both time and memory complexity in obtaining the aggregated isotope distribution. We also illustrate a concept to represent the element isotope distribution in a generic manner. This representation allows for omitting the root calculation of the element polynomial required in the original BRAIN method. A generic formulation for the roots is of special interest for higher order element polynomials, such that root finding algorithms and their inaccuracies can be avoided.
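
    For contrast with BRAIN's recurrence-based approach, the classic polynomial method it builds on can be sketched as repeated convolution of element isotope distributions. This is a naive illustration under common conventions (abundance vectors indexed by added neutrons, truncation to the lightest peaks), not the BRAIN algorithm itself:

      import numpy as np

      def aggregated_isotopes(counts, dists, n_peaks):
          # counts: formula as {'C': 6, 'H': 12, 'O': 6}
          # dists:  element -> abundance vector indexed by added neutrons,
          #         e.g. {'C': [0.9893, 0.0107], 'H': [0.999885, 0.000115],
          #               'O': [0.99757, 0.00038, 0.00205]}
          result = np.array([1.0])
          for elem, n in counts.items():
              p = np.asarray(dists[elem], dtype=float)
              for _ in range(n):
                  # polynomial product = convolution; BRAIN replaces this
                  # naive expansion with Newton-Girard style recurrences
                  result = np.convolve(result, p)[:n_peaks]
          return result  # probabilities of the n_peaks lightest aggregated peaks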

  20. Improving the quality of e-commerce web service: what is important for the request scheduling algorithm?

    NASA Astrophysics Data System (ADS)

    Suchacka, Grazyna

    2005-02-01

    The paper concerns a new research area, Quality of Web Service (QoWS). The need for QoWS is motivated by a still-growing number of Internet users, by the steady development and diversification of Web services, and especially by the popularization of e-commerce applications. The goal of the paper is a critical analysis of the literature concerning scheduling algorithms for e-commerce Web servers. The paper characterizes factors affecting the load of Web servers and discusses ways of improving their efficiency. Crucial QoWS requirements of a business Web server are identified: serving requests before their individual deadlines, supporting user session integrity, supporting different classes of users, and minimizing the number of rejected requests. It is argued that meeting these requirements, and implementing them in an admission control (AC) and scheduling algorithm for the business Web server, is crucial to the functioning of e-commerce Web sites and the revenue they generate. The paper presents the results of the literature analysis and discusses algorithms that implement these important QoWS requirements. The analysis showed that very few algorithms take the above-mentioned factors into consideration and that there is a need to design an algorithm implementing them.

  1. Humanizing murine IgG3 anti-GD2 antibody m3F8 substantially improves antibody-dependent cell-mediated cytotoxicity while retaining targeting in vivo

    PubMed Central

    Cheung, Nai-Kong V.; Guo, Hongfen; Hu, Jian; Tassev, Dimiter V.; Cheung, Irene Y.

    2012-01-01

    Murine IgG3 anti-GD2 antibody m3F8 has shown anti-neuroblastoma activity in Phase I/II studies, where antibody-dependent cell-mediated cytotoxicity (ADCC) played a key role. Humanization of m3F8 should circumvent the human anti-mouse antibody (HAMA) response and enhance its ADCC properties, reducing dosing and pain side effects. Chimeric 3F8 (ch3F8) and humanized 3F8 (hu3F8-IgG1 and hu3F8-IgG4) were produced and purified by protein A affinity chromatography. In vitro comparisons were made with m3F8 and other anti-GD2 antibodies in binding, cytotoxicity, and cross-reactivity assays. In GD2 binding studies by surface plasmon resonance (SPR), ch3F8 and hu3F8 maintained KD comparable to m3F8. Unlike other anti-GD2 antibodies, m3F8, ch3F8, and hu3F8 had substantially slower koff. Similar to m3F8, both ch3F8 and hu3F8 inhibited tumor cell growth in vitro, while cross-reactivity with other gangliosides was comparable to that of m3F8. Both peripheral blood mononuclear cell (PBMC)-ADCC and polymorphonuclear leukocyte (PMN)-ADCC of ch3F8 and hu3F8-IgG1 were more potent than those of m3F8. This superiority was consistently observed in ADCC assays, irrespective of donors or NK-92MI-transfected human CD16 or CD32, whereas complement-mediated cytotoxicity (CMC) was reduced. As expected, hu3F8-IgG4 had near-absent PBMC-ADCC and CMC. Hu3F8 and m3F8 had similar tumor-to-nontumor ratios in biodistribution studies. The anti-tumor effect against neuroblastoma xenografts was better with hu3F8-IgG1 than with m3F8. In conclusion, humanizing m3F8 produced next-generation anti-GD2 antibodies with substantially more potent ADCC in vitro and anti-tumor activity in vivo. By leveraging ADCC over CMC, they may be clinically more effective while minimizing pain and HAMA side effects. A Phase I trial using hu3F8-IgG1 is ongoing. PMID:22754766

  2. Improved algorithms for the classification of rough rice using a bionic electronic nose based on PCA and the Wilks distribution.

    PubMed

    Xu, Sai; Zhou, Zhiyan; Lu, Huazhong; Luo, Xiwen; Lan, Yubin

    2014-03-19

    Principal Component Analysis (PCA) is one of the main methods used for electronic nose pattern recognition. However, poor classification performance is common when using regular PCA. This paper aims to improve the classification performance of regular PCA using the existing Wilks Λ-statistic (i.e., by combining PCA with the Wilks distribution). The improved algorithms, which combine regular PCA with the Wilks Λ-statistic, were developed after analysing the functionality and defects of PCA. Verification tests were conducted using a PEN3 electronic nose. The collected samples consisted of the volatiles of six varieties of rough rice (Zhongxiang1, Xiangwan13, Yaopingxiang, WufengyouT025, Pin 36, and Youyou122), grown in the same area and season. With regular PCA, the first two principal components used as analysis vectors cannot accomplish the rough rice variety classification task. Using the improved algorithms, which combine regular PCA with the Wilks Λ-statistic, different principal components were selected as analysis vectors. The Mahalanobis distances between the varieties of rough rice were used to estimate the classification performance. The results illustrate that the rough rice variety classification task is achieved well using the improved algorithm. A Probabilistic Neural Network (PNN) was also established to test the effectiveness of the improved algorithms. The first two principal components (PC1 and PC2) and the first and fifth principal components (PC1 and PC5) were selected as the inputs of the PNN for the classification of the six rough rice varieties. The results indicate that the classification accuracy based on the improved algorithm was 6.67% higher than that of the regular method. These results prove the effectiveness of using the Wilks Λ-statistic to improve the classification accuracy of the regular PCA approach. The results
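
    The Wilks Λ-statistic used to rank principal components can be computed directly from scatter matrices. The sketch below assumes a matrix of PCA score columns and integer class labels; smaller Λ indicates better between-class separation:

      import numpy as np

      def wilks_lambda(scores, labels):
          # Wilks' lambda = det(W) / det(T): within-class over total scatter
          grand = scores.mean(axis=0)
          T = (scores - grand).T @ (scores - grand)
          W = sum(
              (scores[labels == c] - scores[labels == c].mean(axis=0)).T
              @ (scores[labels == c] - scores[labels == c].mean(axis=0))
              for c in np.unique(labels)
          )
          return np.linalg.det(W) / np.linalg.det(T)

      # rank each PC individually by its discriminability, smallest lambda first:
      # order = np.argsort([wilks_lambda(S[:, [j]], y) for j in range(S.shape[1])])

    Ranking PCs by Λ rather than by explained variance is what lets a component such as PC5 be preferred over PC2 when it separates the classes better.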

  3. Network Intrusion Detection Based on a General Regression Neural Network Optimized by an Improved Artificial Immune Algorithm

    PubMed Central

    Wu, Jianfa; Peng, Dahao; Li, Zhuping; Zhao, Li; Ling, Huanzhang

    2015-01-01

    To effectively and accurately detect and classify network intrusion data, this paper introduces a general regression neural network (GRNN) based on the artificial immune algorithm with elitist strategies (AIAE). The elitist archive and elitist crossover were combined with the artificial immune algorithm (AIA) to produce the AIAE-GRNN algorithm, with the aim of improving its adaptivity and accuracy. In this paper, the mean square errors (MSEs) were taken as the affinity function. The AIAE was used to optimize the smoothing factors of the GRNN; the optimal smoothing factor was then solved and substituted into the trained GRNN, and the intrusion data were classified. For comparison, the paper selected GRNNs separately optimized using a genetic algorithm (GA), particle swarm optimization (PSO), and fuzzy C-means clustering (FCM). As the results show, the AIAE-GRNN achieves a higher classification accuracy than the PSO-GRNN, although the running time of AIAE-GRNN is long. FCM-GRNN and GA-GRNN were eliminated because of their deficiencies in accuracy and convergence. To improve the running speed, the paper adopted principal component analysis (PCA) to reduce the dimensionality of the intrusion data. With the reduction in dimensionality, the PCA-AIAE-GRNN loses less accuracy and converges better than the PCA-PSO-GRNN, and its running speed is relatively improved. The experimental results show that the AIAE-GRNN has higher robustness and accuracy than the other algorithms considered and can thus be used to classify intrusion data. PMID:25807466

  4. Studying the Effect of Adaptive Momentum in Improving the Accuracy of Gradient Descent Back Propagation Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Rehman, Muhammad Zubair; Nawi, Nazri Mohd.

    Despite being widely used on practical problems around the world, the gradient descent back-propagation algorithm suffers from slow convergence and convergence to local minima. Previous researchers have suggested modifications to improve convergence in the gradient descent back-propagation algorithm, such as careful selection of the input weights and biases, learning rate, momentum, network topology, activation function, and the value of 'gain' in the activation function. This research proposes 'Gradient Descent with Adaptive Momentum (GDAM)', an algorithm that improves the performance of back-propagation by adapting the momentum while keeping the gain value fixed during all network trials. The performance of GDAM is compared with 'Gradient Descent with fixed Momentum (GDM)' and 'Gradient Descent Method with Adaptive Gain (GDM-AG)'. The learning rate is fixed at 0.4 and the maximum number of epochs is set to 3000, while the sigmoid activation function is used for the experiments. The results show that GDAM is a better approach than the previous methods, with an accuracy ratio of 1.0 on classification problems such as Wine Quality, Mushroom, and Thyroid disease.
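
    A minimal sketch of a gradient-descent step with an adaptive momentum coefficient is given below. The specific adaptation rule (raising momentum when successive gradients agree in direction and lowering it otherwise) is an illustrative assumption; the published GDAM update differs in detail:

      import numpy as np

      def adaptive_momentum_step(w, grad, velocity, prev_grad=None, lr=0.4):
          # assumed heuristic: strengthen momentum when the descent
          # direction is consistent, weaken it when gradients oppose
          if prev_grad is None:
              beta = 0.5
          else:
              beta = 0.9 if np.dot(grad.ravel(), prev_grad.ravel()) > 0 else 0.1
          velocity = beta * velocity - lr * grad
          return w + velocity, velocity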

  5. Vision-based measurement for rotational speed by improving Lucas-Kanade template tracking algorithm.

    PubMed

    Guo, Jie; Zhu, Chang'an; Lu, Siliang; Zhang, Dashan; Zhang, Chunyu

    2016-09-01

    Rotational angle and speed are important parameters for condition monitoring and fault diagnosis of rotating machinery, and their measurement is useful in precision machining and early warning of faults. In this study, a novel vision-based measurement algorithm is proposed to accomplish this task. A high-speed camera is first used to capture video of the rotating object. To extract the rotational angle, the template-based Lucas-Kanade algorithm is introduced to perform motion tracking by aligning the template image in the video sequence. Given the special case of the nonplanar surface of a cylindrical object, a nonlinear transformation is designed to model the rotation tracking. In spite of its unconventional and complex form, the transformation can realize angle extraction concisely with only one parameter. A simulation is then conducted to verify the tracking effect, and a practical tracking strategy is further proposed to track the video sequence consecutively. Based on the proposed algorithm, instantaneous rotational speed (IRS) can be measured accurately and efficiently. Finally, the effectiveness of the proposed algorithm is verified on a brushless direct current motor test rig through comparison with results obtained by a microphone. Experimental results demonstrate that the proposed algorithm can accurately extract rotational angles and can measure IRS with the advantages of noncontact operation and effectiveness. PMID:27607300

  6. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.

  7. Parameters identification for photovoltaic module based on an improved artificial fish swarm algorithm.

    PubMed

    Han, Wei; Wang, Hong-Hua; Chen, Ling

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and involves multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an effective optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulated collective behavior of real fish swarms, is proposed to extract the parameters of the PV module quickly and accurately. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated on various PV module parameters under different environmental conditions, and the results are compared with those of other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of achieving higher parameter identification precision. PMID:25243233
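
    The fitness function such a parameter-identification scheme minimizes can be written down directly from the standard single-diode PV model. The five-parameter formulation and the thermal-voltage constant below are common assumptions; AFSA, or any other optimizer, would minimize the RMS of this residual over a measured I-V curve:

      import numpy as np

      def diode_residual(params, v, i_meas):
          # single-diode model: I = Iph - Io*(exp((V + I*Rs)/(n*Vt)) - 1)
          #                         - (V + I*Rs)/Rsh
          # evaluated at the measured current, so the residual vanishes
          # when the parameters reproduce the measured I-V curve
          iph, io, rs, rsh, n = params
          vt = 0.0259  # thermal voltage in volts, near 300 K
          return (iph
                  - io * np.expm1((v + i_meas * rs) / (n * vt))
                  - (v + i_meas * rs) / rsh
                  - i_meas)

      def fitness(params, v, i_meas):
          return np.sqrt(np.mean(diode_residual(params, v, i_meas) ** 2))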

  8. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three high-resolution positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the nonlinear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than that (0.29 pixels) of a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
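
    The linearization idea can be illustrated for a Gaussian LRF: taking the logarithm turns the Gaussian into a parabola, so the peak position follows from a weighted quadratic fit in closed form rather than from the argmax of the photon profile. The weighting scheme and function name are assumptions; the FluoroBancroft method itself uses a different algebraic construction:

      import numpy as np

      def gaussian_lrf_position(x, counts, eps=1e-9):
          # ln of a Gaussian LRF is a parabola a*x^2 + b*x + c, so the
          # peak position is x0 = -b / (2a) in closed form
          y = np.log(np.maximum(counts, eps))
          w = np.sqrt(np.maximum(counts, 0.0))   # weight bright pixels more
          a, b, _ = np.polyfit(x, y, 2, w=w)
          return -b / (2.0 * a)

    Compared with picking the brightest pixel, the fit interpolates between pixels, which is consistent with the sub-pixel (0.03-0.08 pixel) errors quoted above.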

  9. Parameters identification for photovoltaic module based on an improved artificial fish swarm algorithm.

    PubMed

    Han, Wei; Wang, Hong-Hua; Chen, Ling

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Different from the traditional linear model, the model of PV module has the features of nonlinearity and multiparameters. Since conventional methods are incapable of identifying the parameters of PV module, an excellent optimization algorithm is required. Artificial fish swarm algorithm (AFSA), originally inspired by the simulation of collective behavior of real fish swarms, is proposed to fast and accurately extract the parameters of PV module. In addition to the regular operation, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated by various parameters of PV module under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameters identification precision.

  10. Parameters Identification for Photovoltaic Module Based on an Improved Artificial Fish Swarm Algorithm

    PubMed Central

    Wang, Hong-Hua

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Different from the traditional linear model, the model of PV module has the features of nonlinearity and multiparameters. Since conventional methods are incapable of identifying the parameters of PV module, an excellent optimization algorithm is required. Artificial fish swarm algorithm (AFSA), originally inspired by the simulation of collective behavior of real fish swarms, is proposed to fast and accurately extract the parameters of PV module. In addition to the regular operation, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated by various parameters of PV module under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameters identification precision. PMID:25243233

  11. Improvement of characteristic statistic algorithm and its application on equilibrium cycle reloading optimization

    SciTech Connect

    Hu, Y.; Liu, Z.; Shi, X.; Wang, B.

    2006-07-01

    A brief introduction to the characteristic statistic algorithm (CSA), a new global optimization algorithm for the problem of PWR in-core fuel management optimization, is given in the paper. CSA is modified by the adoption of a back-propagation neural network and fast local adjustment. The modified CSA is then applied to PWR equilibrium cycle reloading optimization, and the corresponding optimization code, CSA-DYW, is developed. CSA-DYW is used to optimize the 18-month equilibrium reloading cycle of the Daya Bay nuclear plant Unit 1 reactor. The results show that CSA-DYW has high efficiency and good global performance on PWR equilibrium cycle reloading optimization. (authors)

  12. An improved algorithm for polar cloud-base detection by ceilometer over the ice sheets

    NASA Astrophysics Data System (ADS)

    Van Tricht, K.; Gorodetskaya, I. V.; Lhermitte, S.; Turner, D. D.; Schween, J. H.; Van Lipzig, N. P. M.

    2014-05-01

    Optically thin ice and mixed-phase clouds play an important role in polar regions due to their effect on cloud radiative impact and precipitation. Cloud-base heights can be detected by ceilometers, low-power backscatter lidars that run continuously and therefore have the potential to provide basic cloud statistics including cloud frequency, base height and vertical structure. The standard cloud-base detection algorithms of ceilometers are designed to detect optically thick liquid-containing clouds, while the detection of thin ice clouds requires an alternative approach. This paper presents the polar threshold (PT) algorithm that was developed to be sensitive to optically thin hydrometeor layers (minimum optical depth τ ≥ 0.01). The PT algorithm detects the first hydrometeor layer in a vertical attenuated backscatter profile exceeding a predefined threshold in combination with noise reduction and averaging procedures. The optimal backscatter threshold of 3 × 10⁻⁴ km⁻¹ sr⁻¹ for cloud-base detection near the surface was derived based on a sensitivity analysis using data from Princess Elisabeth, Antarctica and Summit, Greenland. At higher altitudes where the average noise level is higher than the backscatter threshold, the PT algorithm becomes signal-to-noise ratio driven. The algorithm defines cloudy conditions as any atmospheric profile containing a hydrometeor layer at least 90 m thick. A comparison with relative humidity measurements from radiosondes at Summit illustrates the algorithm's ability to significantly discriminate between clear-sky and cloudy conditions. Analysis of the cloud statistics derived from the PT algorithm indicates a year-round monthly mean cloud cover fraction of 72% (±10%) at Summit without a seasonal cycle. The occurrence of optically thick layers, indicating the presence of supercooled liquid water droplets, shows a seasonal cycle at Summit with a monthly mean summer peak of 40% (±4%). The monthly mean cloud occurrence frequency
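
    The layer-detection core of the PT algorithm reduces to a thresholded run-length test on each backscatter profile. The sketch below assumes noise reduction and averaging have already been applied, and uses the threshold and 90 m thickness criterion quoted above; the function and argument names are illustrative:

      import numpy as np

      def cloud_base(heights, backscatter, threshold=3e-4, min_thickness=90.0):
          # return the base height of the first layer whose attenuated
          # backscatter exceeds the threshold over >= min_thickness metres,
          # or None for clear sky
          dz = np.gradient(heights)          # per-gate thickness estimate
          run_start, thickness = None, 0.0
          for k, above in enumerate(backscatter > threshold):
              if above:
                  if run_start is None:
                      run_start = k
                  thickness += dz[k]
                  if thickness >= min_thickness:
                      return heights[run_start]
              else:
                  run_start, thickness = None, 0.0
          return None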

  13. A Novel Optimization Technique to Improve Gas Recognition by Electronic Noses Based on the Enhanced Krill Herd Algorithm

    PubMed Central

    Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan

    2016-01-01

    An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C6H6), toluene (C7H8), formaldehyde (CH2O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, which is mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish the indoor pollutant gases; two of its parameters need to be optimized, so to improve the performance of SVM, that is, to obtain a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision weighting factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, it sometimes cannot avoid the influence of local best solutions and therefore cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues, we propose an enhanced KH (EKH) to improve the global search and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach, which keeps the krill group diverse in the early iterations and preserves good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO) and genetic algorithm (GA)), and EKH outperforms the other considered methods. The research results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of other improved krill algorithms’ applications in all E-nose application areas. PMID

  14. Improving performance of computer-aided detection of pulmonary embolisms by incorporating a new pulmonary vascular-tree segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Song, XiaoFei; Chapman, Brian E.; Zheng, Bin

    2012-03-01

    We developed a new pulmonary vascular tree segmentation/extraction algorithm. The purpose of this study was to assess whether adding this new algorithm to our previously developed computer-aided detection (CAD) scheme for pulmonary embolism (PE) could improve CAD performance, in particular by reducing the false positive detection rate. A dataset containing 12 CT examinations with 384 verified pulmonary embolism regions associated with 24 three-dimensional (3-D) PE lesions was selected for this study. Our new CAD scheme includes the following image processing and feature classification steps. (1) A 3-D region growing process followed by a rolling-ball algorithm was utilized to segment the lung areas. (2) The complete pulmonary vascular trees were extracted by combining two approaches: intensity-based region growing to extract the larger vessels and vessel enhancement filtering to extract the smaller vessel structures. (3) A toboggan algorithm was implemented to identify suspicious PE candidates in the segmented lung or vessel areas. (4) A three-layer artificial neural network (ANN) with the topology 27-10-1 was developed to reduce false positive detections. (5) A k-nearest neighbor (KNN) classifier optimized by a genetic algorithm was used to compute detection scores for the PE candidates. (6) A grouping-scoring method was designed to detect the final PE lesions in three dimensions. The study showed that integrating the pulmonary vascular tree extraction algorithm into the CAD scheme reduced the false positive rate by 16.2%. For the case-based 3D PE lesion detection results, the integrated CAD scheme achieved 62.5% detection sensitivity with 17.1 false-positive lesions per examination.

  15. Experimental validation of improved 3D SBP positioning algorithm in PET applications using UW Phase II Board

    NASA Astrophysics Data System (ADS)

    Jorge, L. S.; Bonifacio, D. A. B.; DeWitt, Don; Miyaoka, R. S.

    2016-12-01

    Continuous scintillator-based detectors have been considered a competitive and cheaper approach than highly pixelated discrete-crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate the 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm to estimate the 3D interaction position in a continuous-crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique, which requires light response function and event position characterization. The algorithm was implemented in the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm was previously successfully implemented on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Further, the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, the average FWHM (Full Width at Half Maximum) for the whole block is 1.71±1 mm, 1.70±1 mm and 1.632±5 mm for the x, y and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside the central area of the detector, which represents 64% of the total block area. The weighted average of the event rate by regional area (corner, border and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.

  16. A retrospective pilot study of the use of a new algorithm to improve quality control in bronchodilator studies.

    PubMed

    Earle, Charlotte L; Jefferies, Rhys

    2015-01-01

    Reversibility testing is used to identify a positive or negative response to bronchodilators. Results from a reversibility test can not only support a diagnosis of asthma but can also alter a patient's treatment plan, so its clinical importance should not be understated. With multiple published guidelines classifying a 'positive response', it becomes unclear how to categorise certain individuals. This study looks into the discrepancies between the guidelines and introduces a new algorithm to help clinicians. This retrospective pilot study was completed across four hospitals in South Wales. Data were collected from a total of 117 patients referred for a reversibility study between November 2013 and April 2014. An algorithm was created to improve flow-volume loop (FVL) quality control when assessing airway bronchodilation in symptomatic patients. Each patient's result was assessed against four major reversibility guidelines [British Thoracic Society (BTS), National Institute for Clinical Excellence (NICE), Association for Respiratory Technology and Physiology (ARTP) and Global Lung Initiative (GLI)] and the new algorithm. When comparing published guidelines, 75% of patients would receive the same bronchodilator response decision, positive or negative, irrespective of the guideline followed. The number of positive responders varied between guidelines by up to 58%, with NICE giving the fewest positive responses (7%) and BTS the most (65%). Using the new algorithm, over one third (38%) of patients required a repeat FVL, as the baseline and/or post-bronchodilator FVLs did not meet the quality control specification. Further investigation is needed to establish the clinical impact of the new algorithm and its approach of using the whole FVL in bronchodilator analysis; however, quality control during reversibility testing needs to be improved to ensure that bronchodilator responses are correctly identified. PMID:26557258

  17. A retrospective pilot study of the use of a new algorithm to improve quality control in bronchodilator studies.

    PubMed

    Earle, Charlotte L; Jefferies, Rhys

    2015-01-01

    Reversibility testing is used to identify a positive or negative response to bronchodilators. Results from a reversibility test can not only support a diagnosis of asthma but can also alter a patient's treatment plan, so its clinical importance should not be understated. With multiple published guidelines classifying a 'positive response', it becomes unclear how to categorise certain individuals. This study looks into the discrepancies between the guidelines and introduces a new algorithm to help clinicians. This retrospective pilot study was completed across four hospitals in South Wales. Data were collected from a total of 117 patients referred for a reversibility study between November 2013 and April 2014. An algorithm was created to improve flow-volume loop (FVL) quality control when assessing airway bronchodilation in symptomatic patients. Each patient's result was assessed against four major reversibility guidelines [British Thoracic Society (BTS), National Institute for Clinical Excellence (NICE), Association for Respiratory Technology and Physiology (ARTP) and Global Lung Initiative (GLI)] and the new algorithm. When comparing published guidelines, 75% of patients would receive the same bronchodilator response decision, positive or negative, irrespective of the guideline followed. The number of positive responders varied between guidelines by up to 58%, with NICE giving the fewest positive responses (7%) and BTS the most (65%). Using the new algorithm, over one third (38%) of patients required a repeat FVL, as the baseline and/or post-bronchodilator FVLs did not meet the quality control specification. Further investigation is needed to establish the clinical impact of the new algorithm and its approach of using the whole FVL in bronchodilator analysis; however, quality control during reversibility testing needs to be improved to ensure that bronchodilator responses are correctly identified.

  18. Improvement of Algorithms for Pressure Maintenance Systems in Drum-Separators of RBMK-1000 Reactors

    SciTech Connect

    Aleksakov, A. N. Yankovskiy, K. I.; Dunaev, V. I.; Kushbasov, A. N.

    2015-05-15

    The main tasks and challenges for pressure regulation in the drum-separators of RBMK-1000 reactors are described. New approaches to constructing algorithms for pressure control in drum-separators by electro-hydraulic turbine control systems are discussed. Results are provided from tests of the operation of modernized pressure regulators during fast transients with reductions in reactor power.

  19. Toward a practical ultrasound waveform tomography algorithm for improving breast imaging

    NASA Astrophysics Data System (ADS)

    Li, Cuiping; Sandhu, Gursharan S.; Roy, Olivier; Duric, Neb; Allada, Veerendra; Schmidt, Steven

    2014-03-01

    Ultrasound tomography is an emerging modality for breast imaging. However, most current ultrasonic tomography imaging algorithms, historically hindered by the limited memory and processor speed of computers, are based on ray theory and assume a homogeneous background which is inaccurate for complex heterogeneous regions. Therefore, wave theory, which accounts for diffraction effects, must be used in ultrasonic imaging algorithms to properly handle the heterogeneous nature of breast tissue in order to accurately image small lesions. However, application of waveform tomography to medical imaging has been limited by extreme computational cost and convergence. By taking advantage of the computational architecture of Graphic Processing Units (GPUs), the intensive processing burden of waveform tomography can be greatly alleviated. In this study, using breast imaging methods, we implement a frequency domain waveform tomography algorithm on GPUs with the goal of producing high-accuracy and high-resolution breast images on clinically relevant time scales. We present some simulation results and assess the resolution and accuracy of our waveform tomography algorithms based on the simulation data.

  20. Improved calibration of mass stopping power in low density tissue for a proton pencil beam algorithm.

    PubMed

    Warren, Daniel R; Partridge, Mike; Hill, Mark A; Peach, Ken

    2015-06-01

    Dose distributions for proton therapy treatments are almost exclusively calculated using pencil beam algorithms. An essential input to these algorithms is the patient model, derived from x-ray computed tomography (CT), which is used to estimate proton stopping power along the pencil beam paths. This study highlights a potential inaccuracy in the mapping between mass density and proton stopping power used by a clinical pencil beam algorithm in materials less dense than water. It proposes an alternative physically-motivated function (the mass average, or MA, formula) for use in this region. Comparisons are made between dose-depth curves calculated by the pencil beam method and those calculated by the Monte Carlo particle transport code MCNPX in a one-dimensional lung model. Proton range differences of up to 3% are observed between the methods, reduced to  <1% when using the MA function. The impact of these range errors on clinical dose distributions is demonstrated using treatment plans for a non-small cell lung cancer patient. The change in stopping power calculation methodology results in relatively minor differences in dose when plans use three fields, but differences are observed at the 2%-2 mm level when a single field uniform dose technique is adopted. It is therefore suggested that the MA formula is adopted by users of the pencil beam algorithm for optimal dose calculation in lung, and that a similar approach is considered when beams traverse other low density regions such as the paranasal sinuses and mastoid process.

  1. Identification of robust adaptation gene regulatory network parameters using an improved particle swarm optimization algorithm.

    PubMed

    Huang, X N; Ren, H P

    2016-01-01

    Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, multi-peak optimization problem, for which it is difficult to acquire satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to accomplish this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population, and also uses a particle crossover operation and an elitist preservation strategy. The simulation results reveal that the proposed algorithm can identify multiple solutions in a single run. Moreover, it demonstrates superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. Owing to its universality and simplicity, the proposed methodology is useful for providing guidance in designing GRNs with superior robust adaptation. PMID:27323043
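
    Two of the named ingredients, Latin hypercube initialization and a PSO search loop, are sketched below. The best-neighbor attraction, particle crossover, and elitist preservation described above are omitted, so this is only the plain-PSO skeleton the paper improves upon; bounds, coefficients, and names are illustrative assumptions:

      import numpy as np

      def lhs(n, d, lo, hi, rng):
          # Latin hypercube sample: one point per stratum in every dimension,
          # giving a more even initial spread than uniform random sampling
          strata = np.column_stack([rng.permutation(n) for _ in range(d)])
          return lo + (strata + rng.random((n, d))) / n * (hi - lo)

      def pso(f, lo, hi, n=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          # plain PSO (minimizes f) with LHS initialization
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x = lhs(n, len(lo), lo, hi, rng)
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
          g = pbest[np.argmin(pval)]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(f, 1, x)
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              g = pbest[np.argmin(pval)]
          return g, pval.min()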

  2. Application of an improved subpixel registration algorithm on digital speckle correlation measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jin, Guanchang; Ma, Shaopeng; Meng, Libo

    2003-10-01

    The digital speckle correlation method (DSCM) has been widely used in experimental mechanics to obtain surface deformation fields. One of the challenges in practical applications is how to obtain high accuracy with far less computational complexity. To determine the subpixel registration in DSCM, a highly efficient gradient-based algorithm is developed in this paper. The principle is described and four different modes of the algorithm are given. Based on computer-simulated images, the optimal mode of the algorithm is identified through comparison of the computation time, optimal subset-region size, and sensitivity of the four modes. The influences of speckle-granule size and speckle-granule density on accuracy are studied, and a quantitative estimate of the optimal speckle-granule size range is obtained. As applications of this method, practical deformation measurements involving rigid-body translation and rotation, as well as a biomechanics experiment, are presented to confirm the feasibility and validity of the algorithm.

  3. Improved calibration of mass stopping power in low density tissue for a proton pencil beam algorithm

    NASA Astrophysics Data System (ADS)

    Warren, Daniel R.; Partridge, Mike; Hill, Mark A.; Peach, Ken

    2015-06-01

    Dose distributions for proton therapy treatments are almost exclusively calculated using pencil beam algorithms. An essential input to these algorithms is the patient model, derived from x-ray computed tomography (CT), which is used to estimate proton stopping power along the pencil beam paths. This study highlights a potential inaccuracy in the mapping between mass density and proton stopping power used by a clinical pencil beam algorithm in materials less dense than water. It proposes an alternative physically-motivated function (the mass average, or MA, formula) for use in this region. Comparisons are made between dose-depth curves calculated by the pencil beam method and those calculated by the Monte Carlo particle transport code MCNPX in a one-dimensional lung model. Proton range differences of up to 3% are observed between the methods, reduced to  <1% when using the MA function. The impact of these range errors on clinical dose distributions is demonstrated using treatment plans for a non-small cell lung cancer patient. The change in stopping power calculation methodology results in relatively minor differences in dose when plans use three fields, but differences are observed at the 2%-2 mm level when a single field uniform dose technique is adopted. It is therefore suggested that the MA formula is adopted by users of the pencil beam algorithm for optimal dose calculation in lung, and that a similar approach is considered when beams traverse other low density regions such as the paranasal sinuses and mastoid process.

  4. An Improved Algorithm for the Calculation of Exact Term Discrimination Values.

    ERIC Educational Resources Information Center

    El-Hamdouchi, Abdelmoula; Willett, Peter

    1988-01-01

    Describes an algorithm for the calculation of term discrimination values that may be used when the interdocument similarity measure used is the cosine coefficient and when the document representations have been weighted using one particular term weighting scheme. (7 references) (Author/CLB)

  5. An improved algorithm for cloud base detection by ceilometer over the ice sheets

    NASA Astrophysics Data System (ADS)

    Van Tricht, K.; Gorodetskaya, I. V.; Lhermitte, S.; Turner, D. D.; Schween, J. H.; Van Lipzig, N. P. M.

    2013-11-01

    Optically thin ice clouds play an important role in polar regions due to their effect on cloud radiative impact and precipitation on the surface. Cloud bases can be detected by lidar-based ceilometers that run continuously and therefore have the potential to provide basic cloud statistics including cloud frequency, base height and vertical structure. Despite their importance, thin clouds are however not well detected by the standard cloud base detection algorithm of most ceilometers operational at Arctic and Antarctic stations. This paper presents the Polar Threshold (PT) algorithm that was developed to detect optically thin hydrometeor layers (optical depth τ ≥ 0.01). The PT algorithm detects the first hydrometeor layer in a vertical attenuated backscatter profile exceeding a predefined threshold in combination with noise reduction and averaging procedures. The optimal backscatter threshold of 3 × 10⁻⁴ km⁻¹ sr⁻¹ for cloud base detection was objectively derived based on a sensitivity analysis using data from Princess Elisabeth, Antarctica and Summit, Greenland. The algorithm defines cloudy conditions as any atmospheric profile containing a hydrometeor layer at least 50 m thick. A comparison with relative humidity measurements from radiosondes at Summit illustrates the algorithm's ability to significantly differentiate between clear sky and cloudy conditions. Analysis of the cloud statistics derived from the PT algorithm indicates a year-round monthly mean cloud cover fraction of 72% at Summit without a seasonal cycle. The occurrence of optically thick layers, indicating the presence of supercooled liquid, shows a seasonal cycle at Summit with a monthly mean summer peak of 40%. The monthly mean cloud occurrence frequency in summer at Princess Elisabeth is 47%, which reduces to 14% for supercooled liquid cloud layers. Our analyses furthermore illustrate the importance of optically thin hydrometeor layers located near the surface for both sites, with 87% of all

  6. Temperature drift modeling and compensation of fiber optical gyroscope based on improved support vector machine and particle swarm optimization algorithms.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2016-08-10

    Modeling and compensation of temperature drift is an important method for improving the precision of fiber-optic gyroscopes (FOGs). In this paper, a new method of modeling and compensation for FOGs based on improved particle swarm optimization (PSO) and support vector machine (SVM) algorithms is proposed. The convergence speed and reliability of PSO are improved by introducing a dynamic inertia factor. The regression accuracy of SVM is improved by introducing a combined kernel function with four parameters and piecewise regression with fixed steps. The steps are as follows. First, the parameters of the combined kernel functions are optimized by the improved PSO algorithm. Second, the proposed kernel function of SVM is used to carry out piecewise regression, and the regression model is also obtained. Third, the temperature drift is compensated for by the regression data. The regression accuracy of the proposed method (in the case of mean square percentage error indicators) increased by 83.81% compared to the traditional SVM. PMID:27534465

  7. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach couples system simulator codes with stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, the internal parameters of the system codes (i.e., uncertain parameters of the physics model), and the initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability). Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable given the limited computational resources currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational demands (compared with the legacy codes developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for exploring the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  8. An Improved Polarimetric Radar Rainfall Algorithm With Hydrometeor Classification Optimized For Rainfall Estimation

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Wang, Y.; Lim, S.; Kennedy, P.; Chandrasekar, V.; Rutledge, S. A.

    2009-05-01

    The efficacy of dual-polarimetric radar for quantitative precipitation estimation (QPE) is firmly established. Specifically, rainfall retrievals using combinations of reflectivity (ZH), differential reflectivity (ZDR), and specific differential phase (KDP) have advantages over traditional Z-R methods because more information about the drop size distribution and hydrometeor type is available. In addition, dual-polarization radar measurements are generally less susceptible to error and biases due to the presence of ice in the sampling volume. A number of methods have been developed to estimate rainfall from dual-polarization radar measurements. However, the robustness of these techniques in different precipitation regimes is unknown. Because the National Weather Service (NWS) will soon upgrade the WSR-88D radar network to dual-polarization capability, it is important to test retrieval algorithms in different meteorological environments in order to better understand the limitations of the different methodologies. An important issue in dual-polarimetric rainfall estimation is determining which method to employ for a given set of polarimetric observables. For example, under what circumstances does differential phase information provide superior rain estimates relative to methods using reflectivity and differential reflectivity? At Colorado State University (CSU), a "blended" algorithm has been developed and used for a number of years to estimate rainfall based on ZH, ZDR, and KDP (Cifelli et al. 2002). The rainfall estimators for each sampling volume are chosen on the basis of fixed thresholds, which maximize the measurement capability of each polarimetric variable and combinations of variables. Tests have shown, however, that the retrieval is sensitive to the calculation of ice fraction in the radar volume via the difference reflectivity (ZDP; Golestani et al. 1989) methodology, such that an inappropriate estimator can be selected in situations where radar echo is
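
    A hedged sketch of how a threshold-based "blended" estimator chooses among rainfall relations given the polarimetric observables; the thresholds and power-law coefficients below are placeholders for illustration, not the CSU operational values.

```python
def blended_rain_rate(zh, zdr, kdp):
    """Select a rainfall estimator from fixed thresholds on the observables
    (zh in dBZ, zdr in dB, kdp in deg/km); returns rain rate in mm/h.
    All thresholds and coefficients are illustrative placeholders."""
    if kdp > 0.3 and zdr > 0.5:          # strong differential phase signal
        return 90.8 * kdp ** 0.93 * 10 ** (-0.169 * zdr)     # R(Kdp, Zdr)
    if kdp > 0.3:
        return 40.5 * kdp ** 0.85                            # R(Kdp)
    if zdr > 0.5:
        return 6.7e-3 * 10 ** (0.0927 * zh - 0.343 * zdr)    # R(Z, Zdr)
    return 0.017 * 10 ** (0.0714 * zh)                       # default Z-R
```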

  9. Speed improvement of B-snake algorithm using dynamic programming optimization.

    PubMed

    Charfi, Maher; Zrida, Jalel

    2011-10-01

    This paper presents a novel approach to contour approximation carried out by means of the B-snake algorithm and the dynamic programming (DP) optimization technique. Using the proposed strategy for the contour point search procedure, computing complexity is reduced to O(N×M²), whereas the standard DP method has an O(N×M⁴) complexity, with N being the number of contour sample points and M being the number of candidates in the search space. The storage requirement is also decreased from N×M³ to N×M memory elements. Experiments on noise-corrupted synthetic images, magnetic resonance images, and computed tomography medical images have shown that the proposed approach yields results equivalent to those obtained by the standard DP algorithm.

  10. Assessment and improvement of mapping algorithms for non-matching meshes and geometries in computational FSI

    NASA Astrophysics Data System (ADS)

    Wang, Tianyang; Wüchner, Roland; Sicklinger, Stefan; Bletzinger, Kai-Uwe

    2016-05-01

    This paper investigates data mapping between non-matching meshes and geometries in fluid-structure interaction. Mapping algorithms for surface meshes, including nearest element interpolation, the standard mortar method and the dual mortar method, are studied and comparatively assessed. The inconsistency problem of mortar methods at curved edges of fluid-structure interfaces is solved by a newly developed consistency-enforcing approach, which is robust enough to handle even the case in which fluid boundary facets are not in contact with structure boundary elements at all due to high fluid refinement. Besides, tests with representative geometries show that the mortar methods are suitable for conservative mapping, that the nearest element interpolation is preferable for direct mapping, and, moreover, that the dual mortar method can give slight oscillations. This work also develops a co-rotating mapping algorithm for 1D beam elements. Its novelty lies in its ability to handle large displacements and rotations.

  11. Blog Classification: Adding Linguistic Knowledge to Improve the K-NN Algorithm

    NASA Astrophysics Data System (ADS)

    Bayoudh, Ines; Bechet, Nicolas; Roche, Mathieu

    Blogs are interactive and regularly updated websites that can be seen as diaries. These websites are composed of articles on distinct topics. It is therefore necessary to develop Information Retrieval approaches for this new source of web knowledge. The first important step of this process is the categorization of the articles. This paper compares several methods that combine linguistic knowledge with the k-NN algorithm for automatic categorization of weblog articles.
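
    For orientation, a baseline k-NN text categorizer of the kind such work starts from; the linguistic enrichment studied in the paper would replace or augment the raw token features. Toy data; scikit-learn is assumed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: two topics, two documents each
docs = ["the band played a great set",
        "parliament voted on the new budget",
        "guitar solo stole the show",
        "the minister announced tax reforms"]
labels = ["music", "politics", "music", "politics"]

# TF-IDF features + k-NN classifier; linguistic features (e.g. POS-filtered
# terms) would be plugged in at the vectorizer stage
clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
clf.fit(docs, labels)
print(clf.predict(["a show with amazing drums"]))   # -> ['music']
```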

  12. Improving the quantitative testing of fast aspherics surfaces with null screen using Dijkstra algorithm

    NASA Astrophysics Data System (ADS)

    Moreno Oliva, Víctor Iván; Castañeda Mendoza, Álvaro; Campos García, Manuel; Díaz Uribe, Rufino

    2011-09-01

    The null screen is a geometric method that allows the testing of fast aspherical surfaces; the method measures the local slope of the surface, and the shape of the surface is recovered by numerical integration. The usual technique for the numerical evaluation of the surface is the trapezoidal rule, and it is a well-known fact that its truncation error increases with the second power of the spacing between spots along the integration path. Those paths are constructed by following spots reflected on the surface, starting from a selected initial spot. To reduce the numerical errors, in this work we propose the use of the Dijkstra algorithm. This algorithm can find the shortest path from one spot (or vertex) to another in a weighted connected graph. Using a modification of the algorithm, it is possible to find the minimal path from one selected spot to all the others. This automates and simplifies the integration process in tests with null screens. The efficiency of the proposal is shown by evaluating a surface previously measured with the traditional process.
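
    A textbook implementation of the algorithm in question; in the null-screen test the graph nodes would be the reflected spots and the edge weights the spot spacings. The adjacency-dict interface is illustrative.

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest paths over an adjacency dict
    {node: [(neighbor, weight), ...]}; returns {node: distance}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: spot spacings as edge weights
g = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 2.0)], "C": []}
print(dijkstra(g, "A"))   # {'A': 0.0, 'B': 1.0, 'C': 3.0}
```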

  13. Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven

    2010-05-01

    Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs. Water, this vital liquid, is the most important natural need for human life. Meeting this need requires the design and construction of water distribution networks, which incur enormous costs on the country's budget. Any reduction in these costs enables more people to be served at the least cost. Municipal councils therefore need investments that maximize benefits or minimize expenditures, and to achieve this the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (northwest Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined. The GA-based model could find optimum pipe diameters that reduce the design costs of the network. Results show that designing the water distribution network using the genetic algorithm could lead to a reduction of at least 7% in project costs in comparison to the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.
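
    A minimal GA for pipe sizing in the spirit of the paper: integer diameter choices per pipe, one-point crossover, mutation, and a penalty for hydraulically infeasible designs. The cost table and the `head_ok` feasibility hook (standing in for a hydraulic solver such as EPANET) are assumptions of this sketch.

```python
import random

DIAMETERS = [100, 150, 200, 250, 300]                        # candidates [mm]
COST_PER_M = {100: 20, 150: 32, 200: 50, 250: 75, 300: 105}  # illustrative

def cost(design, lengths, head_ok):
    """Pipe cost plus a large penalty when hydraulic constraints fail."""
    c = sum(COST_PER_M[d] * L for d, L in zip(design, lengths))
    return c + (0 if head_ok(design) else 1e6)

def ga(lengths, head_ok, pop=40, gens=100, pmut=0.1):
    P = [[random.choice(DIAMETERS) for _ in lengths] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda d: cost(d, lengths, head_ok))
        elite = P[: pop // 2]                      # keep the cheapest half
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(lengths))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < pmut:             # random diameter mutation
                child[random.randrange(len(child))] = random.choice(DIAMETERS)
            children.append(child)
        P = elite + children
    return min(P, key=lambda d: cost(d, lengths, head_ok))
```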

  14. An improved spatial tracking algorithm applied to coronary veins into Cardiac Multi-Slice Computed Tomography volume

    PubMed Central

    Garcia, Marie-Paule; Toumoulin, Christine; Garreau, Mireille; Kulik, Carine; Boulmier, Dominique; Leclercq, Christophe

    2008-01-01

    This paper describes an enhanced vessel tracking algorithm. The specificity of the method lies in the extraction of the coronary venous tree from Cardiac Multi-Slice Computed Tomography (MSCT). Indeed, contrast inhomogeneities are a major issue in these data sets and necessitate a robust tracking procedure. The method is based on an existing moment-based algorithm designed for coronary arteries in MSCT volumes. In order to extract the whole path of interest, improvements concerning the progression strategy are proposed. Furthermore, the original procedure is combined with an automatic recentring method based on ray casting. This enhanced method has been tested on three data sets. According to the first results, the method appears robust to curvature, contrast inhomogeneities and low-contrast veins. PMID:19163593

  15. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) based on designing a number of self-adequate autonomous sub-MGs via MG clustering. In doing so, a multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement and reliability enhancement are considered as the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA is provided, based on the genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. PMID:26767800

  16. Flexible Filter Bank Based on an Improved Weighted Overlap-Add Algorithm for Processing Wide Bandwidth Radio Astronomy Signals

    NASA Astrophysics Data System (ADS)

    Wang, Xianhai; Meng, Qiao; Han, J. L.; Liu, Wei; Zhang, Jianwei

    2015-12-01

    Wideband signals from a radio telescope have to be channelized for spectral observations or for dedispersion in pulsar observations. A polyphase filter bank based on the improved weighted overlap-add (IWOLA) algorithm is designed to achieve channelization. The IWOLA algorithm applies an equivalent Hilbert transform to the normal WOLA filter bank by shifting the center frequency of every sub-band by half a frequency bin, so that the IWOLA filter bank provides K independent complex output sub-bands instead of the usual K + 1 sub-bands, reducing the subsequent processing units by one set. The performance of the proposed IWOLA filter bank is analyzed by means of MATLAB simulations. We show how the IWOLA filter bank can be used for a two-stage, high-resolution spectrometer with much reduced consumption of FPGA on-chip block RAM.
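
    The half-bin frequency shift at the heart of the scheme reduces to a complex modulation of the input sequence; a sketch, with the WOLA windowing and polyphase stages omitted.

```python
import numpy as np

def half_bin_shift(x, K):
    """Shift the spectrum of x by half a bin of a K-channel filter bank:
    multiply by exp(j*pi*n/K), i.e. a frequency offset of (1/2)*(fs/K).
    This is the modulation step only; the WOLA filtering is not shown."""
    n = np.arange(len(x))
    return x * np.exp(1j * np.pi * n / K)
```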

  17. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    SciTech Connect

    Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-09-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes applied to stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as the core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs over a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.
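
    A sketch of the adaptive (smart) sampling idea under stated assumptions: a cheap surrogate classifier is fit to the pass/fail results so far, and the next expensive run is placed where the surrogate is most uncertain, i.e. near the predicted limit surface. `run_code` stands in for the system simulator, and the initial design is assumed to contain both safe and failed runs.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def adaptive_sample(run_code, bounds, n_init=16, n_iter=50, n_cand=1000):
    """Surrogate-driven sampling sketch. run_code(x) -> 1 (failure) or
    0 (safe); bounds is a list of (low, high) pairs per parameter.
    Assumes both outcomes appear in the initial random design."""
    rng = np.random.default_rng(1)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([run_code(x) for x in X])

    for _ in range(n_iter):
        surrogate = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        cand = rng.uniform(lo, hi, size=(n_cand, len(bounds)))
        prob = surrogate.predict_proba(cand)[:, 1]
        nxt = cand[np.argmin(np.abs(prob - 0.5))]   # most ambiguous point
        X = np.vstack([X, nxt])
        y = np.append(y, run_code(nxt))
    return X, y
```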

  18. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm.

    PubMed

    He, Hongxing; Fang, Hengrui; Miller, Mitchell D; Phillips, George N; Su, Wu Pei

    2016-09-01

    An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed. PMID:27580202

  19. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    PubMed Central

    He, Hongxing; Fang, Hengrui; Miller, Mitchell D.; Phillips, George N.; Su, Wu-Pei

    2016-01-01

    An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed. PMID:27580202

  20. An improved response surface methodology algorithm with an application to traffic signal optimization for urban networks

    SciTech Connect

    Joshi, S.S.; Rathi, A.K.; Tew, J.D.

    1995-12-31

    This paper illustrates the use of the simulation-optimization technique of response surface methodology (RSM) in traffic signal optimization of urban networks. It also quantifies the gains of using the common random number (CRN) variance reduction strategy in such an optimization procedure. An enhanced RSM algorithm which employs conjugate gradient search techniques and successive second-order models is presented instead of the conventional approach. An illustrative example using an urban traffic network exhibits the superiority of using the CRN strategy over direct simulation in performing traffic signal optimization. The relative performance of the two strategies is quantified with computational results using the total network-wide delay as the measure of effectiveness.

  1. Clouds and the Earth's Radiant Energy System (CERES) Algorithm Theoretical Basis Document. Volume 3; Cloud Analyses and Determination of Improved Top of Atmosphere Fluxes (Subsystem 4)

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) moderate-resolution imaging spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in simple parameterizations to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.

  2. Algorithms for improved 3-D reconstruction of live mammalian embryo vasculature from optical coherence tomography data

    PubMed Central

    Kulkarni, Prathamesh M.; Rey-Villamizar, Nicolas; Merouane, Amine; Sudheendran, Narendran; Wang, Shang; Garcia, Monica; Larina, Irina V.; Roysam, Badrinath

    2015-01-01

    Background Robust reconstructions of the three-dimensional network of blood vessels in developing embryos imaged by optical coherence tomography (OCT) are needed for quantifying the longitudinal development of vascular networks in live mammalian embryos, in support of developmental cardiovascular research. Past computational methods [such as speckle variance (SV)] have demonstrated the feasibility of vascular reconstruction, but multiple challenges remain including: the presence of vessel structures at multiple spatial scales, thin blood vessels with weak flow, and artifacts resulting from bulk tissue motion (BTM). Methods In order to overcome these challenges, this paper introduces a robust and scalable reconstruction algorithm based on a combination of anomaly detection algorithms and a parametric dictionary based sparse representation of blood vessels from structural OCT data. Results Validation results using confocal data as the baseline demonstrate that the proposed method enables the detection of vessel segments that are either partially missed or weakly reconstructed using the SV method. Finally, quantitative measurements of vessel reconstruction quality indicate an overall higher quality of vessel reconstruction with the proposed method. Conclusions Results suggest that sparsity-integrated speckle anomaly detection (SSAD) is potentially a valuable tool for performing accurate quantification of the progression of vascular development in the mammalian embryonic yolk sac as imaged using OCT. PMID:25694962

  3. An improved YEF-DCT based compression algorithm for video capsule endoscopy.

    PubMed

    Mostafa, Atahar; Khan, Tareq; Wahid, Khan

    2014-01-01

    Video capsule endoscopy is a non-invasive technique for acquiring images of the intestine for medical diagnostics. The main design challenge for an endoscopy capsule is to acquire and transmit images of acceptable quality while using as little hardware and battery power as possible. In order to save wireless transmission power and bandwidth, an efficient image compression algorithm needs to be implemented inside the electronic endoscopy capsule. In this paper, an integer discrete-cosine-transform (DCT) based algorithm is presented that works on a low-complexity color space specially designed for the wireless capsule endoscopy application. First, thousands of human endoscopic images and video frames were analyzed to identify the special intestinal features present in those frames. Then a color space, referred to as YEF, is used. The YEF converter is lossless and takes only a few adders and shift operations to implement. A low-cost quantization scheme with variable chroma sub-sampling options is also implemented to achieve higher compression. Compared with existing works, the proposed transform-coding-based compressor performs strongly, with an average compression ratio of 85% and a high image quality index, a peak signal-to-noise ratio (PSNR) of 52 dB.
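
    To illustrate the adds-and-shifts flavor of such a converter, here is a hypothetical integer luma/chroma transform; the actual YEF coefficients are defined in the paper and are not reproduced here.

```python
def rgb_to_luma_chroma(r, g, b):
    """Illustrative integer color transform built only from adds and shifts,
    in the spirit of the YEF converter (not the paper's actual definition)."""
    y = (r + (g << 1) + b) >> 2   # luma: (R + 2G + B) / 4
    e = r - g                     # chroma difference 1
    f = b - g                     # chroma difference 2
    return y, e, f
```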

  4. Improved random-starting method for the EM algorithm for finite mixtures of regressions.

    PubMed

    Schepers, Jan

    2015-03-01

    Two methods for generating random starting values for the expectation maximization (EM) algorithm are compared in terms of yielding maximum likelihood parameter estimates in finite mixtures of regressions. One of these methods is ubiquitous in applications of finite mixture regression, whereas the other method is an alternative that appears not to have been used so far. The two methods are compared in two simulation studies and on an illustrative data set. The results show that the alternative method yields solutions with likelihood values at least as high as, and often higher than, those returned by the standard method. Moreover, analyses of the illustrative data set show that the results obtained by the two methods may differ considerably with regard to some of the substantive conclusions. The results reported in this article indicate that in applications of finite mixture regression, consideration should be given to the type of mechanism chosen to generate random starting values for the EM algorithm. In order to facilitate the use of the proposed alternative method, an R function implementing the approach is provided in the Appendix of the article.
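
    The random-starts idea is generic and easy to sketch: run EM from many random initializations and keep the solution with the best log-likelihood. A Gaussian mixture is used below only to keep the example self-contained; the paper concerns mixtures of regressions, and the starting-value mechanisms it compares are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_of_random_starts(X, k=3, n_starts=20, seed=0):
    """Run EM from `n_starts` random initializations and return the fit
    with the highest mean log-likelihood on the data."""
    rng = np.random.default_rng(seed)
    best, best_ll = None, -np.inf
    for _ in range(n_starts):
        gm = GaussianMixture(n_components=k, init_params="random",
                             random_state=int(rng.integers(10 ** 6)))
        gm.fit(X)
        ll = gm.score(X)              # mean log-likelihood per sample
        if ll > best_ll:
            best, best_ll = gm, ll
    return best
```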

  5. On improving the algorithm efficiency in the particle-particle force calculations

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2016-09-01

    The problem of calculating inter-particle forces in particle-particle (PP) simulation models occupies an important place in scientific computing. Such simulation models are used in diverse scientific applications arising in astrophysics, plasma physics, particle accelerators, etc., wherever long-range forces are considered. Inverse-square laws such as Coulomb's law of electrostatic force and Newton's law of universal gravitation are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a doubly nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(Np²), where Np is the total number of particles involved. The low efficiency of the PP algorithm becomes a challenging issue in cases where high accuracy is required. An example can be taken from charged-particle beam dynamics where, in computing the beam's own space charge, so-called macro-particles are used (see e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
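
    The naïve PP method is the doubly nested loop below; exploiting Newton's third law halves the work by evaluating each pair once and applying the force to both particles. A minimal Coulomb example:

```python
import numpy as np

def pairwise_coulomb_forces(pos, q, k=8.9875517923e9):
    """Naive particle-particle force sum, O(Np^2).

    pos : (Np, 3) particle positions [m]
    q   : (Np,) charges [C]
    Returns the (Np, 3) array of total forces [N]."""
    n = len(pos)
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            f = k * q[i] * q[j] * r / np.linalg.norm(r) ** 3
            F[i] += f          # action on particle i ...
            F[j] -= f          # ... and equal, opposite reaction on j
    return F
```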

  6. Battery available power prediction of hybrid electric vehicle based on improved Dynamic Matrix Control algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Limei; Cheng, Yong; Zou, Ju

    2014-09-01

    The core technology of any hybrid electric vehicle (HEV) is the design of the energy management strategy (EMS). To develop a reasonable EMS, it is necessary to monitor the state of capacity, state of health and instantaneous available power of the battery packs. A new method that linearizes an RC equivalent circuit model and predicts battery available power according to the original Dynamic Matrix Control algorithm is proposed. To verify the validity of the new algorithm, a bench test with a lithium-ion battery cell and a HEV test with lithium-ion battery packs were carried out. The bench test results indicate that a single-RC-block equivalent circuit model can describe the dynamic and steady-state characteristics of a battery under the testing conditions. However, lacking an RC module with a long time constant, there is a deviation between the identified open-circuit voltage and that measured. The HEV test results show that the predicted battery voltage is in good agreement with that measured; the maximum difference is within 3.7%. Fixing the time constant to a numeric value, satisfactory results can still be achieved. After setting a battery discharge cut-off voltage, the instantaneous available power of the battery can be predicted.

  7. An Unsupervised Opinion Mining Approach for Japanese Weblog Reputation Information Using an Improved SO-PMI Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Guangwei; Araki, Kenji

    In this paper, we propose an improved SO-PMI (Semantic Orientation Using Pointwise Mutual Information) algorithm for use in Japanese weblog opinion mining. SO-PMI is an unsupervised approach proposed by Turney that has been shown to work well for English. When this algorithm was translated into Japanese naively, most phrases, whether positive or negative in meaning, received a negative SO. To deal with this slanting phenomenon, we propose three improvements: expanding the reference words to sets of words, introducing a balancing factor, and detecting neutral expressions. In our experiments, the proposed improvements obtained a well-balanced result: both positive and negative accuracy exceeded 62% when evaluated on 1,200 opinion sentences sampled from three different domains (reviews of electronic products, cars and travel from Kakaku.com). In a comparative experiment on the same corpus, a supervised approach (SA-Demo) achieved an accuracy very similar to our method's. This shows that our proposed approach effectively adapts SO-PMI for Japanese, and it also shows the generality of SO-PMI.
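
    The underlying score is Turney's SO-PMI: the orientation of a phrase is its summed pointwise mutual information with a positive reference set minus that with a negative set. A sketch using reference sets of words, as in the first improvement above; the `hits` co-occurrence counter and `total` corpus size are assumed stand-ins for a corpus or search index.

```python
import math

def pmi(hits, total, a, b, k=0.01):
    """Pointwise mutual information from hit counts; k smooths zero counts.
    `hits` accepts one term (single count) or two (co-occurrence count)."""
    return math.log2((hits(a, b) * total + k) / (hits(a) * hits(b) + k))

def so_pmi(hits, total, phrase, pos_words, neg_words):
    """SO-PMI with reference *sets*: positive score minus negative score.
    A result > 0 suggests positive semantic orientation."""
    return (sum(pmi(hits, total, phrase, w) for w in pos_words)
            - sum(pmi(hits, total, phrase, w) for w in neg_words))
```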

  8. An improved algorithm for the determination of aerosol optical depth in the ultraviolet spectral range from Brewer spectrophotometer observations

    NASA Astrophysics Data System (ADS)

    Sellitto, P.; di Sarra, A.; Siani, A. M.

    2006-10-01

    Methods to derive aerosol optical depth in the UV spectral range from ground-based remote-sensing stations equipped with Brewer spectrophotometers have been developed recently. In this study a modified Langley plot method has been implemented to retrieve aerosol optical depth from direct sun Brewer measurements. The method uses measurements over an extended range of atmospheric airmasses obtained with two different neutral density filters, and accounts for short-term variations of total ozone derived from the same direct sun observations. The improved algorithm has been applied to data collected with a Brewer mark IV, operational in Rome, Italy, and with a Brewer mark III, operational in Lampedusa, Italy, in the Mediterranean. The efficiency of the improved algorithm has been tested by comparing the number of determinations of the extraterrestrial constant against those obtained with a standard Langley plot procedure. The improved method produces a larger number of reliable Langley plots, allowing for a better statistical characterization of the extraterrestrial constant and a better study of its temporal variability. The values of aerosol optical depth calculated in Rome and Lampedusa compare well with simultaneous determinations in the 416-440 nm interval derived from MFRSR and CIMEL measurements.
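
    The classical step the method builds on is the Langley plot itself: since ln V = ln V0 − mτ, a straight-line fit of the log of the direct-sun signal against airmass yields the extraterrestrial constant V0 (intercept) and the total optical depth τ (negative slope). A minimal sketch:

```python
import numpy as np

def langley_fit(airmass, signal):
    """Classic Langley plot regression: ln V = ln V0 - m * tau.
    Returns (V0, tau) from measured signals over a range of airmasses."""
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    return np.exp(intercept), -slope
```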

  9. Improved accuracy of quantification of analytes in human body fluids by near-IR laser Raman spectroscopy with new algorithms

    NASA Astrophysics Data System (ADS)

    Qu, Jianan Y.; Yau, On L.; Yau, SzeFong M.

    1999-07-01

    Near-infrared Raman spectroscopy has been successfully used to quantitatively analyze ethanol and acetaminophen in human urine samples. New algorithms incorporating the intrinsic spectrum of the analyte of interest into the multivariate calibration were examined to improve the accuracy of the predicted concentrations. Compared with the commonly used partial least-squares calibration, it was found that the methods using the intrinsic spectrum of the analyte of interest always achieved much higher accuracy, particularly when the interference from other undesired chemicals in the samples is severe.

  10. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms.

    PubMed

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs) after which mode manipulations are performed to select noise-only IMFs, mixed IMFs, and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMFs components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results from the applications show that the method eliminates noise more effectively than the conventional EMD or FLP methods and decreases the standard deviations of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration. PMID:27036770

  11. Mesh Smoothing Algorithm Applied to a Finite Element Model of the Brain for Improved Brain-Skull Interface.

    PubMed

    Kelley, Mireille E; Miller, Logan E; Urban, Jillian E; Stitzel, Joel D

    2015-01-01

    The brain-skull interface plays an important role in the strain and pressure response of the brain due to impact. In this study, a finite element (FE) model was developed from a brain atlas, representing an adult brain, by converting each 1 mm isotropic voxel into a single element of the same size using a custom code developed in MATLAB. This model includes the brain (combined cerebrum and cerebellum), cerebrospinal fluid (CSF), ventricles, and a rigid skull. A voxel-based approach to developing a FE model causes the outer surface of each part to be stair-stepped, which may affect the stress and strain measurements at interfaces between parts. To improve the interaction between the skull, CSF, and brain surfaces, a previously developed mesh smoothing algorithm based on a Laplacian non-shrinking smoothing algorithm was applied to the FE model. This algorithm not only applies smoothing to the surface of the model, but also to the interfaces between the brain, CSF, and skull, while preserving volume and element quality. Warpage, Jacobian, aspect ratio, and skew were evaluated, revealing that >99% of the elements retain good element quality. Future work includes implementation of contact definitions to accurately represent the brain-skull interface and to ultimately better understand and predict head injury. PMID:25996716
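
    A minimal sketch of non-shrinking Laplacian smoothing in Taubin's lambda-mu form, one common realization of the cited scheme: a shrink step (positive factor) followed by an inflate step (negative factor) in each pass, which smooths while approximately preserving volume. The interfaces and parameter values are illustrative.

```python
import numpy as np

def taubin_smooth(verts, neighbors, lam=0.5, mu=-0.53, n_iter=10):
    """Non-shrinking Laplacian smoothing (lambda-mu scheme).

    verts     : (n, 3) vertex coordinates
    neighbors : list of index lists, neighbors[i] = vertices adjacent to i
    """
    V = verts.copy()
    for _ in range(n_iter):
        for factor in (lam, mu):
            # Umbrella Laplacian: neighbor centroid minus the vertex itself
            L = np.array([V[nbrs].mean(axis=0) - V[i]
                          for i, nbrs in enumerate(neighbors)])
            V = V + factor * L
    return V
```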

  12. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm

    PubMed Central

    Jürgens, Tim

    2016-01-01

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users’ speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception. PMID:27604785

  13. Forward-Masked Frequency Selectivity Improvements in Simulated and Actual Cochlear Implant Users Using a Preprocessing Algorithm.

    PubMed

    Langner, Florian; Jürgens, Tim

    2016-01-01

    Frequency selectivity can be quantified using masking paradigms, such as psychophysical tuning curves (PTCs). Normal-hearing (NH) listeners show sharp PTCs that are level- and frequency-dependent, whereas frequency selectivity is strongly reduced in cochlear implant (CI) users. This study aims at (a) assessing individual shapes of PTCs in CI users, (b) comparing these shapes to those of simulated CI listeners (NH listeners hearing through a CI simulation), and (c) increasing the sharpness of PTCs using a biologically inspired dynamic compression algorithm, BioAid, which has been shown to sharpen the PTC shape in hearing-impaired listeners. A three-alternative-forced-choice forward-masking technique was used to assess PTCs in 8 CI users (with their own speech processor) and 11 NH listeners (with and without listening through a vocoder to simulate electric hearing). CI users showed flat PTCs with large interindividual variability in shape, whereas simulated CI listeners had PTCs of the same average flatness, but more homogeneous shapes across listeners. The algorithm BioAid was used to process the stimuli before entering the CI users' speech processor or the vocoder simulation. This algorithm was able to partially restore frequency selectivity in both groups, particularly in seven out of eight CI users, meaning significantly sharper PTCs than in the unprocessed condition. The results indicate that algorithms can improve the large-scale sharpness of frequency selectivity in some CI users. This finding may be useful for the design of sound coding strategies particularly for situations in which high frequency selectivity is desired, such as for music perception. PMID:27604785

  15. Improvement of the Analysis of the Peroxy Radicals Using AN Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Just, Gabriel M. P.; Rupper, Patrick; Miller, Terry A.; Meerts, W. Leo

    2009-06-01

    For quite a while, our laboratory has been interested in organic peroxy radicals, which are relevant to atmospheric chemistry as well as low-temperature combustion. We first studied these radicals via room-temperature cavity ringdown spectroscopy (CRDS). We continued our investigation of the same radicals using a quasi-Fourier-transform laser source with a supersonic jet expansion in order to obtain partially rotationally resolved spectra that are nearly Doppler-limited. To analyze our spectra, we decided to complement our conventional least-squares-fit method of simulating spectra with an evolutionary algorithm (EA) approach, which uses both the frequency and the intensity information contained in our dense and complicated spectra. This presentation will focus on the CD_3O_2 spectrum to demonstrate the capabilities and the quality of the fits obtained via the EA method and compare it with the traditional least-squares-fit method.

  16. Improving Efficiency in SMD Simulations Through a Hybrid Differential Relaxation Algorithm.

    PubMed

    Ramírez, Claudia L; Zeida, Ari; Jara, Gabriel E; Roitberg, Adrián E; Martí, Marcelo A

    2014-10-14

    The fundamental object for studying a (bio)chemical reaction obtained from simulations is the free energy profile, which can be directly related to experimentally determined properties. Although quite accurate hybrid quantum (DFT-based)-classical methods are available, achieving statistically accurate and well-converged results at a moderate computational cost is still an open challenge. Here, we present and thoroughly test a hybrid differential relaxation algorithm (HyDRA), which allows faster equilibration of the classical environment during the nonequilibrium steering of a (bio)chemical reaction. We show and discuss why (in the context of Jarzynski's Relationship) this method allows obtaining accurate free energy profiles with a smaller number of independent trajectories and/or faster pulling speeds, thus reducing the overall computational cost. Moreover, due to the availability and straightforward implementation of the method, we expect that it will foster theoretical studies of key enzymatic processes. PMID:26588154

  17. Novel algorithms for improved pattern recognition using the US FDA Adverse Event Network Analyzer.

    PubMed

    Botsis, Taxiarchis; Scott, John; Goud, Ravi; Toman, Pamela; Sutherland, Andrea; Ball, Robert

    2014-01-01

    The medical review of adverse event reports for medical products requires the processing of "big data" stored in spontaneous reporting systems, such as the US Vaccine Adverse Event Reporting System (VAERS). VAERS data are not well suited to traditional statistical analyses so we developed the FDA Adverse Event Network Analyzer (AENA) and three novel network analysis approaches to extract information from these data. Our new approaches include a weighting scheme based on co-occurring triplets in reports, a visualization layout inspired by the islands algorithm, and a network growth methodology for the detection of outliers. We explored and verified these approaches by analysing the historical signal of Intussusception (IS) after the administration of RotaShield vaccine (RV) in 1999. We believe that our study supports the use of AENA for pattern recognition in medical product safety and other clinical data. PMID:25160375

  18. Improvement of FBG peak wavelength demodulation using digital signal processing algorithms

    NASA Astrophysics Data System (ADS)

    Harasim, Damian; Gulbahar, Yussupova

    2015-09-01

    The spectrum reflected or transmitted by a fiber Bragg grating (FBG) in a laboratory environment usually has a smooth shape with a high signal-to-noise ratio, similar to a Gaussian curve. However, in some applications the reflected spectrum can include strong noise, especially where the sensing array contains a large number of FBGs or where a broadband, low-power source is used. This paper presents a possibility for extracting the fiber Bragg grating peak wavelength from spectra with a weak signal-to-noise ratio using the most frequently used digital signal processing algorithms. The accuracy of the function minimum, centroid and Gaussian fitting methods for peak wavelength detection is compared. The linearity of the processing characteristics of an extended FBG, measured with a reference high-power source and a second, low-power source, is shown and compared.
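
    Of the compared detectors, the centroid (center-of-gravity) method is the simplest to state: a weighted mean of wavelength over samples above a fractional threshold. A sketch, with the threshold fraction as an assumed parameter:

```python
import numpy as np

def centroid_peak(wl, intensity, threshold_frac=0.5):
    """Centroid estimate of an FBG peak wavelength: intensity-weighted mean
    wavelength over samples above `threshold_frac` of the maximum. Robust to
    moderate noise and much cheaper than Gaussian fitting."""
    mask = intensity >= threshold_frac * intensity.max()
    return np.sum(wl[mask] * intensity[mask]) / np.sum(intensity[mask])
```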

  19. A community effort to assess and improve drug sensitivity prediction algorithms.

    PubMed

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2014-12-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods.

  20. An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1988-01-01

    An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.

  1. A community effort to assess and improve drug sensitivity prediction algorithms

    PubMed Central

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2015-01-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods. PMID:24880487

  2. Novel algorithms for improved pattern recognition using the US FDA Adverse Event Network Analyzer.

    PubMed

    Botsis, Taxiarchis; Scott, John; Goud, Ravi; Toman, Pamela; Sutherland, Andrea; Ball, Robert

    2014-01-01

    The medical review of adverse event reports for medical products requires the processing of "big data" stored in spontaneous reporting systems, such as the US Vaccine Adverse Event Reporting System (VAERS). VAERS data are not well suited to traditional statistical analyses so we developed the FDA Adverse Event Network Analyzer (AENA) and three novel network analysis approaches to extract information from these data. Our new approaches include a weighting scheme based on co-occurring triplets in reports, a visualization layout inspired by the islands algorithm, and a network growth methodology for the detection of outliers. We explored and verified these approaches by analysing the historical signal of Intussusception (IS) after the administration of RotaShield vaccine (RV) in 1999. We believe that our study supports the use of AENA for pattern recognition in medical product safety and other clinical data.

  3. An Improved Topology-Potential-Based Community Detection Algorithm for Complex Network

    PubMed Central

    Wang, Zhixiao; Zhao, Ya; Chen, Zhaotong; Niu, Qiang

    2014-01-01

    Topology potential theory is a new community detection theory for complex networks, which divides a network into communities by spreading outward from each local maximum potential node. At present, almost all topology-potential-based community detection methods ignore differences between nodes and assume that all nodes have the same mass. This hypothesis leads to inaccurate topology potential calculations and thus decreases the precision of community detection. Inspired by the idea of the PageRank algorithm, this paper puts forward a novel mass calculation method for complex network nodes. A node's mass obtained by our method can effectively reflect its importance and influence in the complex network: the more important the node is, the bigger its mass. Simulation experiment results showed that, after taking node mass into consideration, the topology potential of a node is more accurate, the distribution of topology potential is more reasonable, and the results of community detection are more precise. PMID:24600319

  4. SOM Neural Network Fault Diagnosis Method of Polymerization Kettle Equipment Optimized by Improved PSO Algorithm

    PubMed Central

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

    For meeting the real-time fault diagnosis and the optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride resin (PVC) production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault pattern is established by analyzing the production technology of polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamical adjustment method of inertial weights is adopted to optimize the structural parameters of SOM neural network. The fault pattern classification of the polymerization kettle equipment is to realize the nonlinear mapping from symptom set to fault set according to the given symptom set. Finally, the simulation experiments of fault diagnosis are conducted by combining with the industrial on-site historical data of the polymerization kettle and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective. PMID:25152929

  5. SOM neural network fault diagnosis method of polymerization kettle equipment optimized by improved PSO algorithm.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

    For meeting the real-time fault diagnosis and the optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride resin (PVC) production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault pattern is established by analyzing the production technology of polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamical adjustment method of inertial weights is adopted to optimize the structural parameters of SOM neural network. The fault pattern classification of the polymerization kettle equipment is to realize the nonlinear mapping from symptom set to fault set according to the given symptom set. Finally, the simulation experiments of fault diagnosis are conducted by combining with the industrial on-site historical data of the polymerization kettle and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.

  6. Clinical algorithm for improved prediction of ambulation and patient stratification after incomplete spinal cord injury.

    PubMed

    Zörner, Björn; Blanckenhorn, Wolf U; Dietz, Volker; Curt, Armin

    2010-01-01

    The extent of ambulatory recovery after motor incomplete spinal cord injury (miSCI) differs considerably amongst affected persons. This makes individual outcome prediction difficult and leads to increased within-group variation in clinical trials. The aims of this study on subjects with miSCI were: (1) to rank the strongest single predictors and predictor combinations of later walking capacity; (2) to develop a reliable algorithm for clinical prediction; and (3) to identify subgroups with only limited recovery of walking function. Correlation and logistic regression analyses were performed on a dataset of 90 subjects with tetra- or paraparesis, recruited in a prospective European multicenter study. Eleven measures obtained in the subacute injury period, including clinical examination, tibial somatosensory evoked potentials (tSSEP), and demographic factors, were related to ambulatory outcome (WISCI II, 6minWT) 6 months after injury. The lower extremity motor score (LEMS) alone and in combination was identified as most predictive for later walking capacity in miSCI. Ambulatory outcome of subjects with tetraparesis was correctly predicted for 92% (WISCI II) or 100% (6minWT) of the cases when LEMS was combined with either tSSEP or the ASIA Impairment Scale, respectively. For individuals with paraparesis, prediction was less distinct, mainly due to low prediction rates for individuals with poor walking outcome. A clinical algorithm was generated that allowed for the identification of a subgroup composed of individuals with tetraparesis and poor ambulatory recovery. These data provide evidence that a combination of predictors enables a reliable prediction of walking function and early patient stratification for clinical trials in miSCI.

  7. Substantial improvement of nanotube processability by freeze-drying.

    PubMed

    Maugey, M; Neri, W; Zakri, C; Derré, A; Pénicaud, A; Noé, L; Chorro, M; Launois, P; Monthioux, M; Poulin, P

    2007-08-01

    As-produced carbon nanotubes often contain a fraction of impurities such as metal catalysts, inorganic supports, and carbon by-products. These impurities can be partially removed by using acidic dissolution. The resulting nanotube materials have to be dried to form a powder. The processability of nanotubes subjected to regular (thermal vaporisation) drying is particularly difficult because capillary forces pack and stick the nanotubes irreversibly, which limits their dispersability in polymeric matrices or solvents. We show that this dramatic limitation can be circumvented by using freeze-drying instead of regular-drying during nanotube purification process. In this case, the nanotubes are trapped in frozen water which is then sublimated. As a result the final powder is significantly less compact and, more important, the nanotubes can be easily dispersed with no apparent aggregates, thereby greatly enhancing their processability, e.g., they can be used to make homogeneous composites and fibers. Results from coagulation spinning from water-based dispersions of regularly-dried and freeze-dried nanotubes are compared. We also show that freeze-dried materials, in contrast to regularly-dried materials, can be dissolved in organic polar solvents using alkali-doped nanotubes. High resolution TEM and XRD analysis demonstrate that the nanotube structure and quality are not affected at the nanoscale by freeze-drying treatments. PMID:17685277

  8. Three-Dimensional Path Planning and Guidance of Leg Vascular Based on Improved Ant Colony Algorithm in Augmented Reality.

    PubMed

    Gao, Ming-ke; Chen, Yi-min; Liu, Quan; Huang, Chen; Li, Ze-yu; Zhang, Dian-hua

    2015-11-01

    Preoperative path planning plays a critical role in vascular access surgery. Vascular access surgery is especially difficult and requires long training periods as well as precise operation. Yet doctors are at different levels of expertise, so large blood vessels are usually chosen for surgery and other possibly optimal paths are not considered. Moreover, patients and surgeons suffer from X-ray radiation during the surgical procedure. This study proposes an improved ant colony algorithm to plan an optimal three-dimensional vascular path with overall consideration of factors such as catheter diameter and vascular length and diameter, as well as curvature and torsion. To protect the doctor and patient from long-term exposure to X-rays, the paper adopts augmented reality technology to register the reconstructed vascular model with the physical model, locates the catheter with an electromagnetic tracking system, and uses a head-mounted display to show the planned path in real time and monitor the catheter push procedure. The experiment demonstrates the reasonableness of the preoperative path planning and proves the reliability of the algorithm. The augmented reality experiment accurately displays the vascular phantom model, the planned path and the catheter trajectory in real time and proves the feasibility of this method. The paper presents a useful and feasible surgical scheme based on the improved ant colony algorithm to plan a three-dimensional vascular path in augmented reality. The study possesses practical guiding significance in preoperative path planning, intraoperative catheter guidance and surgical training, providing a theoretical path planning method for vascular access surgery. It is a safe and reliable path planning approach with practical reference value.

  9. Improved Temperature Sounding and Quality Control Methodology Using AIRS/AMSU Data: The AIRS Science Team Version 5 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky

    2009-01-01

    This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R_i for all channels. This new approach allows for the generation of more accurate values of R_i and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of AIRS Version 4, Version 5, and Version 5 AO for a single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.

  10. An Improved Cloud Classification Algorithm for China's FY-2C Multi-Channel Images Using Artificial Neural Network.

    PubMed

    Liu, Yu; Xia, Jun; Shi, Chun-Xiang; Hong, Yang

    2009-01-01

    The crowning objective of this research was to identify a better cloud classification method to upgrade the current window-based clustering algorithm used operationally for China's first operational geostationary meteorological satellite FengYun-2C (FY-2C) data. First, the capabilities of six widely used Artificial Neural Network (ANN) methods are analyzed, together with a comparison of two other methods, Principal Component Analysis (PCA) and a Support Vector Machine (SVM), using 2864 cloud samples manually collected by meteorologists in June, July, and August 2007 from three FY-2C channels (IR1, 10.3-11.3 μm; IR2, 11.5-12.5 μm; and WV, 6.3-7.6 μm) of imagery. The results show that: (1) the ANN approaches, in general, outperformed the PCA and the SVM given sufficient training samples, and (2) among the six ANN networks, higher cloud classification accuracy was obtained with the Self-Organizing Map (SOM) and the Probabilistic Neural Network (PNN). Second, to compare the ANN methods to the present FY-2C operational algorithm, this study implemented the SOM, one of the best ANN networks identified in this study, as an automated cloud classification system for the FY-2C multi-channel data. The SOM method greatly improved the results, not only in pixel-level accuracy but also in cloud-patch-level classification, by more accurately identifying cloud types such as cumulonimbus, cirrus, and clouds at high latitudes. The findings of this study suggest that ANN-based classifiers, in particular the SOM, can potentially be used as an improved automated cloud classification algorithm to upgrade the current window-based clustering method for the FY-2C operational products.
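
    For readers unfamiliar with the SOM, the following is a minimal sketch of the self-organizing map training that underlies such a classifier; the grid size, learning schedule, and the random stand-in for the three-channel brightness data are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    # Minimal self-organizing map sketch for multi-channel pixel clustering.
    # Real inputs would be FY-2C brightness values (IR1, IR2, WV); random
    # data stands in here.
    rng = np.random.default_rng(0)
    X = rng.random((2864, 3))             # 2864 samples, 3 channels
    grid = rng.random((6, 6, 3))          # 6x6 map of weight vectors

    def bmu(x):
        """Index of the best-matching unit for sample x."""
        d = np.linalg.norm(grid - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    for epoch in range(20):
        lr = 0.5 * (1 - epoch / 20)       # decaying learning rate
        sigma = 3.0 * (1 - epoch / 20) + 0.5
        for x in X:
            bi, bj = bmu(x)
            ii, jj = np.indices(grid.shape[:2])
            # Gaussian neighborhood pulls nearby units toward the sample.
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            grid += lr * h[..., None] * (x - grid)

    print("map unit of first pixel:", bmu(X[0]))
    ```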

  12. Pump scheme for gain-flattened Raman fiber amplifiers using improved particle swarm optimization and modified shooting algorithm.

    PubMed

    Jiang, Hai-ming; Xie, Kang; Wang, Ya-fei

    2010-05-24

    An effective pump scheme for the design of Raman fiber amplifiers with broadband, flat gain spectra is proposed. This approach uses a new shooting algorithm based on a modified Newton-Raphson method and a contraction factor to solve the two-point boundary-value problems of the Raman coupled equations more stably and efficiently. In combination with an improved particle swarm optimization method, which improves the efficiency and convergence rate by introducing a new parameter called the velocity acceptability probability, this scheme optimizes the wavelengths and power levels of the pumps quickly and accurately. Several broadband Raman fiber amplifiers in the C+L band with optimized pump parameters are designed. A 4-pump amplifier is designed to deliver an average on-off gain of 13.3 dB over a bandwidth of 80 nm, with a maximum in-band gain ripple of about +/-0.5 dB.
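
    The abstract does not spell out how the velocity acceptability probability enters the update, so the sketch below makes one plausible assumption: each new velocity component is accepted only with probability P_ACC. Everything else is a textbook particle swarm step with a toy objective in place of the amplifier gain-flatness cost.

    ```python
    import random

    # Toy stand-in for the gain-flatness cost over pump parameters.
    def objective(x):
        return sum((xi - 0.3) ** 2 for xi in x)

    DIM, N, W, C1, C2, P_ACC = 4, 20, 0.7, 1.5, 1.5, 0.8
    pos = [[random.random() for _ in range(DIM)] for _ in range(N)]
    vel = [[0.0] * DIM for _ in range(N)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)

    for _ in range(200):
        for i in range(N):
            for d in range(DIM):
                new_v = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
                # Accept the new velocity only with probability P_ACC,
                # otherwise keep the old one (the assumed gating rule).
                if random.random() < P_ACC:
                    vel[i][d] = new_v
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)

    print("best pump parameters (toy):", [round(x, 3) for x in gbest])
    ```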

  13. A partitioned shift-without-invert algorithm to improve parallel eigensolution efficiency in real-space electronic transport

    NASA Astrophysics Data System (ADS)

    Feldman, Baruch; Zhou, Yunkai

    2016-10-01

    We present an eigenspectrum partitioning scheme without inversion for the recently described real-space electronic transport code, TRANSEC. The primary advantage of TRANSEC is its highly parallel algorithm, which enables studying conductance in large systems. The present scheme adds a new source of parallelization, significantly enhancing TRANSEC's parallel scalability, especially for systems with many electrons. In principle, partitioning could enable super-linear parallel speedup, as we demonstrate in calculations within TRANSEC. In practical cases, we report a better than five-fold improvement in CPU time and similar improvements in wall time, compared to previously published large calculations. Importantly, the suggested scheme is relatively simple to implement. It can be useful for general large Hermitian or weakly non-Hermitian eigenvalue problems, whenever relatively accurate inversion via direct or iterative linear solvers is impractical.

  14. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements from which a three-dimensional spline is interpolated to model the patient's skull. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided through the use of a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm able to calculate the contour of the skull directly from an image set and implement the measured beamlet data in shot time calculations was developed. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDDs of the beamlets were slight, with a maximum of 1.6% found for the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences of up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations of no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%. The skull scaling affected the shot time calculations the most significantly, with differences of up to 5%.

  15. Improving pollutant source characterization by better estimating wind direction with a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Allen, Christopher T.; Young, George S.; Haupt, Sue Ellen

    In homeland security applications, it is often necessary to characterize the source location and strength of a potentially harmful contaminant. Correct source characterization requires accurate meteorological data such as wind direction. Unfortunately, available meteorological data is often inaccurate or unrepresentative, having insufficient spatial and temporal resolution for precise modeling of pollutant dispersion. To address this issue, a method is presented that simultaneously determines the surface wind direction and the pollutant source characteristics. This method compares monitored receptor data to pollutant dispersion model output and uses a genetic algorithm (GA) to find the combination of source location, source strength, and surface wind direction that best matches the dispersion model output to the receptor data. A GA optimizes variables using principles from genetics and evolution. The approach is validated with an identical twin experiment using synthetic receptor data and a Gaussian plume equation as the dispersion model. Given sufficient receptor data, the GA is able to reproduce the wind direction, source location, and source strength. Additional runs incorporating white noise into the receptor data to simulate real-world variability demonstrate that the GA is still capable of computing the correct solution, as long as the magnitude of the noise does not exceed that of the receptor data.
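
    A minimal sketch of the coupled search, assuming a crude Gaussian plume with the source at the origin and only two unknowns (source strength and wind direction); the paper's GA also searches the source location, and its dispersion model and GA settings differ.

    ```python
    import random, math

    def plume(x, y, q, wind_deg):
        """Toy ground-level concentration of a point source at the origin."""
        th = math.radians(wind_deg)
        dx = x * math.cos(th) + y * math.sin(th)    # downwind distance
        dy = -x * math.sin(th) + y * math.cos(th)   # crosswind distance
        if dx <= 0:
            return 0.0
        sig = 0.1 * dx                               # crude dispersion width
        return q / (2 * math.pi * sig ** 2) * math.exp(-dy ** 2 / (2 * sig ** 2))

    receptors = [(50.0 * i, 10.0 * j) for i in range(1, 4) for j in (-1, 0, 1)]
    truth = (2.0, 45.0)                              # strength, wind direction
    obs = [plume(x, y, *truth) for x, y in receptors]

    def fitness(ind):
        q, wd = ind
        err = sum((plume(x, y, q, wd) - o) ** 2
                  for (x, y), o in zip(receptors, obs))
        return -err                                  # GA maximizes fitness

    popn = [(random.uniform(0.1, 5), random.uniform(0, 90)) for _ in range(40)]
    for _ in range(100):
        popn.sort(key=fitness, reverse=True)
        elite = popn[:10]
        # Children: Gaussian mutations of elite individuals.
        popn = elite + [
            (max(0.01, random.choice(elite)[0] + random.gauss(0, 0.2)),
             random.choice(elite)[1] + random.gauss(0, 2.0))
            for _ in range(30)
        ]
    print("recovered (strength, wind dir):", max(popn, key=fitness))
    ```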

  16. A Practical Algorithm for Improving Localization and Quantification of Left Ventricular Scar

    PubMed Central

    Zenger, Brian; Cates, Joshua; Morris, Alan; Kholmovski, Eugene; Au, Alexander; Ranjan, Ravi; Akoum, Nazem; McGann, Chris; Wilson, Brent; Marrouche, Nassir; Han, Frederick T.; MacLeod, Rob S.

    2015-01-01

    Current approaches to classification of left ventricular scar rely on manual segmentation of myocardial borders and manual classification of scar tissue. In this paper, we propose a novel, semi-automatic approach to segment the left ventricular wall and classify scar tissue using a combination of modern image processing techniques. We obtained high-resolution magnetic resonance angiograms (MRA) and late-gadolinium enhanced magnetic resonance imaging (LGE-MRI) in 14 patients who had ventricular scar from a prior myocardial infarction. We applied (1) a level set-based segmentation approach using a combination of the MRA and LGE-MRI to segment the myocardium and then (2) an automated signal intensity algorithm (Otsu thresholding) to identify ventricular scar tissue. We compared results from both steps to those of expert observers. The LV geometry obtained using the semi-automated segmentation method had a mean overlap of 94% with the manual segmentations. The scar volumes obtained with the Otsu method correlated with the expert observer scar volumes (Dice comparison coefficient of 0.85 ± 0.11). This proof-of-concept segmentation pipeline provides a more objective method for identifying scar in the left ventricle than manual approaches. PMID:26448961
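
    Step (2) is standard Otsu thresholding: choose the intensity cut that maximizes the between-class variance. Below is a minimal sketch on synthetic bimodal intensities (not real LGE-MRI voxels).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    intensities = np.concatenate([rng.normal(80, 10, 5000),    # healthy tissue
                                  rng.normal(160, 15, 1000)])  # enhanced scar

    def otsu_threshold(values, bins=256):
        """Threshold maximizing between-class variance."""
        hist, edges = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                  # class-0 probability
        mu = np.cumsum(p * centers)        # cumulative mean
        mu_t = mu[-1]
        w1 = 1 - w0
        valid = (w0 > 0) & (w1 > 0)
        between = np.zeros_like(w0)
        between[valid] = ((mu_t * w0[valid] - mu[valid]) ** 2
                          / (w0[valid] * w1[valid]))
        return centers[np.argmax(between)]

    t = otsu_threshold(intensities)
    print(f"Otsu threshold: {t:.1f}; scar fraction: "
          f"{(intensities > t).mean():.2%}")
    ```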

  17. An improved algorithm for the modeling of vapor flow in heat pipes

    NASA Technical Reports Server (NTRS)

    Tower, Leonard K.; Hainley, Donald C.

    1989-01-01

    A heat pipe vapor flow algorithm suitable for use in codes on microcomputers is presented. The incompressible heat pipe vapor flow studies of Busse are extended to incorporate compressibility effects. The Busse velocity profile factor is treated as a function of temperature and pressure. The assumption of a uniform saturated vapor temperature determined by the local pressure at each cross section of the pipe is not made. Instead, a mean vapor temperature, defined by an energy integral, is determined in the course of the solution in addition to the pressure, saturation temperature at the wall, and the Busse velocity profile factor. For alkali metal working fluids, local species equilibrium is assumed. Temperature and pressure profiles are presented for several cases involving sodium heat pipes. An example for a heat pipe with an adiabatic section and two evaporators in sequence illustrates the ability to handle axially varying heat input. A sonic limit plot for a short evaporator falls between curves for the Busse and Levy inviscid sonic limits.

  18. Iterative restoration algorithms for improving the range accuracy in imaging laser radar

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Yan, Huimin; Zhang, Xiuda; Shangguan, Wangpin; Su, Heng

    2010-11-01

    Scannerless imaging laser radar has been a focus of research in recent years for its fast imaging speed and high resolution. We introduce a three-dimensional imaging laser radar using an intensified CCD as the receiver, operated with constant gain and linearly modulated gain. The distance map of a scene is obtained from two intensity images. According to the transmission characteristics of the imaging system, a model of the degradation of the gray images is established, and the range accuracy of the imaging laser radar based on this model is analyzed. The results show that in fast-distance-varying regions the range accuracy is related to the reflectivity, the actual distance, and other factors, while for flat areas it is mainly determined by shot noise. On the basis of the causes of measurement error and the distribution characteristics of the noise, a method that applies iterative restoration algorithms to the obtained intensity images is presented. Simulations were carried out, and the results show that the root mean square error of the distance map obtained with this method is decreased by 50% compared with the directly measured distance map. Finally, restoration results for radar images are demonstrated to verify the effectiveness of this method.

  19. Combination of digital signal processing methods towards an improved analysis algorithm for structural health monitoring.

    NASA Astrophysics Data System (ADS)

    Pentaris, Fragkiskos P.; Makris, John P.

    2013-04-01

    In Structural Health Monitoring (SHM), it is of great importance to extract from the recorded data information that could be used to predict or indicate structural fault or damage in a building. In this work a combination of digital signal processing methods, namely the FFT along with the wavelet transform, is applied, together with a proposed algorithm for studying frequency dispersion, in order to reveal non-linear characteristics of SHM data collected in two university buildings under natural or anthropogenic excitation. The selected buildings are of great importance from a civil protection point of view, as they are the premises of a public higher-education institute, subject to heavy use, stress, and visits from academic staff and students. The SHM data are collected from two neighboring buildings of different ages (4 and 18 years old, respectively). The proposed digital signal processing methods are applied to the data, and a comparison is presented of the structural behavior of both buildings in response to seismic activity, weather conditions, and man-made activity. Acknowledgments: This work was supported in part by the Archimedes III Program of the Ministry of Education of Greece, through the Operational Program "Educational and Lifelong Learning", in the framework of the project entitled «Interdisciplinary Multi-Scale Research of Earthquake Physics and Seismotectonics at the front of the Hellenic Arc (IMPACT-ARC)» and is co-financed by the European Union (European Social Fund) and the Greek National Fund.
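
    As a simple illustration of why the FFT and a wavelet transform complement each other here, the sketch below analyzes a synthetic record whose dominant frequency drifts slowly; the FFT gives one global peak, while the wavelet ridge tracks the drift over time. The sampling rate, drift, and Ricker wavelet are illustrative choices, not the paper's processing chain.

    ```python
    import numpy as np

    fs = 100.0                               # sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)
    f_inst = 5.0 + 0.02 * t                  # slowly dispersing frequency
    x = (np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
         + 0.3 * np.random.randn(t.size))    # noisy accelerometer stand-in

    # Global spectrum via FFT.
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    print("dominant frequency (FFT): %.2f Hz" % freqs[np.argmax(spec)])

    # Time-frequency view via Ricker (Mexican-hat) wavelet convolutions.
    def ricker(points, a):
        u = np.arange(points) - (points - 1) / 2.0
        return (1 - (u / a) ** 2) * np.exp(-u ** 2 / (2 * a ** 2))

    widths = np.arange(2, 30)
    cwt = np.array([np.convolve(x, ricker(200, w), mode="same")
                    for w in widths])
    # The ridge of |CWT| shows how the dominant scale evolves over time,
    # i.e. the frequency dispersion the proposed algorithm studies.
    ridge = widths[np.argmax(np.abs(cwt), axis=0)]
    print("wavelet scale near start / end:", ridge[100], ridge[-100])
    ```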

  20. Improved perception of music with a harmonic based algorithm for cochlear implants.

    PubMed

    Li, Xing; Nie, Kaibao; Imennov, Nikita S; Rubinstein, Jay T; Atlas, Les E

    2013-07-01

    The lack of fine structure information in conventional cochlear implant (CI) encoding strategies presumably contributes to the generally poor music perception with CIs. To improve CI users' music perception, a harmonic-single-sideband-encoder (HSSE) strategy was developed, which explicitly tracks the harmonics of a single musical source and transforms them into modulators conveying both amplitude and temporal fine structure cues to the electrodes. To investigate its effectiveness, vocoder simulations of HSSE and the conventional continuous-interleaved-sampling (CIS) strategy were implemented. Using these vocoders, the melody and timbre recognition performance of five normal-hearing subjects was evaluated: a significant benefit of HSSE for both melody (p < 0.002) and timbre (p < 0.026) recognition was found. Additionally, HSSE was acutely tested in eight CI subjects. On timbre recognition, a significant advantage of HSSE over the subjects' clinical strategy was demonstrated: the largest improvement was 35% and the mean 17% (p < 0.013). On melody recognition, two subjects showed 20% improvement with HSSE; however, the mean improvement of 7% across subjects was not significant (p > 0.090). To quantify the temporal cues delivered to the auditory nerve, the neural spike patterns evoked by HSSE and CIS for one melody stimulus were simulated using an auditory nerve model. Quantitative analysis demonstrated that HSSE can convey temporal pitch cues better than CIS. The results suggest that HSSE is a promising strategy to enhance music perception with CIs. PMID:23613083

  2. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), consisting of fixing the last observed data point on the LS extrapolation curve, which customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data over a longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced the short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of the AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of the AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
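
    A minimal sketch of the basic idea, assuming a degree-1 polynomial plus annual and Chandler-period sinusoids, and imposing the last-point constraint by shifting the fitted constant term (one simple way to realize the fix described above; the series, periods, and constraint mechanics are illustrative, not the paper's exact procedure).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(0.0, 3000.0)                    # days
    periods = [365.25, 433.0]                     # annual and Chandler-like
    y = (0.1 + 1e-4 * t
         + 0.2 * np.sin(2 * np.pi * t / periods[0])
         + 0.15 * np.cos(2 * np.pi * t / periods[1])
         + 0.01 * rng.standard_normal(t.size))

    def design(tt):
        cols = [np.ones_like(tt), tt]             # polynomial part (degree 1)
        for p in periods:
            cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)

    # Fix the curve to the last observation by shifting the constant term,
    # so the extrapolation starts exactly at the final data point.
    coef[0] += y[-1] - design(t[-1:]).dot(coef)[0]

    t_fut = np.arange(t[-1] + 1, t[-1] + 366)
    pred = design(t_fut).dot(coef)
    print("first 3 predicted values:", np.round(pred[:3], 4))
    ```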

  3. An improved MLC segmentation algorithm and software for step-and-shoot IMRT delivery without tongue-and-groove error

    SciTech Connect

    Luan Shuang; Wang Chao; Chen, Danny Z.; Hu, Xiaobo S.; Naqvi, Shahid A.; Wu Xingen; Yu, Cedric X.

    2006-05-15

    We present an improved multileaf collimator (MLC) segmentation algorithm, denoted SLS_NOTG (static leaf sequencing with no tongue-and-groove error), for step-and-shoot intensity-modulated radiation therapy (IMRT) delivery. SLS_NOTG is an improvement over the MLC segmentation algorithm called SLS that was developed by Luan et al. [Med. Phys. 31(4), 695-707 (2004)], which did not consider tongue-and-groove error corrections. The aims of SLS_NOTG are (1) shortening the treatment times of IMRT plans by minimizing their numbers of segments and (2) minimizing the tongue-and-groove errors of the computed IMRT plans. The input to SLS_NOTG is intensity maps (IMs) produced by current planning systems, and its output is (modified) optimized leaf sequences without tongue-and-groove error. Like the previous SLS algorithm [Luan et al., Med. Phys. 31(4), 695-707 (2004)], SLS_NOTG is based on graph algorithmic techniques from computer science. It models the MLC segmentation problem as a weighted minimum-cost path problem, where the weight of the path is the number of segments and the cost of the path is the amount of tongue-and-groove error. Our comparisons of SLS_NOTG with CORVUS indicated that for the same intensity maps, the numbers of segments computed by SLS_NOTG are up to 50% less than those computed by CORVUS 5.0 on the Elekta LINAC system. Our clinical verifications have shown that the dose distributions of the SLS_NOTG plans do not have tongue-and-groove error and match those of the corresponding CORVUS plans, thus confirming the correctness of SLS_NOTG. Compared with existing segmentation methods, SLS_NOTG also has two additional advantages: (1) SLS_NOTG can compute leaf sequences whose tongue-and-groove error is minimized subject to a constraint on the maximum allowed number of segments, which may be desirable in clinical situations where a treatment with the complete correction of tongue-and-groove error takes too long.

  4. Improved aerosol retrieval algorithm using Landsat images and its application for PM10 monitoring over urban areas

    NASA Astrophysics Data System (ADS)

    Luo, Nana; Wong, Man Sing; Zhao, Wenji; Yan, Xing; Xiao, Fei

    2015-02-01

    Aerosol retrieval using the MODerate resolution Imaging Spectroradiometer (MODIS) has been well researched over the past decade. However, its application is limited to global- and regional-scale studies, which may not be suitable for urban areas due to its low spatial resolution. To overcome this limitation, this paper proposes an improved aerosol retrieval algorithm for Landsat images (ImAero-Landsat) at a spatial resolution of 30 m. The ImAero-Landsat algorithm offers two improvements: (i) it does not require a comprehensive look-up table and is thus more efficient in AOT retrieval; and (ii) it can be operated over both bright and dark surfaces. The derived aerosol optical thickness (AOT) images were validated against AErosol RObotic NETwork (AERONET) measurements as well as MODIS MOD04 AOT products. Small root mean square errors (RMSEs) of 0.11 and 0.14 and mean absolute differences (MADs) of 0.07 and 0.11 were observed between ImAero-Landsat AOT and the MODIS MOD04 and AERONET products, respectively. In correlating with ground-based PM10 concentrations, the ImAero-Landsat method (r2 = 0.32) outperforms the MOD04 AOT products (r2 = 0.23). In addition, the accuracy of estimating PM10 can be improved to r2 = 0.55 when the derived AOT is integrated with meteorological parameters. This accuracy is similar to the results derived from AERONET AOT (r2 = 0.62). This study offers a simple and accurate method to investigate aerosol optical thickness at a detailed city scale. Environmental authorities may use the derived methods for producing aerosol distribution maps and pinpointing the sources of pollutants in urban areas.

  5. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequencies. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is further developed based on the bond graph method to accurately predict the performance of the MR engine mount. An optimization mathematical model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this mathematical model, the lumped parameters are considered as design variables. The maximum force transmissibility and the corresponding frequency in the low frequency range, as well as the individual lumped parameters, are limited as constraints. A multiple-interval sensitivity analysis method is developed to select the optimized variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. A synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated. A set of real design parameters is thus obtained through the internal relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved non-dominated sorting genetic algorithm (NSGA-II) is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the several frequency ranges addressed.
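
    At the heart of NSGA-II is fast non-dominated sorting of the population into Pareto fronts. Below is a minimal two-objective sketch (toy objective values standing in for, say, force transmissibility in two frequency bands, not actual mount parameters).

    ```python
    # Sketch of the non-dominated sorting step used by NSGA-II.
    def dominates(a, b):
        """True if a is no worse in all objectives and better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def non_dominated_sort(objs):
        fronts, remaining = [], list(range(len(objs)))
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(objs[j], objs[i]) for j in remaining)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts

    population = [(1.2, 3.4), (0.8, 4.0), (1.5, 2.0), (2.0, 2.5), (0.9, 3.9)]
    for rank, front in enumerate(non_dominated_sort(population)):
        print("front", rank, "->", [population[i] for i in front])
    ```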

  6. Optimization of frequency lowering algorithms for getting the highest speech intelligibility improvement by hearing loss simulation.

    PubMed

    Arıöz, Umut; Günel, Banu

    2015-06-01

    High-frequency hearing loss is a growing problem for both children and adults. To overcome this impairment, different frequency lowering methods (FLMs) have been tried since the 1930s; however, none has been fully satisfactory. In this study, to obtain higher speech intelligibility, eight originally designed combinations of FLMs were tested with simulated sounds on normal-hearing subjects. The improvements were calculated as the difference from the standard hearing aid method, amplification. High-frequency hearing loss was simulated with combined suprathreshold effects. An offline study was carried out for each subject to determine the significant methods used in the modified rhyme test (MRT) (a subjective measure of intelligibility). Significant methods were determined according to their speech intelligibility index (SII) (an objective measure of intelligibility). All cases were tested in four noisy environments and one noise-free environment. Twelve hearing-impaired subjects were simulated by hearing loss simulation (HLS). The MRT was developed for the Turkish language for the first time. In total, 71 cases were statistically significant across the twelve subjects. The FLMs achieved an 83% success rate against amplification, supporting them as an alternative to amplification in noisy environments. For four subjects, all significant methods gave greater improvements than amplification. In conclusion, specific method recommendations for different noisy environments were made for each subject to achieve greater speech intelligibility.

  7. Improvement of an algorithm for recognition of liveness using perspiration in fingerprint devices

    NASA Astrophysics Data System (ADS)

    Parthasaradhi, Sujan T.; Derakhshani, Reza; Hornak, Lawrence A.; Schuckers, Stephanie C.

    2004-08-01

    Previous work in our laboratory and others has demonstrated that spoof fingers made of a variety of materials including silicone, Play-Doh, clay, and gelatin (gummy fingers) can be scanned and verified when compared to a live enrolled finger. Liveness detection, i.e., determining whether the introduced biometric comes from a live source, has been suggested as a means to circumvent attacks using spoof fingers. We developed a new liveness method based on perspiration changes in the fingerprint image. Recent results showed a classification rate of approximately 90% using different classification methods for various technologies including optical, electro-optical, and capacitive DC, with a shorter time window and a diverse dataset. This paper focuses on improving the live classification rate by using a weight decay method during the training phase in order to improve generalization and reduce the variance of the neural network based classifier. The dataset included fingerprint images from 33 live subjects, 33 spoofs created with dental impression material and Play-Doh, and fourteen cadaver fingers. 100% live classification was achieved with 81.8 to 100% spoof classification, depending on the device technology. The weight decay method improves upon past reports by increasing the live and spoof classification rates.
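
    Weight decay is an L2 penalty added to the training gradient so the weights shrink toward zero, which improves generalization and reduces classifier variance. The sketch below shows the decay term in a single-layer logistic stand-in for the paper's neural network; the features, labels, and rates are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 8))                # perspiration features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # toy live/spoof labels

    w = rng.standard_normal(8) * 0.1
    b = 0.0
    LR, DECAY = 0.1, 1e-3                            # learning rate, decay

    for _ in range(500):
        z = X @ w + b
        p = 1 / (1 + np.exp(-z))                     # logistic output
        grad_w = X.T @ (p - y) / len(y) + DECAY * w  # decay term added here
        grad_b = np.mean(p - y)
        w -= LR * grad_w
        b -= LR * grad_b

    acc = np.mean((p > 0.5) == y)
    print(f"training accuracy: {acc:.2%}, |w| = {np.linalg.norm(w):.3f}")
    ```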

  8. Toward More Substantial Theories of Language Acquisition

    ERIC Educational Resources Information Center

    Jenson, Cinnamon Ann

    2015-01-01

    Cognitive linguists argue that certain sets of knowledge of language are innate. However, critics have argued that the theoretical concept of "innateness" should be eliminated since it is ambiguous and insubstantial. In response, I aim to strengthen theories of language acquisition and identify ways to make them more substantial. I…

  9. 77 FR 39452 - Substantial Business Activities; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-03

    ..., June 12, 2012 (77 FR 34887) regarding whether a foreign corporation has substantial business activities...- 107889-12), which was the subject of FR. Doc. 2012-14238, is corrected as follows: On page 34887, column.... Lyons, (202) 622-3860; and David A. Levine, (202) 622-3860, and regarding the submission of...

  10. 40 CFR 725.94 - Substantiation requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... following questions must be answered: (i) What harmful effects to the company's or institution's competitive...? How substantial would the harmful effects of disclosure be? What is the causal relationship between the disclosure and the harmful effects? (ii) Has the identity of the microorganism been...

  11. 40 CFR 725.94 - Substantiation requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... following questions must be answered: (i) What harmful effects to the company's or institution's competitive...? How substantial would the harmful effects of disclosure be? What is the causal relationship between the disclosure and the harmful effects? (ii) Has the identity of the microorganism been...

  12. CALIOP/CALIPSO: Improvement in the retrieval algorithm and a few applications

    NASA Astrophysics Data System (ADS)

    Kacenelenbogen, M. S.; Vaughan, M.; Redemann, J.; Hoff, R. M.; Rogers, R.; Ferrare, R. A.; Russell, P. B.; Hostetler, C. A.; Hair, J. W.; Holben, B.

    2010-12-01

    The Cloud Aerosol LIdar with Orthogonal Polarization (CALIOP), on board the CALIPSO platform, has measured profiles of total attenuated backscatter coefficient (level 1 products) since June 2006. CALIOP’s level 2 products, such as the aerosol backscatter and extinction coefficient profiles, are retrieved using a complex succession of automated algorithms. One of our goals was to help identify potential shortcomings in the CALIOP version 2 level 2 aerosol extinction product and to illustrate some of the motivation for the changes that were introduced in the next version of CALIOP data (version 3, currently being processed). As a first step, we compared CALIOP version 2-derived AOD with collocated MODerate-resolution Imaging Spectroradiometer (MODIS) AOD retrievals over the Continental United States. The best statistical agreement between those two quantities was found over the Eastern part of the United States with, nonetheless, a weak correlation (R~0.4) and an apparent CALIOP version 2 underestimation (by ~66 %) of MODIS AOD. To help quantify the potential factors contributing to the uncertainty of the CALIOP aerosol extinction retrieval, we then focused on a one-day, multi-instrument, multiplatform comparison study during the CALIPSO and Twilight Zone (CATZ) validation campaign on August 04, 2007. This case study illustrates the following potential reasons for a bias in the version 2 CALIOP AOD: (i) CALIOP’s low signal-to-noise ratio (SNR) leading to the misclassification and/or lack of aerosol layer identification, especially close to the Earth’s surface; (ii) the cloud contamination of CALIOP version 2 aerosol backscatter and extinction profiles; (iii) potentially erroneous assumptions of the backscatter-to-extinction ratio (Sa) used in CALIOP’s extinction retrievals; and (iv) calibration coefficient biases in the CALIOP daytime attenuated backscatter coefficient profiles. We then show the use of the CALIPSO aerosol vertical distribution information in

  13. Development of double-pair double difference earthquake location algorithm for improving earthquake locations

    NASA Astrophysics Data System (ADS)

    Guo, Hao; Zhang, Haijiang

    2016-10-01

    The event-pair double-difference (DD) earthquake location method, as incorporated in hypoDD, has been widely used to improve relative earthquake locations by using event-pair differential arrival times from pairs of events to common stations, because some common path anomalies outside the source region cancel out due to similar ray paths. Similarly, station-pair differential arrival times from one event to pairs of stations can also be used to improve earthquake locations by canceling out the event origin time and some path anomalies inside the source region. To exploit the advantages of both DD location methods, we have developed a new double-pair DD location method that uses differential times constructed from pairs of events to pairs of stations to determine higher-precision relative earthquake locations. Compared to the event-pair and station-pair DD location methods, the new method removes event origin times and station correction terms from the inversion system and cancels out path anomalies both outside and inside the source region at the same time. The new method is tested on earthquakes around the San Andreas Fault, California, to validate its performance. The earthquake relocations demonstrate that the double-pair DD location method is able to better sharpen the images of seismicity, with smaller relative location uncertainties than the event-pair DD location method, and thus to reveal more fine-scale structures. Among the three DD location methods, the station-pair DD location method best improves absolute earthquake locations. For this reason, we further propose a new location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time, which is validated with both synthetic and real datasets.
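
    A minimal sketch of how double-pair differential times are assembled from a catalog of arrival times: for events i, j and stations k, l the datum is (t_i^k - t_j^k) - (t_i^l - t_j^l), which cancels both event origin times and station terms. The arrival times below are synthetic.

    ```python
    from itertools import combinations

    # arrival[event][station] = travel time + origin time + station term
    arrival = {
        "ev1": {"stA": 4.10, "stB": 6.30, "stC": 7.90},
        "ev2": {"stA": 4.25, "stB": 6.28, "stC": 8.05},
        "ev3": {"stA": 3.95, "stB": 6.40, "stC": 7.80},
    }

    double_pair = []
    for ei, ej in combinations(arrival, 2):
        common = set(arrival[ei]) & set(arrival[ej])
        for sk, sl in combinations(sorted(common), 2):
            # Event terms cancel within each parenthesis; station terms
            # cancel between the two parentheses.
            dt = ((arrival[ei][sk] - arrival[ej][sk])
                  - (arrival[ei][sl] - arrival[ej][sl]))
            double_pair.append(((ei, ej, sk, sl), dt))

    for key, dt in double_pair:
        print(key, "->", round(dt, 3))
    ```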

  14. Improving TCP throughput performance on high-speed networks with a receiver-side adaptive acknowledgment algorithm

    NASA Astrophysics Data System (ADS)

    Yeung, Wing-Keung; Chang, Rocky K. C.

    1998-12-01

    A drastic TCP performance degradation was reported when TCP is operated over ATM networks. This deadlock problem is 'caused' by the high speed provided by ATM networks; it is therefore shared by any high-speed networking technology on which TCP is run. The problems are caused by the interaction of the sender-side and receiver-side Silly Window Syndrome (SWS) avoidance algorithms, because the network's Maximum Segment Size (MSS) is no longer small when compared with the sender and receiver socket buffer sizes. Here we propose a new receiver-side adaptive acknowledgment algorithm (RSA3) to eliminate the deadlock problems while maintaining the SWS avoidance mechanisms. Unlike the current delayed acknowledgment strategy, the RSA3 does not rely on the exact values of the MSS and the receiver's buffer size to determine the acknowledgment threshold. Instead, the RSA3 periodically probes the sender to estimate the maximum amount of data that can be sent without receiving an acknowledgment from the receiver. The acknowledgment threshold is computed as 35 percent of this estimate. In this way, deadlock-free TCP transmission is guaranteed. Simulation studies have shown that the RSA3 even improves throughput performance in some non-deadlock regions, due to a quicker response by the RSA3 receiver. We have also evaluated different acknowledgment thresholds. It is found that a threshold of 35 percent gives the best performance when the sender and receiver buffer sizes are large.

  15. An algorithm for circular test and improved optical configuration by two-dimensional (2D) laser heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Tang, Shanzhi; Yu, Shengrui; Han, Qingfu; Li, Ming; Wang, Zhao

    2016-09-01

    The circular test is an important technique for assessing motion accuracy in many fields, especially for machine tools and coordinate measuring machines. Setup errors arise from direct centring of the measuring instrument, both for the contact double ball bar and for existing non-contact methods. To solve this problem, an algorithm for the circular test using function construction based on matrix operations is proposed, which is used not only to solve for the radial deviation (F) but also to obtain two other evaluation parameters, especially the circular hysteresis (H). Furthermore, an improved optical configuration with a single laser is presented based on a two-dimensional (2D) laser heterodyne interferometer. Compared with the existing non-contact method, it provides purer homogeneity of the laser sources for 2D displacement sensing in advanced metrology. The algorithm and modeling are both illustrated, and an error budget is also provided. Finally, to validate them, test experiments on motion paths are implemented on a gantry machining center. Comparative test results support the proposal.

  16. Improvements to the OMI Near-uv Aerosol Algorithm Using A-train CALIOP and AIRS Observations

    NASA Technical Reports Server (NTRS)

    Torres, O.; Ahn, C.; Zhong, C.

    2014-01-01

    The height of desert dust and carbonaceous aerosols layers and, to a lesser extent, the difficulty in assessing the predominant size mode of these absorbing aerosol types, are sources of uncertainty in the retrieval of aerosol properties from near UV satellite observations. The availability of independent, near-simultaneous measurements of aerosol layer height, and aerosol-type related parameters derived from observations by other A-train sensors, makes possible the direct use of these parameters as input to the OMI (Ozone Monitoring Instrument) near UV retrieval algorithm. A monthly climatology of aerosol layer height derived from observations by the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) sensor, and real-time AIRS (Atmospheric Infrared Sounder) CO observations are used in an upgraded version of the OMI near UV aerosol algorithm. AIRS CO measurements are used as a reliable tracer of carbonaceous aerosols, which allows the identification of smoke layers in areas and times of the year where the dust-smoke differentiation is difficult in the near-UV. The use of CO measurements also enables the identification of elevated levels of boundary layer pollution undetectable by near UV observations alone. In this paper we discuss the combined use of OMI, CALIOP and AIRS observations for the characterization of aerosol properties, and show a significant improvement in OMI aerosol retrieval capabilities.

  17. DNA hybridization detection with 100 zM sensitivity using piezoelectric plate sensors with an improved noise-reduction algorithm.

    PubMed

    Kirimli, Ceyhun E; Shih, Wei-Heng; Shih, Wan Y

    2014-06-01

    We have examined real-time, in situ hybridization detection of target DNA (tDNA) in a buffer solution and in urine using 8 μm-thick lead magnesium niobate-lead titanate (PMN-PT) piezoelectric plate sensors (PEPSs), about 1.1-1.2 mm long and 0.45 mm wide, with improved 3-mercaptopropyltrimethoxysilane (MPS) insulation and a new multiple-parabola (>50) resonance peak position fitting algorithm. With probe DNA (pDNA) immobilized on the PEPS surface and by monitoring the first width extension mode (WEM) resonance frequency shift, we detected tDNA in real time at concentrations as low as 1 × 10^-19 M (100 zM) in urine with a signal-to-noise ratio (SNR) of 13, without DNA isolation or amplification, at room temperature in 30 min. The present multiple-parabola fitting algorithm increased the detection SNR by about 10 times compared to that obtained using the raw data and by about 5 times compared to that obtained using single-parabola fitting. The detection was validated by in situ follow-up detection and subsequent visualization of fluorescent reporter microspheres (FRMs) coated with reporter DNA complementary to the tDNA but different from the probe pDNA. PMID:24759937
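
    A minimal sketch of multiple-parabola peak fitting: fit parabolas over many window sizes around the resonance maximum and average the implied vertex positions, which suppresses noise relative to a single fit. The Lorentzian peak, noise level, and windowing rule are illustrative assumptions about how the >50 parabolas might be chosen.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    f = np.linspace(990.0, 1010.0, 401)          # frequency axis (kHz)
    peak = 1000.3
    spectrum = (1 / (1 + ((f - peak) / 1.5) ** 2)
                + 0.02 * rng.standard_normal(f.size))

    center = np.argmax(spectrum)
    vertices = []
    for half in range(10, 60):                   # >50 window sizes
        s = slice(max(center - half, 0), min(center + half + 1, f.size))
        a, b, c = np.polyfit(f[s], spectrum[s], 2)   # y = a f^2 + b f + c
        if a < 0:                                # keep only concave fits
            vertices.append(-b / (2 * a))        # parabola vertex position

    estimate = np.mean(vertices)
    print(f"fitted peak: {estimate:.3f} kHz (true {peak} kHz), "
          f"n parabolas = {len(vertices)}")
    ```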

  18. 2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation

    DOE PAGES

    Warren, Michael S.

    2014-01-01

    We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2^18) processors. We present error analysis and scientific application results from a series of more than ten 69-billion-particle (4096^3) cosmological simulations, accounting for 4×10^20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.

  19. An improved membrane algorithm for solving time-consuming water quality retrieval

    NASA Astrophysics Data System (ADS)

    Zhong, Liang; Luo, Wenfei

    2011-11-01

    Retrieving water quality parameters from multispectral data using neural networks is increasingly popular; however, the training process with large numbers of samples and the calculations over large data volumes are time-consuming. Many emergency pollution events require quick responses in practice. In this paper, an improved membrane computing strategy is presented. This strategy is a hybrid one combining the framework and evolution rules of P systems with active membranes and neural networks, and it involves a dynamic structure including membrane fusion and division, which is helpful for enhancing information communication and reducing computation. A parallel implementation using the training result is then discussed. Experiments with Landsat datasets to retrieve suspended sediment are carried out to demonstrate the practical capabilities of the introduced strategy.

  20. Improving nonlinear performance of the HEPS baseline design with a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jiao, Yi

    2016-07-01

    A baseline design for the High Energy Photon Source has been proposed, with a natural emittance of 60 pm·rad within a circumference of about 1.3 kilometers. Nevertheless, the nonlinear performance of the design needs further improvements to increase both the dynamic aperture and the momentum acceptance. In this study, genetic optimization of the linear optics is performed, so as to find all the possible solutions with weaker sextupoles and hence weaker nonlinearities, while keeping the emittance at the same level as the baseline design. The solutions obtained enable us to explore the dependence of nonlinear dynamics on the working point. The result indicates that with the same layout, it is feasible to obtain much better nonlinear performance with a delicate tuning of the magnetic field strengths and a wise choice of the working point. Supported by NSFC (11475202, 11405187) and Youth Innovation Promotion Association CAS (2015009)

  1. Improvement of the Koradi parallel algorithm for molecular dynamics and application to the economic organization and optimization of recycling costs of waste electrical and electronic equipment

    NASA Astrophysics Data System (ADS)

    Cabria, I.; Queiruga, D.

    2005-09-01

    A parallel algorithm for molecular dynamics (MD), the Koradi point-centered decomposition algorithm, especially designed for inhomogeneous systems, is improved and applied to the organization and optimization of recycling costs of Waste Electrical and Electronic Equipment (WEEE), and also to systems of atoms. This organization requires the numbers and locations of storage centers and recycling plants for the WEEE that minimize the recycling cost. The Koradi algorithm finds these optimal numbers and locations, handling large amounts of data very quickly, in contrast with other methods. The changes to the original algorithm (different ways of generating the initial centers and, especially, the requirement of location convergence) improve its performance for this economic problem and also for MD simulations.

  2. Improved Field Emission Algorithms for Modeling Field Emission Devices Using a Conformal Finite-Difference Time-Domain Particle-in-Cell Method

    NASA Astrophysics Data System (ADS)

    Lin, M. C.; Loverich, J.; Stoltz, P. H.; Nieter, C.

    2013-10-01

    This work introduces a conformal finite-difference time-domain (CFDTD) particle-in-cell (PIC) method with an improved field emission algorithm to accurately and efficiently study field emission devices. The CFDTD method is based on the Dey-Mittra algorithm, or cut-cell algorithm, as implemented in the Vorpal code. For the field emission algorithm, we employ the elliptic function v(y) found by Forbes and a new fitting function t(y)^2 for the Fowler-Nordheim (FN) equation. With these improved correction factors, the field emission of electrons from a cathode surface is much closer to the prediction of the exact FN formula derived by Murphy and Good. This work was supported in part by the U.S. Department of Defense under Grant No. FA9451-07-C-0025 and the U.S. Department of Energy under Grant No. DE-SC0004436.
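
    A sketch of the kind of emission model this describes, assuming the standard Murphy-Good form of the FN equation with Forbes' simple approximations v(f) = 1 - f + (f/6) ln f and t(f) = 1 + f/9 - (f/18) ln f, where f is the scaled barrier-lowering parameter. The numeric coefficients and the exact correction functions used in the paper are assumptions to be checked against the original references.

    ```python
    import math

    A_FN = 1.541434e-6   # A eV / V^2 (first FN constant)
    B_FN = 6.830890e9    # eV^(-3/2) V / m (second FN constant)
    C_S  = 1.439964e-9   # eV^2 m / V (Schottky constant)

    def current_density(E, phi):
        """J in A/m^2 for local field E (V/m) and work function phi (eV)."""
        f = C_S * E / phi ** 2                        # barrier-lowering parameter
        v = 1.0 - f + (f / 6.0) * math.log(f)         # Forbes' v(f)
        t = 1.0 + f / 9.0 - (f / 18.0) * math.log(f)  # Forbes' t(f)
        return (A_FN / (t ** 2 * phi)) * E ** 2 \
            * math.exp(-B_FN * v * phi ** 1.5 / E)

    for E in (3e9, 5e9, 8e9):  # local fields in V/m
        print(f"E = {E:.1e} V/m -> J = {current_density(E, 4.5):.3e} A/m^2")
    ```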

  3. An improved equilibrium-kinetics speciation algorithm for redox reactions in variably saturated subsurface flow systems

    NASA Astrophysics Data System (ADS)

    Xu, Tianfu; Pruess, Karsten; Brimhall, George

    1999-07-01

    Reactive chemical transport occurs in a variety of geochemical environments, and over a broad range of space and time scales. The efficiency of the chemical speciation and water-rock-gas interaction calculations is important for modeling field-scale multidimensional reactive transport problems. An improved, efficient model, REACT, for simulating water-rock-gas interaction under equilibrium and kinetic conditions has been developed. In this model, equilibrium and kinetic reactions are solved simultaneously by Newton-Raphson iteration. The REACT speciation model was coupled with the multidimensional nonisothermal multiphase flow and mass transport code TOUGH2, resulting in the general purpose reactive chemical transport simulator TOUGHREACT. An application to supergene copper enrichment of a typical copper protore that includes the sulfide minerals pyrite (FeS2) and chalcopyrite (CuFeS2) is presented. The efficiency and convergence of the present model are demonstrated by this numerically difficult application, which involves very large variations in the concentrations of oxygen and of sulfide and sulfate species. TOUGHREACT provides a detailed description of water-rock-gas interactions during fully transient, multiphase, nonisothermal flow and transport in hydrologically and geochemically heterogeneous media. The code is helpful for assessment of acid mine drainage remediation, geothermal convection, waste disposal, contaminant transport, and water quality.

  4. Improving cardiomyocyte model fidelity and utility via dynamic electrophysiology protocols and optimization algorithms.

    PubMed

    Krogh-Madsen, Trine; Sobie, Eric A; Christini, David J

    2016-05-01

    Mathematical models of cardiac electrophysiology are instrumental in determining mechanisms of cardiac arrhythmias. However, the foundation of a realistic multiscale heart model is only as strong as the underlying cell model. While there have been myriad advances in the improvement of cellular-level models, the identification of model parameters, such as ion channel conductances and rate constants, remains a challenging problem. The primary limitations to this process include: (1) such parameters are usually estimated from data recorded using standard electrophysiology voltage-clamp protocols that have not been developed with model building in mind, and (2) model parameters are typically tuned manually to subjectively match a desired output. Over the last decade, methods aimed at overcoming these disadvantages have emerged. These approaches include the use of optimization or fitting tools for parameter estimation and incorporating more extensive data for output matching. Here, we review recent advances in parameter estimation for cardiomyocyte models, focusing on the use of more complex electrophysiology protocols and global search heuristics. We also discuss future applications of such parameter identification, including development of cell-specific and patient-specific mathematical models to investigate arrhythmia mechanisms and predict therapy strategies. PMID:26661516

  5. Introducing a framework to improve estimation of actual evapotranspiration using MODIS images with SEBAL algorithm

    NASA Astrophysics Data System (ADS)

    Mianabadi, Ameneh; Alizadeh, Amin; Sanaeinejad, Hossein; Ghahraman, Bijan; Davary, Kamran; Coenders-Gerrits, Miriam

    2015-04-01

    To obtain an accurate estimation of actual evapotranspiration, it is desirable to use daily MODIS images. However, under cloudy conditions it is difficult to obtain appropriate images, and interpreting all of them is time-consuming. Therefore, in this paper, we tried to choose the most appropriate images to improve the estimation of actual evapotranspiration. For this purpose, we introduce a framework for choosing the dates that produce the best estimation of actual evapotranspiration. On the other hand, finding the locations of the dry (hot pixel) and wet (cold pixel) endpoints of the evapotranspiration spectrum is very important. We dealt with this problem by employing a statistical procedure for automated selection of the cold and hot pixels. We also visually reviewed the locations of the hot and cold pixels using a land cover image to ensure that the most appropriate pixels had been selected. To integrate evapotranspiration over time, linear and spline interpolation techniques were applied. Also, based on the precipitation rates during the 5 days before the date of each image and the mean seasonal amount of evapotranspiration, we found a logarithmic equation that produces the best estimation of evapotranspiration during the given period. Results showed that the logarithmic equation could produce a more accurate estimation of evapotranspiration than linear interpolation.

  6. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series.

    SciTech Connect

    Quirk, Thomas, J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.

  7. An improved wind retrieval algorithm for the HY-2A scatterometer

    NASA Astrophysics Data System (ADS)

    Wang, Zhixiong; Zhao, Chaofang; Zou, Juhong; Xie, Xuetong; Zhang, Yi; Lin, Mingsen

    2015-09-01

    Since January 2012, the National Satellite Ocean Application Service has released operational wind products from the HY-2A scatterometer (HY2-SCAT), using the maximum-likelihood estimation (MLE) method with a median filter. However, the quality of the winds retrieved from HY2-SCAT depends on the cross-track location relative to the sub-satellite track, and poor azimuth separation in the nadir region yields particularly low-quality wind products there. To overcome such problems, the Royal Netherlands Meteorological Institute has proposed an improved scheme: a multiple solution scheme (MSS) combined with a two-dimensional variational analysis method (2DVAR). The present study used the MSS in combination with the 2DVAR technique to retrieve wind data from HY2-SCAT observations. The parameter of the empirical probability function, which indicates the probability of each ambiguous solution being the "true" wind, was estimated from HY2-SCAT data, and the 2DVAR method was used to remove the ambiguity in wind direction. A comparison between MSS and ECMWF winds showed larger deviations at both low wind speeds (below 4 m/s) and high wind speeds (above 17 m/s), whereas the wind direction exhibited lower bias and good stability, even at wind speeds greater than 24 m/s. The two HY2-SCAT wind data sets, retrieved by the standard MLE and MSS procedures, were compared with buoy observations. The RMS errors in wind speed and direction were 1.3 m/s and 17.4° for the MSS data, and 1.3 m/s and 24.0° for the MLE data, indicating that the MSS winds agreed better with the buoy data. Furthermore, the wind-field distributions for a case study of typhoon Soulik were compared, showing that MSS winds were spatially more consistent and meteorologically better balanced than MLE winds.
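
    As a toy illustration of the ambiguity problem that the MSS/2DVAR approach addresses, the sketch below inverts synthetic backscatter with a grid-search MLE. The harmonic geophysical model function (GMF) and its coefficients are invented stand-ins for operational GMFs, and the cost's noise normalization is simplified; operational retrievals keep only distinct local minima as ambiguities.

    ```python
    import numpy as np

    def gmf(speed, chi):
        """Hypothetical GMF: chi is the wind direction relative to the antenna azimuth."""
        b0 = 1e-3 * speed**1.5
        return b0 * (1.0 + 0.4*np.cos(chi) + 0.6*np.cos(2*chi))

    def mle_solutions(sigma0_meas, azimuths, n_keep=4):
        """Grid-search the MLE cost; return the n_keep best (cost, speed, dir) triples."""
        best = []
        for v in np.arange(1.0, 25.0, 0.5):
            for d in np.deg2rad(np.arange(0.0, 360.0, 2.5)):
                model = gmf(v, d - azimuths)
                cost = np.sum((sigma0_meas - model)**2 / model**2)  # simplified Kp
                best.append((cost, v, np.rad2deg(d)))
        return sorted(best)[:n_keep]

    azimuths = np.deg2rad(np.array([45.0, 135.0, 225.0]))  # three views of one cell
    measured = gmf(8.0, np.deg2rad(120.0) - azimuths)      # synthetic noise-free obs
    print(mle_solutions(measured, azimuths))
    ```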

  8. New algorithm for integration between wireless microwave sensor network and radar for improved rainfall measurement and mapping

    NASA Astrophysics Data System (ADS)

    Liberman, Y.; Samuels, R.; Alpert, P.; Messer, H.

    2014-10-01

    One of the main challenges for meteorological and hydrological modelling is accurate rainfall measurement and mapping across time and space. To date, the most effective methods for large-scale rainfall estimates are radar, satellites, and, more recently, received signal level (RSL) measurements derived from commercial microwave networks (CMNs). While these methods provide improved spatial resolution over traditional rain gauges, they have their limitations as well. For example, wireless CMNs, which consist of microwave links (ML), are dependent upon existing infrastructure and the MLs' arbitrary distribution in space. Radar, on the other hand, is known to have limited accuracy when estimating rainfall in urban regions, clutter areas, and distant locations. In this paper the pros and cons of the radar and ML methods are considered in order to develop a new algorithm for improving rainfall measurement and mapping, which is based on data fusion of the different sources. The integration is based on an optimal weighted average of the two data sets, taking into account location, number of links, rainfall intensity and time step. Our results indicate that, by using the proposed new method, we not only generate more accurate 2-D rainfall reconstructions, compared with actual rain intensities in space, but also extend the reconstructed maps to the maximum coverage area. By inspecting three significant rain events, we show that our method outperforms either CMNs or radar alone in rain-rate estimation, almost uniformly, both for instantaneous spatial measurements and for total accumulated rainfall. These new improved 2-D rainfall maps, as well as the accurate rainfall measurements over large areas at sub-hourly timescales, will allow for improved understanding, initialization, and calibration of hydrological and meteorological models mainly necessary for water resource management and planning.
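
    To make the integration concrete, here is a minimal variance-weighted average of two gridded rainfall estimates. The inverse-variance weights and the toy grids are assumptions; the paper derives its weights from location, link count, rain intensity, and time step rather than from fixed error variances.

    ```python
    import numpy as np

    def fuse(rain_radar, rain_links, var_radar, var_links):
        """Variance-weighted combination of two gridded rainfall estimates."""
        w_radar = 1.0 / var_radar
        w_links = 1.0 / var_links
        return (w_radar * rain_radar + w_links * rain_links) / (w_radar + w_links)

    radar = np.array([[2.0, 1.5], [0.0, 3.2]])   # mm/h on a toy 2x2 grid
    links = np.array([[2.6, 1.1], [0.4, 2.8]])
    var_r = np.array([[1.0, 1.0], [4.0, 1.0]])   # radar less reliable in one cell
    var_l = np.array([[0.5, 2.0], [0.5, 1.0]])
    print(fuse(radar, links, var_r, var_l))
    ```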

  9. An improved Bayesian tensor regularization and sampling algorithm to track neuronal fiber pathways in the language circuit

    PubMed Central

    Mishra, Arabinda; Anderson, Adam W.; Wu, Xi; Gore, John C.; Ding, Zhaohua

    2010-01-01

    ., “Improved fiber tractography with Bayesian tensor regularization,” Neuroimage 31(3), 1061–1074 (2006)] and Friman’s stochastic approach [O. Friman et al., “A Bayesian approach for stochastic white matter tractography,” IEEE Trans. Med. Imaging 25(8), 965–978 (2006)]. The overall performance of the approach was found to be superior to the above two methods, particularly when the signal-to-noise ratio was low. Conclusions: The authors observed that adaptive sampling of the tensor element vectors, estimated as a function of the variance in a Bayesian framework, can effectively delineate neuronal fibers for analyzing the structure-function relationship in the human brain. The simulated and in vivo results are in good agreement with the theoretical aspects of the algorithm. PMID:20879588
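
    In the spirit of the sampling idea described above (not the authors' exact formulation), here is a toy probabilistic streamline: each step perturbs the local tensor's principal direction with noise scaled by an uncertainty term, so noisier regions produce wider sampling. The tensor field, noise level, and step size are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def principal_dir(tensor):
        vals, vecs = np.linalg.eigh(tensor)        # symmetric 3x3 diffusion tensor
        return vecs[:, np.argmax(vals)]

    def track(seed, tensor_at, sigma_at, n_steps=50, step=0.5):
        """Propagate one probabilistic streamline from a seed point."""
        pos, path, prev = np.asarray(seed, float), [np.asarray(seed, float)], None
        for _ in range(n_steps):
            d = principal_dir(tensor_at(pos)) + sigma_at(pos) * rng.normal(size=3)
            d /= np.linalg.norm(d)
            if prev is not None and np.dot(d, prev) < 0:
                d = -d                             # keep a consistent orientation
            pos = pos + step * d
            path.append(pos)
            prev = d
        return np.array(path)

    # Uniform toy field: anisotropic tensor aligned with x, fixed uncertainty.
    tensor = lambda p: np.diag([1.0, 0.2, 0.2])
    sigma = lambda p: 0.1
    print(track([0.0, 0.0, 0.0], tensor, sigma)[-1])
    ```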

  10. Evaluation of improvement of diffuse optical imaging of brain function by high-density probe arrangements and imaging algorithms

    NASA Astrophysics Data System (ADS)

    Sakakibara, Yusuke; Kurihara, Kazuki; Okada, Eiji

    2016-04-01

    Diffuse optical imaging has been applied to measure the localized hemodynamic responses to brain activation. One of the serious problems with diffuse optical imaging is the limitation of the spatial resolution caused by the sparse probe arrangement and the broadened spatial sensitivity profile of each probe pair. High-density probe arrangements and an image reconstruction algorithm that accounts for the broadening of the spatial sensitivity can improve the spatial resolution of the image. In this study, diffuse optical imaging of an absorption change in the brain is simulated to evaluate the effect of high-density probe arrangements and imaging methods. The localization error, equivalent full-width at half maximum, and circularity of the absorption change in the images obtained by the mapping and reconstruction methods, from data measured with five probe arrangements, are compared to quantitatively evaluate the imaging methods and probe arrangements. The simple mapping method is sufficient for densities of measurement points up to the double-density probe arrangement. The image reconstruction method that accounts for the broadening of the spatial sensitivity of the probe pairs can effectively improve the spatial resolution of the image for probe arrangements of quadruple density or higher, in which the distance between neighboring measurement points is 10.6 mm.
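
    A minimal sketch of the reconstruction idea: treating each probe pair's broadened spatial sensitivity profile as a row of a matrix A, an absorption-change image can be recovered from measurements y by Tikhonov-regularized least squares. The matrix, noise, and regularization weight below are illustrative, not the study's forward model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_pairs, n_voxels = 24, 100
    A = np.abs(rng.normal(size=(n_pairs, n_voxels)))   # stand-in sensitivity profiles
    x_true = np.zeros(n_voxels); x_true[40:45] = 1.0   # localized absorption change
    y = A @ x_true + 0.01 * rng.normal(size=n_pairs)   # noisy probe-pair signals

    lam = 1.0   # regularization weight
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ y)
    print("peak of reconstruction at voxel", int(np.argmax(x_hat)))
    ```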

  11. Demonstration of on Sky Contrast Improvement Using the Modified Gerchberg-Saxton Algorithm at the Palomar Observatory

    NASA Technical Reports Server (NTRS)

    Burruss, Rick S.; Serabyn, Eugene; Mawet, Dimitri P.; Roberts, Jennifer E.; Hickey, Jeffrey P.; Rykoski, Kevin; Bikkannavar, Siddarayappa; Crepp, Justin R.

    2010-01-01

    We have successfully demonstrated significant improvements in the high contrast detection limit of the Well-Corrected Subaperture (WCS) using the Autonomous Phase Retrieval Calibration (APRC) software package developed at the Jet Propulsion Laboratory (JPL) for the Palomar adaptive optics instrument (PALAO). APRC utilizes the Modified Gerchberg-Saxton (MGS) wavefront sensing algorithm, also developed at JPL. The WCS delivers such excellent correction of the atmosphere that non-common path (NCP) wavefront errors, not sensed by PALAO but present at the coronagraphic image plane, begin to factor heavily as a limit to contrast. We have implemented the APRC program to reduce these NCP wavefront errors from 110 nm to 35 nm (rms) in the lab, and we have extended these exceptional results to targets on the sky for the first time, leading to a significant suppression of speckle noise. Consequently we now report a contrast level of very nearly 1×10^-4 at separations of 2λ/D before post-processing. We describe here the major components of our instrument, the work done to improve the NCP wavefront errors, and the ensuing excellent on-sky results, including the detection of the three exoplanets orbiting the star HR 8799.
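
    For reference, a bare-bones Gerchberg-Saxton loop is shown below: it alternates between the pupil and focal planes, imposing the known amplitude in each and keeping the retrieved phase. The MGS algorithm used here additionally exploits phase-diversity (e.g., defocused) measurements, which this single-image sketch omits, so stagnation is possible; all arrays are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 64
    pupil_amp = np.ones((n, n))                       # known pupil amplitude
    true_phase = 0.3 * rng.normal(size=(n, n))        # unknown aberration
    focal_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))

    phase = np.zeros((n, n))
    for _ in range(200):
        field = pupil_amp * np.exp(1j * phase)            # impose pupil constraint
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))  # impose focal amplitude
        phase = np.angle(np.fft.ifft2(focal))             # keep retrieved phase

    print("residual phase rms:", np.std(phase - true_phase))
    ```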

  12. Does Pluto have a substantial atmosphere?

    SciTech Connect

    Trafton, L.

    1980-01-01

    The presence of CH4 ice on Pluto implies that Pluto may have a substantial atmosphere consisting of heavy gases. Without such an atmosphere, sublimation of the CH4 ice would be so rapid on a cosmogonic time scale that either such an atmosphere would soon develop through the exposure of gases trapped in the CH4 ice, or else the surface CH4 ice would soon all sublimate away as other, more stable, ices became exposed. If such stable ices were present from the beginning, the existence of CH4 frosts would also imply that Pluto's present atmosphere contains a remnant of its primordial atmosphere.

  13. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  14. Improved Power System Stability Using Backtracking Search Algorithm for Coordination Design of PSS and TCSC Damping Controller.

    PubMed

    Niamul Islam, Naz; Hannan, M A; Mohamed, Azah; Shareef, Hussain

    2016-01-01

    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping of oscillations is presented using the backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted in a linear time-invariant (LTI) model of a power system. This includes formulating the design as a multi-objective function based on the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using the different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation. PMID:26745265
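
    The backtracking search algorithm is compact enough to sketch. Below is a simplified rendition of its main loop (historical population, mutation, crossover, greedy selection) with a toy sphere objective standing in for the paper's eigenvalue-based damping objective; the mix-rate handling and scale-factor details follow common descriptions of BSA and are not taken from this paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def bsa(obj, lo, hi, pop_size=30, iters=200, mix_rate=1.0):
        dim = lo.size
        P = rng.uniform(lo, hi, (pop_size, dim))        # current population
        oldP = rng.uniform(lo, hi, (pop_size, dim))     # historical population
        fit = np.apply_along_axis(obj, 1, P)
        for _ in range(iters):
            if rng.random() < rng.random():             # Selection-I: refresh history
                oldP = P.copy()
            rng.shuffle(oldP)
            F = 3.0 * rng.normal()                      # scale factor
            M = P + F * (oldP - P)                      # mutation
            T = M.copy()                                # crossover: revert some dims
            for i in range(pop_size):
                keep = rng.random(dim) > mix_rate * rng.random()
                T[i, keep] = P[i, keep]
            T = np.clip(T, lo, hi)
            new_fit = np.apply_along_axis(obj, 1, T)    # Selection-II: greedy
            better = new_fit < fit
            P[better], fit[better] = T[better], new_fit[better]
        return P[np.argmin(fit)], fit.min()

    # Toy stand-in objective: the paper's objective is built from closed-loop
    # eigenvalues (damping ratios) of the LTI model; here we minimize a sphere.
    best_x, best_f = bsa(lambda x: np.sum(x**2), np.full(5, -10.0), np.full(5, 10.0))
    print(best_x, best_f)
    ```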

  15. Improved Power System Stability Using Backtracking Search Algorithm for Coordination Design of PSS and TCSC Damping Controller

    PubMed Central

    Niamul Islam, Naz; Hannan, M. A.; Mohamed, Azah; Shareef, Hussain

    2016-01-01

    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping of oscillations is presented using the backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted in a linear time-invariant (LTI) model of a power system. This includes formulating the design as a multi-objective function based on the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using the different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation. PMID:26745265

  16. [An Improved Empirical Mode Decomposition Algorithm for Phonocardiogram Signal De-noising and Its Application in S1/S2 Extraction].

    PubMed

    Gong, Jing; Nie, Shengdong; Wang, Yuanjun

    2015-10-01

    In this paper, an improved empirical mode decomposition (EMD) algorithm for phonocardiogram (PCG) signal de-noising is proposed. Based on PCG signal processing theory, the S1/S2 components can be extracted by combining the improved EMD-wavelet algorithm with a Shannon energy envelope algorithm. First, the PCG signal was filtered by applying the EMD-wavelet algorithm for pre-processing, and the filtered signal was saved for the subsequent processing steps. Second, the time-domain features, frequency-domain features, and energy envelope of each intrinsic mode function (IMF) were computed. Based on the time-frequency features of the PCG's IMF components extracted by the EMD algorithm and the energy envelope of the PCG, the S1/S2 components were pinpointed accurately. Meanwhile, a correction method based on time-domain processing was proposed to amend the detection results. Finally, a series of experiments with thirty samples was conducted to validate the effectiveness of the new method. The results revealed that the accuracy in recognizing the S1/S2 components was as high as 99.75%; compared with the traditional algorithm, the detection accuracy was increased by 5.56%. These detection results show that the algorithm described in this paper is effective and accurate. This work will be utilized in further studies on identity recognition.
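
    As a sketch of the energy-envelope stage, the snippet below computes the Shannon energy envelope commonly used to localize S1/S2 and picks its peaks on a synthetic two-burst signal. The window length, sampling rate, and toy signal are assumptions, and the EMD-wavelet pre-processing is omitted.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def shannon_envelope(x, win=50):
        x = x / (np.max(np.abs(x)) + 1e-12)          # normalize to [-1, 1]
        e = -x**2 * np.log(x**2 + 1e-12)             # Shannon energy per sample
        return np.convolve(e, np.ones(win) / win, mode="same")  # smoothed envelope

    fs = 2000                                        # Hz, assumed sampling rate
    t = np.arange(0, 1.0, 1 / fs)
    pcg = (np.exp(-((t - 0.20) / 0.020)**2) * np.sin(2 * np.pi * 60 * t)
         + np.exp(-((t - 0.55) / 0.015)**2) * np.sin(2 * np.pi * 90 * t))  # toy S1, S2

    env = shannon_envelope(pcg)
    peaks, _ = find_peaks(env, height=0.3 * env.max(), distance=fs // 5)
    print("candidate S1/S2 times (s):", peaks / fs)
    ```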

  17. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model

    PubMed Central

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron-ion interaction potential (EIIP) method. Then, by combining a modified anti-notch filter with a linear predictive coding model, we propose an efficient algorithm that improves the performance of the Goertzel algorithm in estimating genic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared with other methods based on nucleotide-level evaluation criteria. Results demonstrate that our proposed method reduces the number of nucleotides incorrectly assigned to the noncoding region. In addition, the area under the receiver operating characteristic curve improved by factors of 1.35 and 1.12 in the HMR195 and BG570 datasets, respectively, in comparison with the conventional Goertzel algorithm. PMID:27563569
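
    The core of the Goertzel algorithm is a three-term recursion that evaluates a single DFT bin, which is why it suits period-3 analysis of DNA: only the bin at k = N/3 is needed. The sketch below maps a toy sequence to numbers with the standard EIIP values and computes that bin's power; the window and sequence are illustrative, and the paper's anti-notch filtering and LPC stages are not shown.

    ```python
    import math

    EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

    def goertzel_power(x, k):
        """Power of DFT bin k of sequence x via the Goertzel recursion."""
        n = len(x)
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for sample in x:
            s = sample + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

    seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"   # toy 36-base DNA window
    x = [EIIP[b] for b in seq]
    print("period-3 power:", goertzel_power(x, len(x) // 3))
    ```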

  18. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model.

    PubMed

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron-ion interaction potential (EIIP) method. Then, by combining a modified anti-notch filter with a linear predictive coding model, we propose an efficient algorithm that improves the performance of the Goertzel algorithm in estimating genic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared with other methods based on nucleotide-level evaluation criteria. Results demonstrate that our proposed method reduces the number of nucleotides incorrectly assigned to the noncoding region. In addition, the area under the receiver operating characteristic curve improved by factors of 1.35 and 1.12 in the HMR195 and BG570 datasets, respectively, in comparison with the conventional Goertzel algorithm. PMID:27563569

  19. Coupling the NASA-CASA ecosystem model with a hydrologic routing algorithm for improved water management in Yosemite National Park

    NASA Astrophysics Data System (ADS)

    Teaby, A.; Johnson, E. R.; Griffin, M.; Carrillo, C.; Kannan, T.; Shupe, J. W.; Schmidt, C.

    2013-12-01

    Historic trends reveal extreme precipitation variability within the Yosemite National Park (YNP) geographic region. While California obtains more than half of its annual water supply from the Sierra Nevada, snowpack, precipitation, and runoff can fluctuate between less than 50% and greater than 200% of climatological averages. Advances in hydrological modeling are crucial to improving water-use efficiency at the local, state, and national levels. The NASA Carnegie Ames Stanford Approach (CASA) is a global simulation model that combines multi-year satellite, climate, and other land surface databases to estimate biosphere-atmosphere exchange of energy, water, and trace gases from plants and soils. By coupling CASA with a hydrological routing algorithm known as HYDRA, it is possible to calculate current water availability and observe hydrological trends within YNP. Satellite-derived inputs such as surface evapotranspiration, temperature, precipitation, land cover, and elevation were included to create a valuable decision support tool for YNP's water resource managers. These results will be of particular importance given current efforts to restore 81 miles of the Merced River within the park's boundary. Model results were validated using in situ stream gage measurements. The model accurately simulated observed streamflow values, achieving a relatively strong Nash-Sutcliffe model efficiency coefficient. This geospatial assessment provides a standardized method that may be repeated in water-stressed regions both nationally and internationally.
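
    The validation statistic named above is simple to compute: the Nash-Sutcliffe efficiency is NSE = 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2), where 1 indicates a perfect fit and values near 0 mean the model does no better than the observed mean. The streamflow numbers below are invented for illustration.

    ```python
    import numpy as np

    def nse(observed, simulated):
        """Nash-Sutcliffe model efficiency coefficient."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated)**2) / np.sum((observed - observed.mean())**2)

    obs = [12.0, 30.5, 55.1, 40.2, 18.7]   # gauged streamflow (m^3/s), hypothetical
    sim = [10.8, 33.0, 50.6, 42.9, 20.1]   # simulated streamflow, hypothetical
    print(f"NSE = {nse(obs, sim):.3f}")
    ```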

  20. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    PubMed

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-01-01

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, while taking camera scaling as well as conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched, and false matches were removed by the modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated error across a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitudes of the video became increasingly large. PMID:27070603
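
    A hedged sketch of the estimation pipeline follows: feature matching plus RANSAC yields per-frame similarity transforms, and the accumulated trajectory is then smoothed. ORB stands in for SURF here (SURF lives in opencv-contrib and is patent-encumbered), and a first-order recursive smoother stands in for the paper's Kalman filter model; function names and parameters are illustrative.

    ```python
    import cv2
    import numpy as np

    def frame_motion(prev_gray, gray):
        """Similarity motion (dx, dy, rotation, scale) between consecutive frames."""
        orb = cv2.ORB_create(1000)                       # ORB stands in for SURF
        k1, d1 = orb.detectAndCompute(prev_gray, None)
        k2, d2 = orb.detectAndCompute(gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        # RANSAC rejects false matches before the transform is fitted
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        dx, dy = M[0, 2], M[1, 2]
        dtheta = np.arctan2(M[1, 0], M[0, 0])
        scale = np.hypot(M[0, 0], M[1, 0])
        return np.array([dx, dy, dtheta, scale])

    def smooth_trajectory(motions, alpha=0.9):
        """First-order recursive smoother standing in for the Kalman filter.
        Scale is treated additively here purely for simplicity."""
        state, out = np.zeros(4), []
        for z in np.cumsum(motions, axis=0):             # accumulated camera path
            state = alpha * state + (1 - alpha) * z
            out.append(state.copy())
        return np.array(out)

    # Usage: run frame_motion over consecutive grayscale frames from
    # cv2.VideoCapture, smooth the accumulated path, then warp each frame by
    # the difference between raw and smoothed trajectories (cv2.warpAffine).
    ```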