Sample records for stable on-line algorithm

  1. Advanced scatter search approach and its application in a sequencing problem of mixed-model assembly lines in a case company

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing

    2014-11-01

    Mixed-model assembly line sequencing is important for reducing production time and overall production cost. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.
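
The diversification-generation idea in the record above can be illustrated with a small genetic-algorithm-style breeder for permutation-encoded sequences. This is a hedged sketch only: the operators (order crossover plus a swap mutation) and all names are illustrative assumptions, not the authors' actual method.

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX) for permutation-encoded sequences: copy a
    random slice from p1, then fill the remaining slots in p2's order."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    kept = set(p1[i:j])
    fill = iter(g for g in p2 if g not in kept)
    for idx in range(n):
        if child[idx] is None:
            child[idx] = next(fill)
    return child

def diversify(seeds, pool_size, rng=None):
    """Sketch of GA-based diversification generation (hypothetical
    operators, not the paper's): breed new permutations from random
    parents by order crossover plus a swap mutation until the pool of
    candidate initial solutions reaches the requested size."""
    rng = rng or random.Random(0)
    pool = [list(s) for s in seeds]
    while len(pool) < pool_size:
        a, b = rng.sample(pool, 2)
        child = order_crossover(a, b, rng)
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]  # swap mutation
        pool.append(child)
    return pool
```

Every bred candidate remains a valid permutation, so the pool can be handed directly to the reference-set update step of a scatter search.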

  2. A Fast Hermite Transform

    PubMed Central

    Leibon, Gregory; Rockmore, Daniel N.; Park, Wooram; Taintor, Robert; Chirikjian, Gregory S.

    2008-01-01

    We present algorithms for fast and stable approximation of the Hermite transform of a compactly supported function on the real line, attainable via an application of a fast algebraic algorithm for computing sums associated with a three-term relation. Trade-offs between approximation in bandlimit (in the Hermite sense) and size of the support region are addressed. Numerical experiments are presented that show the feasibility and utility of our approach. Generalizations to any family of orthogonal polynomials are outlined. Applications to various problems in tomographic reconstruction, including the determination of protein structure, are discussed. PMID:20027202
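
The three-term relation mentioned above is the workhorse of any Hermite expansion. A minimal sketch, assuming physicists' Hermite polynomials and a naive O(N²) Gauss-Hermite quadrature rather than the authors' fast algebraic algorithm:

```python
import math
import numpy as np

def hermite_poly_values(n_max, x):
    """Evaluate physicists' Hermite polynomials H_0..H_{n_max} at points x
    via the stable three-term recurrence H_{n+1} = 2x H_n - 2n H_{n-1}."""
    x = np.asarray(x, dtype=float)
    H = np.zeros((n_max + 1, x.size))
    H[0] = 1.0
    if n_max >= 1:
        H[1] = 2.0 * x
    for n in range(1, n_max):
        H[n + 1] = 2.0 * x * H[n] - 2.0 * n * H[n - 1]
    return H

def naive_hermite_transform(f, n_max):
    """Naive O(N^2) Hermite coefficients of f via Gauss-Hermite quadrature:
    c_n = (1 / (2^n n! sqrt(pi))) * integral f(x) H_n(x) exp(-x^2) dx."""
    nodes, weights = np.polynomial.hermite.hermgauss(2 * (n_max + 1))
    H = hermite_poly_values(n_max, nodes)
    fx = f(nodes)
    norms = np.array([2.0**n * math.factorial(n) * math.sqrt(math.pi)
                      for n in range(n_max + 1)])
    return (H * fx * weights).sum(axis=1) / norms
```

For f = H_2 the transform recovers a single unit coefficient at index 2, since the quadrature is exact for the polynomial degrees involved.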

  3. Adaptive piezoelectric sensoriactuator

    NASA Technical Reports Server (NTRS)

    Clark, Jr., Robert L. (Inventor); Vipperman, Jeffrey S. (Inventor); Cole, Daniel G. (Inventor)

    1996-01-01

    An adaptive algorithm implemented in digital or analog form is used in conjunction with a voltage controlled amplifier to compensate for the feedthrough capacitance of a piezoelectric sensoriactuator. The mechanical response of the piezoelectric sensoriactuator is resolved from the electrical response by adaptively altering the gain imposed on the electrical circuit used for compensation. For wideband, stochastic input disturbances, the feedthrough capacitance of the sensoriactuator can be identified on-line, providing a means of implementing direct-rate-feedback control in analog hardware. The device is capable of on-line system health monitoring since a quasi-stable dynamic capacitance is indicative of sustained health of the piezoelectric element.

  4. Full cycle rapid scan EPR deconvolution algorithm.

    PubMed

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan.
The full-cycle algorithm permits approaching the upper theoretical scan frequency limit: the transient spin system response need only decay within the full scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method. For this reason, only a factor-of-two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan. Copyright © 2017 Elsevier Inc. All rights reserved.
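
The additive property the abstract relies on is easy to demonstrate on a toy discrete linear system. The convolution kernel and excitations below are arbitrary illustrations, not an EPR model: the response to the sum of two inputs equals the sum of the individual responses.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(16)       # impulse response of a toy linear system
up = rng.standard_normal(128)     # hypothetical "up-field" excitation
down = rng.standard_normal(128)   # hypothetical "down-field" excitation

# Linearity: convolving the summed excitation gives the same result as
# summing the two individual responses. This is what lets the full-cycle
# signal be modeled as two independent pulsed field-modulated experiments.
full = np.convolve(up + down, h)
split = np.convolve(up, h) + np.convolve(down, h)
assert np.allclose(full, split)
```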

  5. Full cycle rapid scan EPR deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan.
The full-cycle algorithm permits approaching the upper theoretical scan frequency limit: the transient spin system response need only decay within the full scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method. For this reason, only a factor-of-two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. Importantly for practical use, faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan.

  6. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid

    PubMed Central

    Byambasuren, Bat-erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-01-01

    Smart sensing and power line tracking are very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot, so accurate methods for detecting illegal electricity usage are needed, and stable, accurate power line tracking is a prominent issue. In order to track correctly and make accurate measurements, the swing path of a power line should first be fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results. PMID:26907274
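
The parabolic and circle fits used for swing-path prediction can be sketched with ordinary least squares. A minimal illustration only: the sample points and the algebraic (Kasa) circle fit are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def fit_parabola(x, y):
    """Least-squares parabola y = a x^2 + b x + c through sampled
    positions of the swinging line; returns (a, b, c)."""
    return np.polyfit(x, y, 2)

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit: solve linearly for center (cx, cy)
    and radius r from x^2 + y^2 = 2 cx x + 2 cy y + (r^2 - cx^2 - cy^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(d + cx**2 + cy**2)
    return cx, cy, r
```

Either fitted curve gives the robot a predicted line position at the next measurement instant.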

  7. Inspection Robot Based Mobile Sensing and Power Line Tracking for Smart Grid.

    PubMed

    Byambasuren, Bat-Erdene; Kim, Donghan; Oyun-Erdene, Mandakh; Bold, Chinguun; Yura, Jargalbaatar

    2016-02-19

    Smart sensing and power line tracking is very important in a smart grid system. Illegal electricity usage can be detected by remote current measurement on overhead power lines using an inspection robot. There is a need for accurate detection methods of illegal electricity usage. Stable and correct power line tracking is a very prominent issue. In order to correctly track and make accurate measurements, the swing path of a power line should be previously fitted and predicted by a mathematical function using an inspection robot. After this, the remote inspection robot can follow the power line and measure the current. This paper presents a new power line tracking method using parabolic and circle fitting algorithms for illegal electricity detection. We demonstrate the effectiveness of the proposed tracking method by simulation and experimental results.

  8. Direct Adaptive Rejection of Vortex-Induced Disturbances for a Powered SPAR Platform

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen S.; Balas, Mark J.; VanZwieten, James H.; Driscoll, Frederick R.

    2009-01-01

    The Rapidly Deployable Stable Platform (RDSP) is a novel vessel designed to be a reconfigurable, stable at-sea platform. It consists of a detachable catamaran and spar, performing missions with the spar extending vertically below the catamaran and hoisting it completely out of the water. Multiple thrusters located along the spar allow it to be actively controlled in this configuration. A controller is presented in this work that uses an adaptive feedback algorithm in conjunction with Direct Adaptive Disturbance Rejection (DADR) to mitigate persistent, vortex-induced disturbances. Given the frequency of a disturbance, the nominal DADR scheme adaptively compensates for its unknown amplitude and phase. This algorithm is extended to adapt to a disturbance frequency that is only coarsely known by including a Phase Locked Loop (PLL). The PLL improves the frequency estimate on-line, allowing the modified controller to reduce vortex-induced motions by more than 95% using achievable thrust inputs.
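
The nominal DADR idea of adapting the unknown amplitude and phase of a disturbance at a known frequency can be illustrated with an LMS-style update on sine/cosine basis signals. This toy sketch omits the plant dynamics and the PLL frequency tracking described above; all numeric values are illustrative.

```python
import numpy as np

fs, f0, mu = 100.0, 2.0, 0.05       # sample rate, disturbance freq, step size
t = np.arange(0, 60, 1 / fs)
disturbance = 1.7 * np.sin(2 * np.pi * f0 * t + 0.8)   # unknown amp/phase

w = np.zeros(2)                      # weights on the sin/cos components
residual = np.empty_like(t)
for k, tk in enumerate(t):
    basis = np.array([np.sin(2 * np.pi * f0 * tk),
                      np.cos(2 * np.pi * f0 * tk)])
    cancel = w @ basis               # current estimate of the disturbance
    e = disturbance[k] - cancel      # residual motion after cancellation
    w += mu * e * basis              # adapt amplitude/phase via the weights
    residual[k] = e
# After adaptation the residual is far smaller than the disturbance.
```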

  9. Communication-avoiding symmetric-indefinite factorization

    DOE PAGES

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James; ...

    2014-11-13

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTL^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  10. Communication-avoiding symmetric-indefinite factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTL^T P^T, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  11. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross-platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software, BioImage Suite (bioimagesuite.org). PMID:21249532

  12. Integration of On-Line and Off-Line Diagnostic Algorithms for Aircraft Engine Health Management

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2007-01-01

    This paper investigates the integration of on-line and off-line diagnostic algorithms for aircraft gas turbine engines. The on-line diagnostic algorithm is designed for in-flight fault detection. It continuously monitors engine outputs for anomalous signatures induced by faults. The off-line diagnostic algorithm is designed to track engine health degradation over the lifetime of an engine. It estimates engine health degradation periodically over the course of the engine's life. The estimate generated by the off-line algorithm is used to update the on-line algorithm. Through this integration, the on-line algorithm becomes aware of engine health degradation, and its effectiveness to detect faults can be maintained while the engine continues to degrade. The benefit of this integration is investigated in a simulation environment using a nonlinear engine model.

  13. Adaptive fuzzy leader clustering of complex data sets in pattern recognition

    NASA Technical Reports Server (NTRS)

    Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda

    1992-01-01

    A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means system equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
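
The fuzzy C-means equations referenced above for the centroids and membership values can be written compactly. A minimal sketch of one standard FCM iteration, not the full AFLC architecture:

```python
import numpy as np

def fcm_step(X, C, m=2.0):
    """One fuzzy C-means iteration: X is (n, d) data, C is (k, d)
    centroids; returns updated memberships U (n, k) and centroids."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # squared distances
    d2 = np.maximum(d2, 1e-12)                           # avoid divide-by-zero
    # Membership of point i in cluster j: inverse-distance weighting.
    inv = d2 ** (-1.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)
    # Centroids: membership-weighted means with fuzzifier exponent m.
    Um = U ** m
    C_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, C_new
```

Iterating this update relocates the centroid positions exactly as the abstract describes.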

  14. Careful Selection of Reference Genes Is Required for Reliable Performance of RT-qPCR in Human Normal and Cancer Cell Lines

    PubMed Central

    Jacob, Francis; Guertler, Rea; Naim, Stephanie; Nixdorf, Sheri; Fedier, André; Hacker, Neville F.; Heinzelmann-Schwarz, Viola

    2013-01-01

    Reverse Transcription - quantitative Polymerase Chain Reaction (RT-qPCR) is a standard technique in most laboratories. The selection of suitable reference genes is essential for data normalization and remains critical. Our aim was to 1) review the literature since implementation of the MIQE guidelines in order to identify the degree of acceptance; 2) compare various algorithms for ranking expression stability; 3) identify a set of suitable and most reliable reference genes for a variety of human cancer cell lines. A PubMed database review was performed and publications since 2009 were selected. Twelve putative reference genes were profiled in normal and various cancer cell lines (n = 25) using 2-step RT-qPCR. Investigated reference genes were ranked according to their expression stability by five algorithms (geNorm, Normfinder, BestKeeper, comparative ΔCt, and RefFinder). Our review revealed 37 publications, with two-thirds based on patient samples and one-third on cell lines. qPCR efficiency was given in 68.4% of all publications, but only 28.9% of all studies provided RNA/cDNA amounts and standard curves. The geNorm and Normfinder algorithms were used in combination in 60.5% of studies. In our selection of 25 cancer cell lines, we identified HSPCB, RRN18S, and RPS13 as the most stably expressed reference genes. In the subset of ovarian cancer cell lines, the reference genes were PPIA, RPS13 and SDHA, clearly demonstrating the necessity to select genes depending on the research focus. Moreover, a cohort of at least three suitable reference genes needs to be established in advance of the experiments, according to the guidelines. For establishing a set of reference genes for gene normalization we recommend the use of ideally three reference genes selected by at least three stability algorithms. The unfortunate lack of compliance with the MIQE guidelines reflects that these need to be further established in the research community. PMID:23554992

  15. A parallel algorithm for viewshed analysis in three-dimensional Digital Earth

    NASA Astrophysics Data System (ADS)

    Feng, Wang; Gang, Wang; Deji, Pan; Yuan, Liu; Liuzhong, Yang; Hongbo, Wang

    2015-02-01

    Viewshed analysis, often supported by geographic information systems, is widely used in the three-dimensional (3D) Digital Earth system. Many of the analyses involve the siting of features and real-time decision-making. Viewshed analysis is usually performed at a large scale, which poses substantial computational challenges as geographic datasets continue to become increasingly large. Previous research on viewshed analysis has been generally limited to a single data structure (i.e., DEM), which cannot be used to analyze viewsheds in complicated scenes. In this paper, a real-time algorithm for viewshed analysis in Digital Earth is presented using the parallel computing of graphics processing units (GPUs). An occlusion for each geometric entity in the neighbor space of the viewshed point is generated according to line-of-sight. The region within the occlusion is marked by a stencil buffer within the programmable 3D visualization pipeline, and the marked region is concurrently drawn in red. In contrast to traditional algorithms based on line-of-sight, the new algorithm, in which the viewshed calculation is integrated with the rendering module, is more efficient and stable. This proposed method of viewshed generation is closer to the reality of the virtual geographic environment. No DEM interpolation, which is seen as a computational burden, is needed. The algorithm was implemented in a 3D Digital Earth system (GeoBeans3D) with the DirectX application programming interface (API) and has been widely used in a range of applications.
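
For contrast, the traditional DEM line-of-sight test that the GPU stencil approach replaces can be sketched on a height grid (a baseline illustration, not the paper's algorithm):

```python
import numpy as np

def visible(height, viewer, target, eye=1.6):
    """Traditional DEM line-of-sight test: sample grid cells along the
    ray from viewer to target and check that no terrain rises above the
    straight sight line between the (elevated) eye and the target."""
    (r0, c0), (r1, c1) = viewer, target
    n = max(abs(r1 - r0), abs(c1 - c0))
    if n == 0:
        return True
    h0 = height[r0, c0] + eye
    h1 = height[r1, c1]
    for i in range(1, n):
        t = i / n
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        sight = h0 + t * (h1 - h0)      # height of the sight line here
        if height[r, c] > sight:
            return False                # terrain blocks the view
    return True
```

Running this test from one viewpoint against every grid cell yields the classic raster viewshed, at the computational cost the paper seeks to avoid.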

  16. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
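
A plain recursive least-squares identifier conveys the flavor of the on-line parameter estimation described above; the minimum-variance treatment of multiplicative noise is omitted in this sketch.

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, delta=100.0):
    """Recursive least-squares parameter identifier: phi is (n, p)
    regressors, y is (n,) measurements, lam is a forgetting factor,
    delta initializes the parameter-error covariance P."""
    p = phi.shape[1]
    theta = np.zeros(p)
    P = delta * np.eye(p)            # covariance of the parameter error
    for x, yk in zip(phi, y):
        Px = P @ x
        k = Px / (lam + x @ Px)      # gain vector
        theta = theta + k * (yk - x @ theta)
        P = (P - np.outer(k, Px)) / lam
    return theta
```

With lam = 1 this converges to the batch least-squares estimate; lam < 1 discounts old data for tracking slowly varying parameters.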

  17. Improving the efficiency of dissolved oxygen control using an on-line control system based on a genetic algorithm evolving FWNN software sensor.

    PubMed

    Ruan, Jujun; Zhang, Chao; Li, Ya; Li, Peiyi; Yang, Zaizhi; Chen, Xiaohong; Huang, Mingzhi; Zhang, Tao

    2017-02-01

    This work proposes an on-line hybrid intelligent control system based on a genetic algorithm (GA) evolving fuzzy wavelet neural network software sensor to control dissolved oxygen (DO) in an anaerobic/anoxic/oxic process for treating papermaking wastewater. With the self-learning and memory abilities of neural networks, the uncertainty-handling capacity of fuzzy logic, the local detail analysis of the wavelet transform and the global search of the GA, this proposed control system can extract the dynamic behavior and complex interrelationships between various operation variables. The results indicate that reasonable forecasting and control performances were achieved with optimal DO, and the effluent quality was stable at or below the desired values in real time. Our proposed hybrid approach proved to be a robust and effective DO control tool, not only attaining adequate effluent quality but also minimizing energy demand, and is easily integrated into a global monitoring system for purposes of cost management. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Exact simulation of max-stable processes.

    PubMed

    Dombry, Clément; Engelke, Sebastian; Oesting, Marco

    2016-06-01

    Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.
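
The finite-approximation simulators that the exact algorithm improves on can be sketched for a one-dimensional Brown-Resnick process. The truncation at a fixed number of Poisson points is exactly the inexactness the abstract mentions; the variogram and point count below are illustrative assumptions.

```python
import numpy as np

def brown_resnick_finite(x, n_points=100, rng=None):
    """Finite (inexact) approximation of a 1-D Brown-Resnick max-stable
    process with unit Frechet margins:
    Z(x) = max_i W_i(x) / Gamma_i, where Gamma_i are Poisson arrival
    times and W_i(x) = exp(B_i(x) - x/2) with B_i Brownian motion,
    so that E[W_i(x)] = 1 at every location. Truncating at n_points
    spectral functions is what makes the simulation inexact."""
    rng = np.random.default_rng() if rng is None else rng
    gammas = np.cumsum(rng.exponential(size=n_points))
    dx = np.diff(x, prepend=0.0)     # assumes x is sorted, starting at 0
    B = np.cumsum(rng.standard_normal((n_points, x.size)) * np.sqrt(dx),
                  axis=1)
    W = np.exp(B - x / 2.0)
    return (W / gammas[:, None]).max(axis=0)
```

At x = 0 the construction is exact (Z(0) = 1/Gamma_1 is exactly unit Frechet), which gives a simple marginal check; away from 0 the truncation bias the paper removes comes into play.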

  19. Ship Detection in SAR Image Based on the Alpha-stable Distribution

    PubMed Central

    Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng

    2008-01-01

    This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm in spaceborne synthetic aperture radar (SAR) image based on Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe statistical characteristics of a SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to the pixel identified as possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data during the time of RADARSAT-1 SAR image acquisition is used to validate ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
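
A classic one-dimensional cell-averaging CFAR conveys the structure of the detection scheme; the mean-based threshold below is a simplified Gaussian-style stand-in for the paper's Alpha-stable background model, and all window sizes are illustrative.

```python
import numpy as np

def ca_cfar(signal, guard=2, train=8, scale=4.0):
    """One-dimensional cell-averaging CFAR: a cell is declared a
    detection if it exceeds `scale` times the mean of the surrounding
    training cells, skipping `guard` cells on each side of the cell
    under test so the target does not contaminate its own clutter
    estimate."""
    n = len(signal)
    hits = np.zeros(n, dtype=bool)
    half = guard + train
    for i in range(half, n - half):
        window = np.r_[signal[i - half:i - guard],
                       signal[i + guard + 1:i + half + 1]]
        hits[i] = signal[i] > scale * window.mean()
    return hits
```

In the paper's two-parameter variant, the threshold would instead be derived from an Alpha-stable fit to the local sea clutter.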

  20. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    Conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient code. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it is an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
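
The distance-to-a-line alternative can be sketched as a fully data-parallel test evaluated independently at every pixel (vectorized here with numpy in place of one pixel per SIMD processor; the thickness threshold is an illustrative choice):

```python
import numpy as np

def draw_line_distance(shape, p0, p1, thickness=0.5):
    """Line drawing by the distance invariant: every pixel independently
    tests its own perpendicular distance to the ideal line and lights up
    if sufficiently close. Points are (x, y) = (col, row)."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    # Perpendicular distance from each pixel center to the infinite line.
    dist = np.abs(dy * (cols - x0) - dx * (rows - y0)) / np.hypot(dx, dy)
    # Clip to the segment's bounding box so only the segment is drawn.
    in_box = ((cols >= min(x0, x1)) & (cols <= max(x0, x1)) &
              (rows >= min(y0, y1)) & (rows <= max(y0, y1)))
    return (dist <= thickness) & in_box
```

Every pixel does the same fixed amount of work, which is why the cost is independent of line length and orientation.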

  1. On-line adaptive battery impedance parameter and state estimation considering physical principles in reduced order equivalent circuit battery models part 2. Parameter and state estimation

    NASA Astrophysics Data System (ADS)

    Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe

    2014-09-01

    Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored. These include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). This second paper concludes the series by presenting a multi-stage online parameter identification technique based on a weighted recursive least quadratic squares parameter estimator to determine the parameters of the battery model proposed in the first paper during operation. A novel mutation-based algorithm is developed to determine the nonlinear current dependency of the charge-transfer resistance. The influence of diffusion is determined by an on-line identification technique and verified on several batteries at different operation conditions. This method guarantees a short response time and, together with its fully recursive structure, assures long-term stable monitoring of the battery parameters. The relative dynamic voltage prediction error of the algorithm is reduced to 2%. The changes of parameters are used to determine the states of the battery. The algorithm is real-time capable and can be implemented on embedded systems.
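
The reduced-order equivalent-circuit model family this series addresses can be sketched as a first-order Thevenin (RC) model. The parameter values in the test are illustrative, not from the paper:

```python
import numpy as np

def thevenin_voltage(current, dt, ocv, r0, r1, c1):
    """Terminal voltage of a first-order RC equivalent-circuit battery
    model: v = OCV - R0*i - v1, where the RC branch obeys
    v1' = -v1/(R1*C1) + i/C1, discretized here with a zero-order hold."""
    current = np.asarray(current, dtype=float)
    a = np.exp(-dt / (r1 * c1))      # per-step decay of the RC branch
    v1 = 0.0
    out = np.empty_like(current)
    for k, i in enumerate(current):
        v1 = a * v1 + r1 * (1.0 - a) * i
        out[k] = ocv - r0 * i - v1
    return out
```

Identifying R0, R1 and C1 from measured current/voltage pairs of such a model is exactly the job of the recursive estimator described above.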

  2. Comparison of the Performance of the Warfarin Pharmacogenetics Algorithms in Patients with Surgery of Heart Valve Replacement and Heart Valvuloplasty.

    PubMed

    Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong

    2015-09-01

    A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement or heart valvuloplasty during the initial and stable phases of anticoagulation treatment. Ten pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. The predicted dose was compared to the therapeutic dose using the percentage of predictions falling within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose was 3.05 ± 1.23 mg/day for initial treatment and 3.45 ± 1.18 mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0 ± 8.8% and 44.6 ± 9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85 ± 0.18 mg/day and 0.93 ± 0.19 mg/day, respectively. All algorithms had better performance in the ideal group than in the low-dose and high-dose groups. The only exception was the Wadelius et al. algorithm, which performed better in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
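
The two evaluation metrics used in this study, percentage within 20% of the therapeutic dose and mean absolute error, are straightforward to implement:

```python
import numpy as np

def dose_metrics(predicted, actual):
    """Warfarin-dose evaluation metrics: the percentage of predictions
    within 20% of the actual (therapeutic) dose, and the mean absolute
    error in the same units as the doses (mg/day)."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    within20 = np.abs(predicted - actual) <= 0.2 * actual
    mae = np.abs(predicted - actual).mean()
    return 100.0 * within20.mean(), mae
```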

  3. Clinically relevant transmitted drug resistance to first line antiretroviral drugs and implications for recommendations.

    PubMed

    Monge, Susana; Guillot, Vicente; Alvarez, Marta; Chueca, Natalia; Stella, Natalia; Peña, Alejandro; Delgado, Rafael; Córdoba, Juan; Aguilera, Antonio; Vidal, Carmen; García, Federico

    2014-01-01

    The aim was to analyse trends in clinically relevant resistance to first-line antiretroviral drugs in Spain, applying the Stanford algorithm, and to compare these results with reported Transmitted Drug Resistance (TDR) defined by the 2009 update of the WHO SDRM list. We analysed 2781 sequences from ARV-naive patients of the CoRIS cohort (Spain) between 2007-2011. Using the Stanford algorithm, the "Low-level resistance", "Intermediate resistance" and "High-level resistance" categories were considered "Resistant". Seventy percent of the TDR found using the WHO list were relevant for first-line treatment according to the Stanford algorithm. A total of 188 patients showed clinically relevant resistance to first-line ARVs [6.8% (95% Confidence Interval: 5.8-7.7)], and 221 harbored TDR using the WHO list [7.9% (6.9-9.0)]. Differences were due to a lower prevalence in clinically relevant resistance for NRTIs [2.3% (1.8-2.9) vs. 3.6% (2.9-4.3) by the WHO list] and PIs [0.8% (0.4-1.1) vs. 1.7% (1.2-2.2)], while it was higher for NNRTIs [4.6% (3.8-5.3) vs. 3.7% (3.0-4.7)]. While TDR remained stable throughout the study period, clinically relevant resistance to first-line drugs showed a significant declining trend (p = 0.02). The prevalence of clinically relevant resistance to first-line ARVs in Spain is decreasing, and lower than would be expected from TDR reported using the WHO list. Resistance to first-line PIs falls below 1%, so the recommendation of screening for TDR in the protease gene should be questioned in our setting. Cost-effectiveness studies need to be carried out to inform evidence-based recommendations.

  4. Calibration and Data Retrieval Algorithms for the NASA Langley/Ames Diode Laser Hygrometer for the NASA Trace-P Mission

    NASA Technical Reports Server (NTRS)

    Podolske, James R.; Sachse, Glen W.; Diskin, Glenn S.; Hipskino, R. Stephen (Technical Monitor)

    2002-01-01

    This paper describes the procedures and algorithms for the laboratory calibration and the field data retrieval of the NASA Langley/Ames Diode Laser Hygrometer as implemented during the NASA Trace-P mission from February to April 2000. The calibration is based on a NIST-traceable dewpoint hygrometer using relatively high humidity and a short pathlength. Two water lines of widely different strengths are used to increase the dynamic range of the instrument over the course of a flight. The laboratory results are incorporated into a numerical model of the second harmonic spectrum for each of the two spectral window regions, using spectroscopic parameters from the HITRAN database and other sources, allowing water vapor retrieval at upper tropospheric and lower stratospheric temperatures and humidity levels. The data retrieval algorithm is simple, numerically stable, and accurate. A comparison with other water vapor instruments on board the NASA DC-8 and ER-2 aircraft is presented.

  5. A novel line segment detection algorithm based on graph search

    NASA Astrophysics Data System (ADS)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To address the problem of extracting line segments from an image, a line segment detection method based on graph search is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. The adjacency relationships among the candidate segments are depicted by a graph model, on which a depth-first search determines which adjacent line segments should be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
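
    The merge-and-fit step can be sketched as follows. This is a minimal illustration with hypothetical toy segments and a simple endpoint-distance adjacency test, not the authors' implementation:

```python
import numpy as np

# Hypothetical candidate segments: each an (N, 2) array of edge pixels.
segments = [
    np.array([[0, 0], [1, 1], [2, 2]], float),
    np.array([[3, 3], [4, 4]], float),            # adjacent to segment 0
    np.array([[0, 10], [1, 10], [2, 10]], float)  # a separate horizontal line
]

def adjacent(a, b, tol=2.0):
    """Two segments are adjacent if their closest endpoints are within tol."""
    ends_a, ends_b = [a[0], a[-1]], [b[0], b[-1]]
    return min(np.linalg.norm(p - q) for p in ends_a for q in ends_b) <= tol

# Build the adjacency graph over candidate segments.
n = len(segments)
graph = {i: [j for j in range(n) if j != i and adjacent(segments[i], segments[j])]
         for i in range(n)}

def dfs_components(graph):
    """Depth-first search to group segments that should be merged."""
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(graph[v])
        components.append(sorted(comp))
    return components

# Fit each merged group with least squares -> (slope, intercept) pairs.
lines = []
for comp in dfs_components(graph):
    pts = np.vstack([segments[i] for i in comp])
    slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
    lines.append((slope, intercept))
```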

  6. A self-organized learning strategy for object recognition by an embedded line of attraction

    NASA Astrophysics Data System (ADS)

    Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.

    2012-04-01

    For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 
Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face expression variant database for face recognition have shown that the proposed nonlinear line attractor is able to successfully identify individuals, and it provided a better recognition rate than state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also shown excellent recognition rates on images captured in complex lighting environments. Experiments performed on the Japanese female face expression database and the Essex Grimace database using the self-organizing line attractor have also demonstrated successful expression-invariant face recognition. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.

  7. Fast and stable algorithms for computing the principal square root of a complex matrix

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Lian, Sui R.; Mcinnis, Bayliss C.

    1987-01-01

    This note presents recursive algorithms that are rapidly convergent and numerically stable for finding the principal square root of a complex matrix. The developed algorithms are also used to derive fast and stable matrix sign algorithms, which are useful in applications to control system problems.
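
    The note's specific recursions are not reproduced in the abstract; as a sketch of the general idea, the well-known Denman-Beavers iteration below computes the principal matrix square root in a numerically stable, quadratically convergent way:

```python
import numpy as np

def denman_beavers_sqrt(A, iters=50):
    """Denman-Beavers iteration: Y -> sqrt(A), Z -> inv(sqrt(A)).
    Converges when A has no eigenvalues on the closed negative real axis."""
    Y = np.array(A, dtype=complex)
    Z = np.eye(Y.shape[0], dtype=complex)
    for _ in range(iters):
        Y_next = 0.5 * (Y + np.linalg.inv(Z))
        Z_next = 0.5 * (Z + np.linalg.inv(Y))
        Y, Z = Y_next, Z_next
    return Y

A = np.array([[33., 24.], [48., 57.]])
S = denman_beavers_sqrt(A)
# S @ S recovers A; the principal square root of this matrix is [[5, 2], [4, 7]]
```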

  8. Adaptive model reduction for continuous systems via recursive rational interpolation

    NASA Technical Reports Server (NTRS)

    Lilly, John H.

    1994-01-01

    A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
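
    The recursive update behind a moving DFT can be sketched as follows. This is a generic sliding-DFT update, not necessarily the authors' exact formulation: each tracked frequency bin is updated in O(1) per sample from the incoming and outgoing samples.

```python
import numpy as np

def sliding_dft(x, N, bins):
    """Track selected DFT bins of the last N samples with an O(1)-per-bin update:
    X_k(n) = exp(2j*pi*k/N) * (X_k(n-1) + x(n) - x(n-N))."""
    twiddle = np.exp(2j * np.pi * np.asarray(bins) / N)
    X = np.zeros(len(bins), dtype=complex)  # DFT of the initial all-zero window
    buf = np.zeros(N)                       # circular buffer of the last N samples
    history = []
    for n, sample in enumerate(x):
        oldest = buf[n % N]                 # sample leaving the window
        buf[n % N] = sample
        X = twiddle * (X + sample - oldest)
        history.append(X.copy())
    return history

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
N, bins = 16, [0, 3, 5]
out = sliding_dft(x, N, bins)

# After >= N samples, each tracked bin matches a direct FFT of the current window.
ref = np.fft.fft(x[-N:])[bins]
```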

  9. Application of the fractional Levy motion to precipitation data

    NASA Astrophysics Data System (ADS)

    Kuzuha, Y.; Tachinami, S.; Gomi, C.

    2012-12-01

    We applied the fractional Lévy motion model to precipitation data, following Lavallée (2004) and Lavallée (2008). The data used were from the Global Precipitation Climatology Centre (GPCC) monthly precipitation dataset, a 360 (longitude) × 180 (latitude) × 1336 (monthly, 1901-2012) grid. First, we constructed four datasets: time series of average monthly precipitation for the top (maximum) 10, 100, 500, and 1000 precipitation observation stations. Next, following Lavallée (2004, 2008) and using Fourier transformation, convolution (filtering), and inverse Fourier transformation, we transformed the precipitation series Yt into the random variables Xt (Lavallée, 2004). Finally, we fitted the Lévy law to Xt. As a preliminary result, we present example values of the Lévy law parameters alpha, beta, gamma, and delta for the "top 100" dataset. The parameters obtained were (1.17, 0.0, 257.6, 0.28; maximum likelihood), (1.10, 0.0, 250.0, -0.99; quantile algorithm), and (1.20, 0.0, 265.1, 0.57; empirical characteristic function algorithm), using J. P. Nolan's algorithm. The values are quite sensitive to the algorithm used. At the Fall meeting, we will present considerations and results obtained using precipitation data other than those of the GPCC. J. P. Nolan, http://academic2.american.edu/~jpnolan/stable/stable.html. Lavallée (2004), Stochastic modeling of climatic variability in dendrochronology, GRL, 31, L15202. Lavallée (2008), On the random nature of earthquake sources and ground motions; a unified theory, Advances in Geophysics, 50, chapter 16. Acknowledgement: We thank Dr. D. Lavallée for his comments and suggestions. An example of the results we obtained: on a log-log plot, the PDF of the Lévy law (red line) is more appropriate than the Gaussian law (blue line) in terms of heavy tails and extreme values, consistent with Lavallée (2004) and Lavallée (2008), who used slip distribution and climate (dendrochronology) data.

  10. Competitive learning with pairwise constraints.

    PubMed

    Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep

    2013-01-01

    Constrained clustering has been an active research topic over the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results, in terms of the normalized mutual information (NMI), from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
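
    The rival-penalization idea can be sketched as a single update step (plain RPCL, with the pairwise-constraint handling of C-RPCL omitted; the learning rates are illustrative): the winning prototype moves toward the sample, while the second-best (rival) prototype is pushed away, which drives redundant prototypes out of contention over time.

```python
import numpy as np

def rpcl_step(prototypes, x, lr_win=0.05, lr_rival=0.005):
    """One rival-penalized competitive learning step (in place):
    winner moves toward x, rival is pushed away from x."""
    d = np.linalg.norm(prototypes - x, axis=1)
    winner, rival = np.argsort(d)[:2]
    prototypes[winner] += lr_win * (x - prototypes[winner])
    prototypes[rival] -= lr_rival * (x - prototypes[rival])
    return winner, rival

protos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
w, r = rpcl_step(protos, np.array([0.2, 0.0]))
# winner (index 0) moves toward the sample; rival (index 1) is pushed away
```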

  11. A unifying framework for rigid multibody dynamics and serial and parallel computational issues

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Jain, Abhinandan

    1989-01-01

    A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed. Their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz., O(n), O(n^2), and O(n^3), for the solution of the dynamics problem are investigated. The researchers begin with the derivation of the O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed, and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. The researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.

  12. A General, Adaptive, Roadmap-Based Algorithm for Protein Motion Computation.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2016-03-01

    Precious information on protein function can be extracted from a detailed characterization of protein equilibrium dynamics. This remains elusive in wet and dry laboratories, as function-modulating transitions of a protein between functionally-relevant, thermodynamically-stable and meta-stable structural states often span disparate time scales. In this paper we propose a novel, robotics-inspired algorithm that circumvents time-scale challenges by drawing analogies between protein motion and robot motion. The algorithm adapts the popular roadmap-based framework in robot motion computation to handle the more complex protein conformation space and its underlying rugged energy surface. Given known structures representing stable and meta-stable states of a protein, the algorithm yields a time- and energy-prioritized list of transition paths between the structures, with each path represented as a series of conformations. The algorithm balances computational resources between a global search aimed at obtaining a global view of the network of protein conformations and their connectivity and a detailed local search focused on realizing such connections with physically-realistic models. Promising results are presented on a variety of proteins that demonstrate the general utility of the algorithm and its capability to improve the state of the art without employing system-specific insight.
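
    The roadmap-based framework that the authors adapt can be illustrated with a generic probabilistic roadmap (PRM) sketch in a 2-D space with a disk obstacle. This is not the protein-specific algorithm, and all parameters are illustrative: sample configurations, connect nearby collision-free pairs, then search the roadmap for a path.

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)
OBSTACLE = (np.array([0.5, 0.5]), 0.2)  # disk obstacle: (center, radius)

def collision_free(p, q, steps=20):
    """Straight-line local planner: reject edges passing through the obstacle."""
    center, radius = OBSTACLE
    pts = p + np.linspace(0, 1, steps)[:, None] * (q - p)
    return np.all(np.linalg.norm(pts - center, axis=1) > radius)

# 1. Sample roadmap nodes in free space (start = node 0, goal = node 1).
nodes = [np.array([0.05, 0.05]), np.array([0.95, 0.95])]
while len(nodes) < 150:
    p = rng.random(2)
    if np.linalg.norm(p - OBSTACLE[0]) > OBSTACLE[1]:
        nodes.append(p)

# 2. Connect nearby nodes whose straight-line edge is collision-free.
edges = {i: [] for i in range(len(nodes))}
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        d = np.linalg.norm(nodes[i] - nodes[j])
        if d < 0.25 and collision_free(nodes[i], nodes[j]):
            edges[i].append((j, d))
            edges[j].append((i, d))

# 3. Dijkstra search over the roadmap from start to goal.
def dijkstra(edges, src, dst):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == dst:
            break
        if d > dist.get(v, np.inf):
            continue
        for w, c in edges[v]:
            if d + c < dist.get(w, np.inf):
                dist[w], prev[w] = d + c, v
                heapq.heappush(heap, (d + c, w))
    if dst not in dist:
        return None
    path, v = [dst], dst
    while v != src:
        v = prev[v]
        path.append(v)
    return path[::-1]

path = dijkstra(edges, 0, 1)
```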

  13. Lining seam elimination algorithm and surface crack detection in concrete tunnel lining

    NASA Astrophysics Data System (ADS)

    Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling

    2016-11-01

    Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within the overlapping grid are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected based on the PSM by the accelerated percolation algorithm, so that the fracture unit areas can be scanned and connected. Finally, the real surface cracks in concrete tunnel lining are obtained by removing the lining seam and performing percolation denoising. Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect real surface cracks. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seam.

  14. Validation of new satellite aerosol optical depth retrieval algorithm using Raman lidar observations at radiative transfer laboratory in Warsaw

    NASA Astrophysics Data System (ADS)

    Zawadzka, Olga; Stachlewska, Iwona S.; Markowicz, Krzysztof M.; Nemuc, Anca; Stebel, Kerstin

    2018-04-01

    During an exceptionally warm September of 2016, unique, stable weather conditions over Poland allowed for extensive testing of the new algorithm developed to improve the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) aerosol optical depth (AOD) retrieval. The development was conducted in the frame of the ESA-ESRIN SAMIRA project. The new AOD algorithm aims at providing aerosol optical depth maps over the territory of Poland with a high temporal resolution of 15 minutes. It was tested on the data set obtained between 11-16 September 2016, during which a day of relatively clean atmospheric background, related to an Arctic airmass inflow, was surrounded by a few days with clearly increased aerosol load of different origins. On the clean reference day, the AOD forecast available on-line via the Copernicus Atmosphere Monitoring Service (CAMS) was used for estimating surface reflectance. The obtained AOD maps were validated against AODs available within the Poland-AOD and AERONET networks, and against AOD values obtained from the PollyXT-UW lidar of the University of Warsaw (UW).

  15. A shifted Jacobi collocation algorithm for wave type equations with non-local conservation conditions

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.

    2014-09-01

    In this paper, we propose an efficient spectral collocation algorithm to numerically solve wave-type equations subject to initial, boundary, and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations; it possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and then the problem with its initial and non-local boundary conditions is reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage fourth-order A-stable implicit Runge-Kutta scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines, and the spline collocation approach.

  16. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is demonstrated through application to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
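
    The FCM reclassification step can be sketched as the standard fuzzy c-means updates (with fuzzifier m): memberships are computed from distances to the current centroids, and centroids are relocated as membership-weighted means. This is a generic FCM pass, not the full AFLC control structure:

```python
import numpy as np

def fcm_update(X, centroids, m=2.0, eps=1e-12):
    """One fuzzy c-means pass: memberships from distances, then new centroids.
    u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1));  c_k = sum_i u_ik^m x_i / sum_i u_ik^m
    """
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + eps
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    um = u ** m
    new_centroids = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, new_centroids

# Toy data: two tight clusters; centroids converge to the cluster means.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centroids = np.array([[0.5, 0.5], [4.5, 4.5]])
for _ in range(20):
    u, centroids = fcm_update(X, centroids)
```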

  17. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  18. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

    PubMed

    Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis.

  19. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis

    PubMed Central

    Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting “building blocks” into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis. PMID:27239191

  20. Algorithms for adaptive stochastic control for a class of linear systems

    NASA Technical Reports Server (NTRS)

    Toda, M.; Patel, R. V.

    1977-01-01

    Control of linear, discrete time, stochastic systems with unknown control gain parameters is discussed. Two suboptimal adaptive control schemes are derived: one is based on underestimating future control and the other is based on overestimating future control. Both schemes require little on-line computation and incorporate in their control laws some information on estimation errors. The performance of these laws is studied by Monte Carlo simulations on a computer. Two single input, third order systems are considered, one stable and the other unstable, and the performance of the two adaptive control schemes is compared with that of the scheme based on enforced certainty equivalence and the scheme where the control gain parameters are known.

  1. Clinically Relevant Transmitted Drug Resistance to First Line Antiretroviral Drugs and Implications for Recommendations

    PubMed Central

    Monge, Susana; Guillot, Vicente; Alvarez, Marta; Chueca, Natalia; Stella, Natalia; Peña, Alejandro; Delgado, Rafael; Córdoba, Juan; Aguilera, Antonio; Vidal, Carmen; García, Federico; CoRIS

    2014-01-01

    Background The aim was to analyse trends in clinically relevant resistance to first-line antiretroviral drugs in Spain, applying the Stanford algorithm, and to compare these results with reported Transmitted Drug Resistance (TDR) defined by the 2009 update of the WHO SDRM list. Methods We analysed 2781 sequences from ARV-naive patients of the CoRIS cohort (Spain) between 2007 and 2011. Using the Stanford algorithm, the "Low-level resistance", "Intermediate resistance" and "High-level resistance" categories were considered "Resistant". Results 70% of the TDR found using the WHO list were relevant for first-line treatment according to the Stanford algorithm. A total of 188 patients showed clinically relevant resistance to first-line ARVs [6.8% (95% Confidence Interval: 5.8-7.7)], and 221 harbored TDR using the WHO list [7.9% (6.9-9.0)]. Differences were due to a lower prevalence of clinically relevant resistance for NRTIs [2.3% (1.8-2.9) vs. 3.6% (2.9-4.3) by the WHO list] and PIs [0.8% (0.4-1.1) vs. 1.7% (1.2-2.2)], while it was higher for NNRTIs [4.6% (3.8-5.3) vs. 3.7% (3.0-4.7)]. While TDR remained stable throughout the study period, clinically relevant resistance to first-line drugs showed a significant declining trend (p = 0.02). Conclusions Prevalence of clinically relevant resistance to first-line ARVs in Spain is decreasing, and lower than expected from the TDR reported using the WHO list. Resistance to first-line PIs falls below 1%, so the recommendation of screening for TDR in the protease gene should be questioned in our setting. Cost-effectiveness studies need to be carried out to inform evidence-based recommendations. PMID:24637804

  2. An algorithm for engineering regime shifts in one-dimensional dynamical systems

    NASA Astrophysics Data System (ADS)

    Tan, James P. L.

    2018-01-01

    Regime shifts are discontinuous transitions between stable attractors hosting a system. They can occur as a result of a loss of stability in an attractor as a bifurcation is approached. In this work, we consider one-dimensional dynamical systems where attractors are stable equilibrium points. Relying on critical slowing down signals related to the stability of an equilibrium point, we present an algorithm for engineering regime shifts such that a system may escape an undesirable attractor into a desirable one. We test the algorithm on synthetic data from a one-dimensional dynamical system with a multitude of stable equilibrium points and also on a model of the population dynamics of spruce budworms in a forest. The algorithm and other ideas discussed here contribute to an important part of the literature on exercising greater control over the sometimes unpredictable nature of nonlinear systems.
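
    The critical-slowing-down signal that the algorithm relies on can be illustrated with a toy simulation: as a bifurcation is approached, the recovery rate of a stable equilibrium shrinks, which shows up as an increased lag-1 autocorrelation of the fluctuations (all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def simulate(recovery_rate, n=20000, dt=0.01, noise=0.1):
    """Noisy relaxation toward a stable equilibrium at x* = 0:
    dx = -recovery_rate * x dt + noise dW.  A smaller recovery rate mimics
    an attractor losing stability as a bifurcation is approached."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = (x[t-1] - recovery_rate * x[t-1] * dt
                + noise * np.sqrt(dt) * rng.standard_normal())
    return x

far = simulate(recovery_rate=5.0)    # strongly stable equilibrium
near = simulate(recovery_rate=0.5)   # weakly stable: critical slowing down
# lag-1 autocorrelation rises as stability is lost (0.95 vs. 0.995 in theory)
```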

  3. Impact of different disassembly line balancing algorithms on the performance of dynamic kanban system for disassembly line

    NASA Astrophysics Data System (ADS)

    Kizilkaya, Elif A.; Gupta, Surendra M.

    2005-11-01

    In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL) to accommodate the vagaries of uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results and the discussions of the results are included.

  4. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    PubMed Central

    Al-Mohammed, A. H.; Abido, M. A.

    2014-01-01

    This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms on three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when set on a transmission line, create certain problems for line fault locators and, therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency-related fault location techniques based on wavelet transform are discussed. Finally, the paper highlights the area for future research. PMID:24701191
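
    A classic two-end synchronized fault-location formula, of the kind surveyed here, can be sketched as follows (lumped line model with shunt capacitance neglected; all phasor values below are synthetic): equating the fault-point voltage computed from each end gives a closed form for the per-unit fault distance m.

```python
import numpy as np

def fault_distance(Vs, Is, Vr, Ir, Z):
    """Per-unit fault distance m from the sending end, from synchronized
    phasors at both ends of a line with total series impedance Z:
        Vs - m*Z*Is = Vr - (1 - m)*Z*Ir
        =>  m = (Vs - Vr + Z*Ir) / (Z*(Is + Ir))
    """
    return ((Vs - Vr + Z * Ir) / (Z * (Is + Ir))).real

# Synthetic check: place a fault at m = 0.3 and recover it.
Z = 0.05 + 0.5j                  # total line impedance (pu)
m_true = 0.3
Is = 2.0 * np.exp(-1j * 0.4)     # sending-end current phasor (pu)
Ir = 1.5 * np.exp(-1j * 0.9)     # receiving-end current phasor (pu)
Vf = 0.6 * np.exp(1j * 0.1)      # fault-point voltage (pu)
Vs = Vf + m_true * Z * Is
Vr = Vf + (1 - m_true) * Z * Ir
m = fault_distance(Vs, Is, Vr, Ir, Z)
```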

  5. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither parameter selection and threshold estimation nor cost function selection and complex gradient computation. Computer simulations have been carried out, suggesting that the proposed algorithm is effective for attenuating SαS impulsive noise, and the proposed algorithm has then been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.
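
    Symmetric α-stable noise of the kind considered here can be generated with the standard Chambers-Mallows-Stuck method. This sketch only illustrates the noise model, not the proposed FxLMS variant:

```python
import numpy as np

def sas_samples(alpha, size, rng):
    """Chambers-Mallows-Stuck generator for standard symmetric alpha-stable
    variates (0 < alpha <= 2; alpha = 2 gives a Gaussian with variance 2)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(U)  # Cauchy case
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
impulsive = sas_samples(1.5, 100000, rng)  # heavy-tailed "impulsive" noise
gaussian = sas_samples(2.0, 100000, rng)   # Gaussian limit for comparison
```

The heavy tails of the α = 1.5 series (occasional very large spikes) are exactly what makes plain fixed-step LMS diverge and motivates step-size normalization.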

  6. An Algorithm to Compress Line-transition Data for Radiative-transfer Calculations

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio E.

    2017-11-01

    Molecular line-transition lists are an essential ingredient for radiative-transfer calculations. With recent databases now surpassing the billion-line mark, handling them has become computationally prohibitive, due to both the required processing power and memory. Here I present a temperature-dependent algorithm to separate strong from weak line transitions, reformatting the large majority of the weaker lines into a cross-section data file, and retaining the detailed line-by-line information of the fewer strong lines. For any given molecule over the 0.3-30 μm range, this algorithm reduces the number of lines to a few million, enabling faster radiative-transfer computations without a significant loss of information. The final compression rate depends on how densely populated the spectrum is. I validate this algorithm by comparing Exomol’s HCN extinction-coefficient spectra between the complete (65 million line transitions) and compressed (7.7 million) line lists. Over the 0.6-33 μm range, the average difference between extinction-coefficient values is less than 1%. A Python/C implementation of this algorithm is open-source and available at https://github.com/pcubillos/repack. So far, this code handles the Exomol and HITRAN line-transition format.
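
    The strong/weak split described above can be sketched roughly as follows; the threshold, wavenumber range, and synthetic strengths are illustrative stand-ins, not repack's actual temperature-dependent criteria:

```python
import numpy as np

# Keep lines above a strength threshold individually; accumulate the rest
# into a coarse cross-section grid (a crude "continuum" term).
def split_line_list(wavenumbers, strengths, threshold, grid):
    strong = strengths >= threshold
    continuum, _ = np.histogram(wavenumbers[~strong], bins=grid,
                                weights=strengths[~strong])
    return wavenumbers[strong], strengths[strong], continuum

rng = np.random.default_rng(1)
wn = rng.uniform(1000, 2000, 100_000)         # wavenumbers in cm^-1
s = 10.0 ** rng.uniform(-30, -20, 100_000)    # synthetic line strengths
grid = np.linspace(1000, 2000, 51)            # 50 coarse bins
wn_s, s_s, cont = split_line_list(wn, s, threshold=1e-22, grid=grid)
print(len(wn_s), cont.shape)
```

    Total strength is conserved: the sum over retained lines plus the binned continuum equals the sum over the original list, which is the sense in which the compression loses only spectral detail, not integrated opacity.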

  7. Welding deviation detection algorithm based on extremum of molten pool image contour

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang

    2016-01-01

    Welding deviation detection is the basis of robotic seam-tracking welding, but on-line real-time measurement of welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain abundant information that is very important for the control of welding seam tracking. By studying molten pool images, the physical meaning of the curvature extrema of the molten pool contour is revealed: the points carrying deviation information, the welding wire center and the molten tip center, are the maximum and the local maximum of the contour curvature, and the horizontal welding deviation is the position difference between these two extremum points. A new method of weld deviation detection is presented, comprising preprocessing of molten pool images, extraction and segmentation of the contours, location of the contour extremum points, and calculation of the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and obtaining the contour extremum points is the key. The contour images are extracted with a discrete dyadic wavelet transform and divided into two sub-contours covering the welding wire and the molten tip separately. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for plane curves, and the two curvature extremum points are the features needed for the welding deviation calculation. Tests and analyses show that the maximum error of the obtained on-line welding deviation is 2 pixels (0.16 mm), and the algorithm is stable enough to meet the real-time control requirements of the pipeline at welding speeds below 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
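
    A minimal sketch of the core idea, locating curvature extrema on a planar contour, is given below; the discrete curvature formula here is the generic parametric one, not the paper's multi-point approximation, and the ellipse stands in for a molten pool contour:

```python
import numpy as np

# Discrete curvature of a parametric plane curve:
# k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), via finite differences.
def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = 2 * np.cos(t), np.sin(t)          # ellipse: curvature peaks at (+-2, 0)
k = curvature(x, y)
i_max = int(np.argmax(np.abs(k)))        # global curvature extremum
print(x[i_max], y[i_max])                # a point near (+-2, 0)
```

    On a real contour one would smooth before differencing and search for the global and local extrema separately, mirroring the wire-center/tip-center distinction above.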

  8. On Stable Marriages and Greedy Matchings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manne, Fredrik; Naim, Md; Lerring, Hakon

    2016-12-11

    Research on stable marriage problems has a long and mathematically rigorous history, while that of exploiting greedy matchings in combinatorial scientific computing is a younger and less developed research field. In this paper we consider the relationships between these two areas. In particular we show that several problems related to computing greedy matchings can be formulated as stable marriage problems, and as a consequence several recently proposed algorithms for computing greedy matchings are in fact special cases of well-known algorithms for the stable marriage problem. However, in terms of implementations and practical scalable solutions on modern hardware, the greedy matching community has made considerable progress. We show that due to the strong relationship between these two fields many of these results are also applicable for solving stable marriage problems.
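
    For reference, the classic Gale-Shapley deferred-acceptance algorithm for the stable marriage problem mentioned above can be sketched as follows (a textbook implementation, not code from the paper):

```python
# Gale-Shapley deferred acceptance: men propose in preference order,
# women tentatively accept and trade up; the result is a stable matching.
def stable_marriage(men_prefs, women_prefs):
    """men_prefs/women_prefs: dict name -> preference list, best first."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                    # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}   # next woman each man proposes to
    engaged = {}                              # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # displaced partner is free again
            engaged[w] = m
        else:
            free.append(m)                    # rejected; proposes again later
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(stable_marriage(men, women))            # each man gets his first choice
```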

  9. Implementation of Nonlinear Control Laws for an Optical Delay Line

    NASA Technical Reports Server (NTRS)

    Hench, John J.; Lurie, Boris; Grogan, Robert; Johnson, Richard

    2000-01-01

    This paper discusses the implementation of a globally stable nonlinear controller algorithm for the Real-Time Interferometer Control System Testbed (RICST) brassboard optical delay line (ODL) developed for the Interferometry Technology Program at the Jet Propulsion Laboratory. The control methodology essentially employs loop shaping to implement linear control laws, while utilizing nonlinear elements as a means of ameliorating the effects of actuator saturation in its coarse, main, and vernier stages. The linear controllers were implemented as high-order digital filters and were designed using Bode integral techniques to determine the loop shape. The nonlinear techniques encompass the areas of exact linearization, anti-windup control, nonlinear rate limiting and modal control. Details of the design procedure are given as well as data from the actual mechanism.

  10. An algorithm for power line detection and warning based on a millimeter-wave radar video.

    PubMed

    Ma, Qirong; Goshi, Darren S; Shih, Yi-Chi; Sun, Ming-Ting

    2011-12-01

    Power-line-strike accidents are a major safety threat for low-flying aircraft such as helicopters; thus, an automatic power-line warning system is highly desirable. In this paper we propose an algorithm for detecting power lines in radar videos from an active millimeter-wave sensor. The Hough transform is employed to detect candidate lines. The major challenge is that the radar videos are very noisy due to ground return. Noise points can fall along a common line, producing peaks after the Hough transform that resemble those of actual cable lines. To differentiate the cable lines from the noise lines, we train a support vector machine to perform the classification. We exploit the Bragg pattern, which is due to the diffraction of electromagnetic waves on the periodic surface of power lines, and propose a set of features to represent the Bragg pattern for the classifier. We also propose a slice-processing algorithm that supports parallel processing and improves the detection of cables in a cluttered background. Lastly, an adaptive algorithm is proposed to integrate the detection results from individual frames into a reliable video detection decision, in which the temporal correlation of the cable pattern across frames is used to make the detection more robust. Extensive experiments with real-world data validated the effectiveness of our cable detection algorithm. © 2011 IEEE
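
    The Hough-transform candidate-detection step can be illustrated with a toy accumulator over synthetic points; the SVM/Bragg-feature classification stage is not shown, and the bin counts below are arbitrary choices for the sketch:

```python
import numpy as np

# Toy Hough transform: each point votes for all (rho, theta) lines through it,
# using the normal form rho = x*cos(theta) + y*sin(theta); collinear points
# pile their votes into one accumulator cell.
def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # rho for every theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[idx[ok], np.arange(n_theta)[ok]] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)  # strongest line
    return acc, thetas[t], r / (n_rho - 1) * 2 * rho_max - rho_max

pts = [(x, 0.5 * x + 10) for x in range(50)]   # points on y = 0.5x + 10
acc, theta, rho = hough_lines(pts)
print(theta, rho)          # normal angle ~2.03 rad, distance ~8.9 from origin
```

    The paper's difficulty is visible even in this toy: random clutter also accumulates votes, so peak height alone cannot separate cables from noise lines, motivating the downstream classifier.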

  11. Generation of stable cell line by using chitosan as gene delivery system.

    PubMed

    Şalva, Emine; Turan, Suna Özbaş; Ekentok, Ceyda; Akbuğa, Jülide

    2016-08-01

    Stable cell lines are useful tools for studying the function of various genes and for silencing or inducing the expression of a gene of interest. Nonviral gene transfer is generally preferred for generating stable cell lines in the manufacturing of recombinant proteins. In this study, we aimed to establish stable recombinant HEK-293 cell lines by transfection with chitosan complexes prepared with pDNA containing the LacZ and GFP genes. Chitosan, a cationic polymer, was used as the gene delivery system. Stable HEK-293 cell lines were established by transfecting cells with complexes prepared from chitosan and the pVitro-2 plasmid vector, which carries a neomycin resistance gene and the beta-gal and GFP genes. Transfection efficiency was shown by GFP expression in the cells using fluorescence microscopy. Beta-gal protein expression in stable cells was examined enzymatically by beta-galactosidase assay and histochemically by X-gal staining. Full complexation was observed at chitosan/pDNA ratios above 1/1. The highest beta-galactosidase activity was obtained with transfection of chitosan complexes. Beta-gal gene expression was 15.17 ng/ml in the stable cells generated by chitosan complexes. In addition, intense blue color, reflecting beta-gal protein expression, was observed in the stable cell line with X-gal staining. We established a stable HEK-293 cell line that can be used for recombinant protein production or gene expression studies by transfecting the gene of interest.

  12. An improved NSGA - II algorithm for mixed model assembly line balancing

    NASA Astrophysics Data System (ADS)

    Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong

    2018-05-01

    Aiming at the problems of assembly line balancing and path optimization of material vehicles in a mixed-model manufacturing system, a multi-objective mixed-model assembly line (MMAL) balancing model, based on optimization objectives, influencing factors and constraints, is established. According to the specific situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. An environment self-detecting operator, which is used to detect whether the environment changes, is adopted in the algorithm. Finally, the effectiveness of the proposed model and algorithm is verified by examples in a concrete mixing system.
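
    The fast non-dominated sorting at the heart of NSGA-II can be sketched as follows; this is the generic Deb et al. procedure, not the paper's improved ecological-evolution variant:

```python
# Fast non-dominated sorting: partition solutions into Pareto fronts.
# objs is a list of objective tuples to be minimized.
def non_dominated_sort(objs):
    n = len(objs)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    S = [[] for _ in range(n)]   # solutions dominated by i
    cnt = [0] * n                # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                cnt[i] += 1
    fronts = [[i for i in range(n) if cnt[i] == 0]]   # front 0: non-dominated
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                cnt[j] -= 1
                if cnt[j] == 0:   # all dominators already placed in a front
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

objs = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(non_dominated_sort(objs))  # [[0, 1, 2], [3], [4]]
```

    NSGA-II then applies crowding-distance sorting within each front to preserve diversity, which is where variants like the one above typically differ.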

  13. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms use text databases as reference templates; because of the resulting mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, being characterized by five measures that describe the measurement procedure. PMID:22164106

  14. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms use text databases as reference templates; because of the resulting mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multiline text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as complements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, being characterized by five measures that describe the measurement procedure.

  15. Trapping photons on the line: controllable dynamics of a quantum walk

    NASA Astrophysics Data System (ADS)

    Xue, Peng; Qin, Hao; Tang, Bao

    2014-04-01

    Optical interferometers comprising birefringent-crystal beam displacers, wave plates, and phase shifters serve as stable devices for simulating quantum information processes such as heralded coined quantum walks. Quantum walks are important for quantum algorithms, universal quantum computing circuits, quantum transport in complex systems, and demonstrating intriguing nonlinear dynamical quantum phenomena. We introduce fully controllable polarization-independent phase shifters in the optical paths in order to realize site-dependent phase defects. The effectiveness of our interferometer is demonstrated by realizing single-photon quantum-walk dynamics in one dimension. By applying site-dependent phase defects, the translational symmetry of an ideal standard quantum walk is broken, resulting in a localization effect in the quantum walk architecture. The walk is realized for different site-dependent phase defects and coin settings, indicating that the strength of the localization signature depends on the phase level of the site-dependent defects and on the coin settings, and opening the way for the implementation of a quantum-walk-based algorithm.

  16. A lateral guidance algorithm to reduce the post-aerobraking burn requirements for a lift-modulated orbital transfer vehicle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Herman, G. C.

    1986-01-01

    A lateral guidance algorithm which controls the location of the line of intersection between the actual and desired orbital planes (the hinge line) is developed for the aerobraking phase of a lift-modulated orbital transfer vehicle. The on-board targeting algorithm associated with this lateral guidance algorithm is simple and concise, which is very desirable since computation time and space are limited on an on-board flight computer. A variational equation describing the movement of the hinge line is derived. Simple relationships are found between the plane error, the desired hinge line position, the position out-of-plane error, and the velocity out-of-plane error. A computer simulation is developed to test the lateral guidance algorithm for a variety of operating conditions. The algorithm reduces the total burn magnitude needed to achieve the desired orbit by allowing the plane correction and the perigee-raising burn to be combined into a single maneuver. The algorithm performs well under vacuum perigee dispersions, pot-hole density disturbances, and thick atmospheres. Results for many different operating conditions are presented.

  17. Theory and algorithms for image reconstruction on chords and within regions of interest

    NASA Astrophysics Data System (ADS)

    Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.

    2005-11-01

    We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in the literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments; in this situation, the proposed algorithms reduce to the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, including image reconstruction on chords of a two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.

  18. PCSYS: The optimal design integration system picture drawing system with hidden line algorithm capability for aerospace vehicle configurations

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Vanderburg, J. D.

    1977-01-01

    A vehicle geometric definition based upon quadrilateral surface elements is used to produce realistic pictures of an aerospace vehicle. The PCSYS programs can be used to visually check geometric data input, monitor geometric perturbations, and visualize the complex spatial inter-relationships between internal and external vehicle components. PCSYS has two major component programs. The first program, IMAGE, draws a complex aerospace vehicle pictorial representation using either an approximate but rapid hidden-line algorithm or no hidden-line algorithm at all. The second program, HIDDEN, draws a vehicle representation using an accurate but time-consuming hidden-line algorithm.

  19. The development of line-scan image recognition algorithms for the detection of frass on mature tomatoes

    USDA-ARS?s Scientific Manuscript database

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...

  20. A Fault Location Algorithm for Two-End Series-Compensated Double-Circuit Transmission Lines Using the Distributed Parameter Line Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Ning; Gombos, Gergely; Mousavi, Mirrasoul J.

    A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistor (MOV). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulation results show that the proposed algorithm is qualified for practical implementation.

  1. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using a line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, using the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the less diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the more highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with a line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
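
    The two-waveband ratio test described above can be sketched as follows; the band indices, threshold, and synthetic image are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

# Band-ratio contamination mask: flag pixels whose ratio of intensities at
# two wavebands (e.g. 666 nm over 688 nm) exceeds a threshold.
def band_ratio_mask(cube, band_a, band_b, threshold):
    """cube: (rows, cols, bands) fluorescence image stack."""
    ratio = cube[:, :, band_a] / np.maximum(cube[:, :, band_b], 1e-9)
    return ratio > threshold

cube = np.ones((4, 4, 2))
cube[1, 2, 0] = 3.0          # synthetic "contaminated" pixel: elevated ratio
mask = band_ratio_mask(cube, band_a=0, band_b=1, threshold=1.5)
print(mask.sum(), mask[1, 2])  # 1 True
```

    Using a ratio rather than a single band makes the test insensitive to overall illumination differences across the leaf surface, which is why two-waveband schemes suit moving processing lines.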

  2. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    PubMed

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.

  3. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.

  4. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
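
    A toy member of the GPI/value-iteration family discussed above, with a bounded per-iteration "approximation error" injected, might look like this; the two-state MDP and the eps magnitude are illustrative, not from the paper:

```python
import numpy as np

# Value iteration with an injected bounded error, emulating the effect of an
# imperfect function approximator at each update.
def value_iteration(P, R, gamma=0.9, eps=0.0, iters=200, seed=0):
    """P: (A, S, S) transition tensor, R: (S, A) rewards; eps bounds the
    per-iteration approximation error added to the value update."""
    rng = np.random.default_rng(seed)
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = R.T + gamma * P @ V                  # Q[a, s] = R[s,a] + g*E[V]
        V = Q.max(axis=0) + rng.uniform(-eps, eps, V.size)
    return V

P = np.array([[[1., 0.], [0., 1.]],              # action 0: stay put
              [[0., 1.], [1., 0.]]])             # action 1: swap states
R = np.array([[0., 0.],                          # state 0 pays nothing
              [1., 1.]])                         # state 1 pays 1 per step
print(value_iteration(P, R))                     # analytic optimum: [9, 10]
print(value_iteration(P, R, eps=0.05))
```

    The standard contraction argument gives the kind of result the abstract describes: with error bound eps, the iterates stay within roughly eps/(1 - gamma) of the exact optimal values rather than converging to them.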

  5. Extended volume coverage in helical cone-beam CT by using PI-line based BPF algorithm

    NASA Astrophysics Data System (ADS)

    Cho, Seungryong; Pan, Xiaochuan

    2007-03-01

    We compared the data requirements of filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms based on PI-lines in helical cone-beam CT. Since the filtration step of the FBP algorithm needs all the projection data on the PI-lines for each view, the detector must be larger than the size that covers the Tam-Danielsson (T-D) window in order to avoid data truncation. The BPF algorithm, however, requires projection data only within the T-D window, which means a smaller detector can be used to reconstruct the same image than in FBP. In other words, for a fixed detector size, a longer helical pitch can be obtained with the BPF algorithm without truncation artifacts. The purpose of this work is to demonstrate numerically that extended volume coverage in helical cone-beam CT can be achieved by using the PI-line-based BPF algorithm.

  6. Automated structure determination of proteins with the SAIL-FLYA NMR method.

    PubMed

    Takeda, Mitsuhiro; Ikeya, Teppei; Güntert, Peter; Kainosho, Masatsune

    2007-01-01

    The labeling of proteins with stable isotopes enhances the NMR method for the determination of 3D protein structures in solution. Stereo-array isotope labeling (SAIL) provides an optimal stereospecific and regiospecific pattern of stable isotopes that yields sharpened lines, spectral simplification without loss of information, and the ability to collect rapidly and evaluate fully automatically the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as those that can be analyzed using conventional methods. Here, we describe a protocol for the preparation of SAIL proteins by cell-free methods, including the preparation of S30 extract and their automated structure analysis using the FLYA algorithm and the program CYANA. Once efficient cell-free expression of the unlabeled or uniformly labeled target protein has been achieved, the NMR sample preparation of a SAIL protein can be accomplished in 3 d. A fully automated FLYA structure calculation can be completed in 1 d on a powerful computer system.

  7. Topological quantum computation of the Dold-Thom functor

    NASA Astrophysics Data System (ADS)

    Ospina, Juan

    2014-05-01

    A possible topological quantum computation of the Dold-Thom functor is presented. The method is the following: (a) certain 1+1-topological quantum field theories valued in symmetric bimonoidal categories are converted into stable homotopical data, using machinery recently introduced by Elmendorf and Mandell; (b) in this framework, we exploit two recent results (independent of each other) on refinements of Khovanov homology: our refinement into a module over the connective k-theory spectrum, and a stronger result by Lipshitz and Sarkar refining Khovanov homology into a stable homotopy type; (c) starting from the Khovanov homotopy, the Dold-Thom functor is constructed; (d) the full construction is formulated as a topological quantum algorithm. It is conjectured that the Jones polynomial can be described as the analytical index of a certain Dirac operator defined in the context of the Khovanov homotopy using the Dold-Thom functor. As a line for future research, it is interesting to study the corresponding supersymmetric model, for which the Khovanov-Dirac operator plays the role of a supercharge.

  8. Efficient and Stable Routing Algorithm Based on User Mobility and Node Density in Urban Vehicular Network.

    PubMed

    Al-Mayouf, Yusor Rafid Bahar; Ismail, Mahamod; Abdullah, Nor Fadzilah; Wahab, Ainuddin Wahid Abdul; Mahdi, Omar Adil; Khan, Suleman; Choo, Kim-Kwang Raymond

    2016-01-01

    Vehicular ad hoc networks (VANETs) are considered an emerging technology in the industrial and educational fields. This technology is essential in the deployment of the intelligent transportation system, which is targeted to improve the safety and efficiency of traffic. The implementation of VANETs can be effectively executed by transmitting data among vehicles with the use of multiple hops. However, the intrinsic characteristics of VANETs, such as their dynamic network topology and intermittent connectivity, limit data delivery. One particular challenge of this network is the possibility that a contributing node may remain in the network for only a limited time. Hence, to prevent data loss from that node, the information must reach the destination node via multi-hop routing techniques. An appropriate, efficient, and stable routing algorithm must be developed for various VANET applications to address the issues of dynamic topology and intermittent connectivity. Therefore, this paper proposes a novel routing algorithm called efficient and stable routing algorithm based on user mobility and node density (ESRA-MD). The proposed algorithm can adapt to significant changes that may occur in the urban vehicular environment. It works by selecting an optimal route on the basis of hop count and link duration for delivering data from source to destination, thereby satisfying various quality-of-service considerations. The validity of the proposed algorithm is investigated by comparison with the ARP-QD protocol, which is based on optimal route finding in urban VANETs. Simulation results reveal that the proposed ESRA-MD algorithm shows remarkable improvement in terms of delivery ratio, delivery delay, and communication overhead.

  9. Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem.

    PubMed

    Burnecki, Krzysztof; Wylomanska, Agnieszka; Chechkin, Aleksei

    2015-01-01

    In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or of a non-Gaussian stable distribution by examining its rate of convergence. The method makes it possible to discriminate between stable and various non-stable distributions. The test can differentiate between distributions that appear identical under the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws, as well as between stable and tempered stable distributions, cases which are considered in the literature as very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition.
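
    In a similar spirit, though not the authors' exact convergence-rate procedure, the classic max-to-sum ratio diagnostic separates light from heavy tails and is easy to sketch:

```python
import numpy as np

# Max-to-sum diagnostic: for a law with finite p-th moment,
# max(|X_i|^p) / sum(|X_i|^p) -> 0 as n grows; for an infinite-variance
# stable law with p = 2, a single sample keeps dominating the sum.
def max_sum_ratio(x, p=2):
    a = np.abs(x) ** p
    return a.max() / a.sum()

rng = np.random.default_rng(2)
light = rng.standard_normal(100_000)    # Gaussian: light-tailed
heavy = rng.standard_cauchy(100_000)    # Cauchy: alpha = 1 stable
print(max_sum_ratio(light), max_sum_ratio(heavy))
# the light-tailed ratio is tiny; the heavy-tailed one stays large
```

    Plotting this ratio against growing n gives the kind of visual discrimination the article advocates: it drifts to zero in the Gaussian domain of attraction and stabilizes away from zero otherwise.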

  10. Discriminating between Light- and Heavy-Tailed Distributions with Limit Theorem

    PubMed Central

    Chechkin, Aleksei

    2015-01-01

    In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or of a non-Gaussian stable distribution by examining its rate of convergence. The method makes it possible to discriminate between stable and various non-stable distributions. The test can differentiate between distributions that appear identical under the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws, as well as between stable and tempered stable distributions, cases which are considered in the literature as very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition. PMID:26698863

  11. Universal single level implicit algorithm for gasdynamics

    NASA Technical Reports Server (NTRS)

Lombard, C. K.; Venkatapathy, E.

    1984-01-01

A single-level, effectively explicit, implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps, which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data, with local iteration on the solution procedure at each spatial step as the sweeps progress, not only renders the method single level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector-split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.
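The upwind ingredient can be illustrated in drastically simplified scalar form: choose the difference direction from the sign of the local wave speed (none of the paper's CSCM/DDADI machinery is reproduced here).

```python
def upwind_step(u, a, dt, dx):
    """One explicit step of linear advection u_t + a*u_x = 0, differencing
    in the upwind direction selected by the sign of the wave speed `a`."""
    n, c = len(u), a * dt / dx
    new = u[:]
    for i in range(n):
        if a > 0:   # information travels rightward: difference to the left
            new[i] = u[i] - c * ((u[i] - u[i - 1]) if i > 0 else 0.0)
        else:       # information travels leftward: difference to the right
            new[i] = u[i] - c * ((u[i + 1] - u[i]) if i < n - 1 else 0.0)
    return new

# A right-moving step stays bounded and monotone with upwind differencing.
u0 = [1.0] * 5 + [0.0] * 5
u1 = upwind_step(u0, a=1.0, dt=0.5, dx=1.0)
print(u1)  # [1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0]
```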

  12. Script-independent text line segmentation in freestyle handwritten documents.

    PubMed

Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2008-08-01

Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine-printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected-component-based methods (e.g., [1], [2]), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
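The paper's probability map plus level-set evolution is beyond a short sketch, but the baseline it improves on, cutting at valleys of the row ink density, fits in a few lines (illustrative only, not the authors' method).

```python
def row_density(image):
    """Fraction of ink pixels in each row of a binary image (1 = ink)."""
    return [sum(row) / len(row) for row in image]

def cut_rows(density, threshold=0.0):
    """Indices of blank rows, i.e. candidate cuts between text lines."""
    return [i for i, d in enumerate(density) if d <= threshold]

# Two 'text lines' separated by a blank row.
img = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],   # gap between the lines
    [1, 0, 1, 1],
]
print(cut_rows(row_density(img)))  # [2]
```

Curvilinear lines defeat exactly this kind of global projection, which is what motivates the density-estimation and level-set machinery of the paper.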

  13. Complexity of line-seru conversion for different scheduling rules and two improved exact algorithms for the multi-objective optimization.

    PubMed

    Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei

    2016-01-01

Productivity can be greatly improved by converting a traditional assembly line to a seru system, especially in business environments with short product life cycles, uncertain product types, and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule for seru load. We select ten scheduling rules commonly used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexity of line-seru conversion for the ten scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting for solving the multi-objective problem, the two improved exact algorithms save considerable computation time. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
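The multi-objective side can be illustrated with a plain non-dominated (Pareto) filter, shown here for minimisation; this is only the baseline enumeration idea, not the authors' improved exact algorithms, and the objective vectors are invented.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated objective vectors (all objectives minimised)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Invented objective vectors, e.g. (makespan, total labour hours) per formation.
sols = [(10, 4), (8, 6), (12, 3), (9, 9)]
print(pareto_front(sols))  # [(10, 4), (8, 6), (12, 3)]
```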

  14. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
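The line-extraction step uses a Hough transform; a bare-bones accumulator over (theta, rho) can be sketched as follows. The colour/texture pixel tests, court-model correspondence search, and gradient-descent refinement of the paper are omitted, and the sample points are synthetic.

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Vote each point into (theta index, rho bin) cells; peaks are lines."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    return acc

# Points on the horizontal line y = 5 vote for theta = 90 deg, rho = 5.
pts = [(x, 5) for x in range(0, 200, 10)]
peak, votes = hough_lines(pts).most_common(1)[0]
print(peak, votes)  # (90, 5) 20
```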

  15. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition result in road lane images. To address the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected domain theory is proposed. First, by analyzing features such as grey-level distribution and the illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and converts them into binary images separately. It then uses connected domain theory to amend the outcome of segmentation, remove noise, and fill the interior zone of lane lines. Experiments have proved that this method can eliminate the influence of illumination and lane line abrasion, removing noise thoroughly while maintaining high segmentation precision.
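A sketch of the connected-domain clean-up step, flood-fill labelling followed by dropping undersized components; illustrative only, since the thresholds and pre-processing of the paper are not reproduced.

```python
def remove_small_components(image, min_size):
    """Zero out 4-connected foreground components smaller than min_size."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                # flood-fill one component
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_size:
                    for y, x in comp:
                        img[y][x] = 0
    return img

noisy = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],   # the lone 1 is a noise speck
    [0, 0, 0, 0],
]
clean = remove_small_components(noisy, min_size=2)
print(clean)  # [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0]]
```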

  16. Characterization of SNPs in the dopamine-β-hydroxylase gene providing new insights into its structure-function relationship.

    PubMed

    Punchaichira, Toyanji Joseph; Dey, Sanjay Kumar; Mukhopadhyay, Anirban; Kundu, Suman; Thelma, B K

    2017-07-01

Dopamine-β-hydroxylase (DBH, EC 1.14.17.1), an oxido-reductase that catalyses the conversion of dopamine to norepinephrine, is largely expressed in sympathetic neurons and the adrenal medulla. Several regulatory and structural variants in DBH associated with various neuropsychiatric and cardiovascular diseases, and a few that may determine enzyme activity, have been identified. Owing to the paucity of studies on the functional characterization of DBH variants, its structure-function relationship is poorly understood. The purpose of this study was to characterize five non-synonymous (ns) variants prioritized either from previous association studies or by the Sorting Intolerant From Tolerant (SIFT) algorithm. The DBH ORF, with the wild type (WT) sequence and site-directed mutagenized variants, was transfected into HEK293 cells to generate transient and stable lines expressing these variant enzymes. Enzyme activity in spent media from the stable cell lines was determined by UPLC-PDA, and the corresponding enzyme quantity by MRM-HR on a TripleTOF 5600 MS. Homospecific activity computed for the WT and variant proteins showed a marginal decrease for the A318S, W544S, and R549C variants. In transient cell lines, differential secretion was observed for L317P, W544S, and R549C. The secretory defect of L317P was confirmed by its localization in the ER. R549C exhibited both decreased homospecific activity and differential secretion. Of note, all the variants were found to be destabilizing based on in silico folding analysis and molecular dynamics (MD) simulation, lending support to our experimental observations. These novel genotype-phenotype correlations in this gene of considerable pharmacological relevance have implications for dopamine-related disorders.

  17. Sensitivity study of voxel-based PET image comparison to image registration algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross

    2014-11-01

Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of treatment plans and treatment response assessment. The comparison may be sensitive to the method of deformable registration, as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. The tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Specifically, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were classified as progressor; all other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for the progressor, stable-disease, and responder subvolumes, respectively. Median DSI_deformable was 63%-84%, 65%-81%, and 82%-89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had a moderate to significant impact on DSI for the progressor subvolume, with R_rigid = -0.60 (p = 0.01) and R_deformable = -0.46 (p = 0.01-0.20) averaging over all deformable algorithms. For stable-disease subvolumes, the correlations were significant (p < 0.001) for all registration algorithms, with R_rigid = -0.71 and R_deformable = -0.72. Progressor and stable-disease subvolumes resulting from rigid registration were in excellent agreement (DSI > 70%) for RMS_rigid < 150 HU. However, tumor deformation was observed to have a negligible effect on DSI for responder subvolumes, with insignificant |R| < 0.26, p > 0.27. Conclusions: This study demonstrated that deformable algorithms cannot be chosen arbitrarily; different deformable algorithms can yield large differences in voxel-based PET image comparison. For low tumor deformation (RMS_rigid < 150 HU), rigid and deformable algorithms yield similar results, suggesting deformable registration is not required for these cases.
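The voxel classification rule and the Dice similarity index (DSI) used in the study are straightforward to restate in code; the voxel values below are invented.

```python
def classify(suv_pre, suv_post):
    """Label voxels by relative SUV change: >30% up = progressor,
    >30% down = responder, otherwise stable-disease."""
    labels = []
    for pre, post in zip(suv_pre, suv_post):
        change = (post - pre) / pre
        if change > 0.30:
            labels.append("progressor")
        elif change < -0.30:
            labels.append("responder")
        else:
            labels.append("stable-disease")
    return labels

def dice(a, b):
    """Dice similarity index between two voxel index sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

pre  = [4.0, 4.0, 4.0, 4.0]
post = [6.0, 4.1, 2.0, 3.9]
print(classify(pre, post))  # ['progressor', 'stable-disease', 'responder', 'stable-disease']
print(dice([0, 1, 2], [1, 2, 3]))  # 0.666...
```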

  18. Accurate computation and continuation of homoclinic and heteroclinic orbits for singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Vaughan, William W.; Friedman, Mark J.; Monteiro, Anand C.

    1993-01-01

    In earlier papers, Doedel and the authors have developed a numerical method and derived error estimates for the computation of branches of heteroclinic orbits for a system of autonomous ordinary differential equations in R(exp n). The idea of the method is to reduce a boundary value problem on the real line to a boundary value problem on a finite interval by using a local (linear or higher order) approximation of the stable and unstable manifolds. A practical limitation for the computation of homoclinic and heteroclinic orbits has been the difficulty in obtaining starting orbits. Typically these were obtained from a closed form solution or via a homotopy from a known solution. Here we consider extensions of our algorithm which allow us to obtain starting orbits on the continuation branch in a more systematic way as well as make the continuation algorithm more flexible. In applications, we use the continuation software package AUTO in combination with some initial value software. The examples considered include computation of homoclinic orbits in a singular perturbation problem and in a turbulent fluid boundary layer in the wall region problem.

  19. Neural network method for lossless two-conductor transmission line equations based on the IELM algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yunlei; Hou, Muzhou; Luo, Jianshu; Liu, Taohua

    2018-06-01

    With the increasing demands for vast amounts of data and high-speed signal transmission, the use of multi-conductor transmission lines is becoming more common. The impact of transmission lines on signal transmission is thus a key issue affecting the performance of high-speed digital systems. To solve the problem of lossless two-conductor transmission line equations (LTTLEs), a neural network model and algorithm are explored in this paper. By selecting the product of two triangular basis functions as the activation function of hidden layer neurons, we can guarantee the separation of time, space, and phase orthogonality. By adding the initial condition to the neural network, an improved extreme learning machine (IELM) algorithm for solving the network weight is obtained. This is different to the traditional method for converting the initial condition into the iterative constraint condition. Calculation software for solving the LTTLEs based on the IELM algorithm is developed. Numerical experiments show that the results are consistent with those of the traditional method. The proposed neural network algorithm can find the terminal voltage of the transmission line and also the voltage of any observation point. It is possible to calculate the value at any given point by using the neural network model to solve the transmission line equation.
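The extreme-learning-machine core, fixing random hidden weights and solving only the output weights by (here, ridge-stabilised) least squares, can be sketched for a 1-D fit. The paper's triangular-basis activations and initial-condition handling are not reproduced; tanh activations and all numbers below are stand-ins.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, n_hidden=8, seed=1):
    """ELM: random hidden layer, output weights from the ridge-stabilised
    normal equations H^T H beta = H^T y."""
    rng = random.Random(seed)
    W = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(n_hidden)]
    H = [[math.tanh(a * x + b) for a, b in W] for x in xs]
    HtH = [[sum(h[p] * h[q] for h in H) for q in range(n_hidden)]
           for p in range(n_hidden)]
    for p in range(n_hidden):
        HtH[p][p] += 1e-6          # tiny ridge term keeps the system well-posed
    Hty = [sum(h[p] * y for h, y in zip(H, ys)) for p in range(n_hidden)]
    beta = solve(HtH, Hty)
    return lambda x: sum(w * math.tanh(a * x + b)
                         for (a, b), w in zip(W, beta))

xs = [i / 20 for i in range(41)]                 # grid on [0, 2]
ys = [math.sin(math.pi * x) for x in xs]
f = elm_fit(xs, ys)                              # f should roughly track sin(pi*x)
```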

  20. Generation Algorithm of Discrete Line in Multi-Dimensional Grids

    NASA Astrophysics Data System (ADS)

    Du, L.; Ben, J.; Li, Y.; Wang, R.

    2017-09-01

A Discrete Global Grid System (DGGS) is a kind of digital multi-resolution earth reference model; in terms of structure, it is conducive to the integration and mining of geographical spatial big data. Vectors are one of the important types of spatial data; only by discretization can they be processed and analysed in a grid system. Based on some constraint conditions, this paper puts forward a strict definition of discrete lines and builds a mathematical model of discrete lines by a base-vector combination method. Using a hyperplane, the problem of mesh discrete lines in n-dimensional grids is transformed into the problem of an optimal deviated path in n-1 dimensions, thereby realizing a dimension-reduction process in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for dimension reduction and generation of the discrete lines. The experimental results show that our algorithm can be applied not only to the two-dimensional rectangular grid but also to the two-dimensional hexagonal grid and the three-dimensional cubic grid. Meanwhile, when applied to the two-dimensional rectangular grid, it obtains a discrete line that is more similar to the corresponding line in Euclidean space.
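For the two-dimensional rectangular grid, the classical point of comparison is integer Bresenham rasterisation; a minimal version that handles any slope (this is the standard baseline, not the paper's multi-dimensional algorithm):

```python
def discrete_line(x0, y0, x1, y1):
    """Cells of a discrete line between two grid points (integer Bresenham)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:        # step in x
            err += dy
            x0 += sx
        if e2 <= dx:        # step in y
            err += dx
            y0 += sy
    return cells

print(discrete_line(0, 0, 4, 2))  # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```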

  1. Applications of fractional lower order S transform time frequency filtering algorithm to machine fault diagnosis

    PubMed Central

    Wang, Haibin; Zha, Daifeng; Li, Peng; Xie, Huicheng; Mao, Lili

    2017-01-01

The Stockwell transform (ST) time-frequency representation (ST-TFR) is a time-frequency analysis method that combines the short-time Fourier transform with the wavelet transform, and the ST time-frequency filtering (ST-TFF) method, which takes advantage of time-frequency localized spectra, can separate signals from Gaussian noise. The ST-TFR and ST-TFF methods are used to analyze fault signals, which is reasonable and effective in the general Gaussian noise case. However, this paper shows that the mechanical bearing fault signal follows an alpha (α) stable distribution (1 < α < 2), and in some special cases the noise is α-stable distributed as well. The performance of the ST-TFR method degrades in an α-stable distribution noise environment, and consequently the ST-TFF method fails. Hence, a new fractional lower order ST time-frequency representation (FLOST-TFR) method employing fractional lower order moments and the ST, together with an inverse FLOST (IFLOST), is proposed in this paper. A new FLOST time-frequency filtering (FLOST-TFF) algorithm based on the FLOST-TFR method and IFLOST is also proposed, along with a simplified version. The discrete implementation of the FLOST-TFF algorithm is deduced, and the relevant steps are summarized. Simulation results demonstrate that the FLOST-TFR algorithm is clearly better than the existing ST-TFR algorithm under α-stable distribution noise, still works well in a Gaussian noise environment, and is robust. The FLOST-TFF method can effectively filter out α-stable distribution noise and restore the original signal. The performance of the FLOST-TFF algorithm is better than that of the ST-TFF method, with smaller mixed MSEs as α and the generalized signal-to-noise ratio (GSNR) change. Finally, the FLOST-TFR and FLOST-TFF methods are applied to analyze an outer race fault signal and extract its fault features under α-stable distribution noise, where excellent performance is shown. PMID:28406916
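The fractional lower-order ingredient can be shown in isolation: the operator |v|^(p-1)·v with p < 1 compresses large impulsive excursions while preserving sign, which is why it tames α-stable noise before the transform. This is a sketch of the preprocessing idea only, not the FLOST pipeline, and p = 0.6 is an arbitrary choice.

```python
def flo(x, p=0.6):
    """Fractional lower-order operator |v|**(p-1) * v, applied element-wise."""
    return [abs(v) ** (p - 1) * v if v != 0 else 0.0 for v in x]

signal = [0.5, -0.5, 100.0]       # 100.0 mimics an impulsive alpha-stable spike
out = flo(signal)
# The spike shrinks from 200x the signal amplitude to roughly 24x.
print(out)
```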

  2. Directional Agglomeration Multigrid Techniques for High Reynolds Number Viscous Flow Solvers

    NASA Technical Reports Server (NTRS)

    1998-01-01

    A preconditioned directional-implicit agglomeration algorithm is developed for solving two- and three-dimensional viscous flows on highly anisotropic unstructured meshes of mixed-element types. The multigrid smoother consists of a pre-conditioned point- or line-implicit solver which operates on lines constructed in the unstructured mesh using a weighted graph algorithm. Directional coarsening or agglomeration is achieved using a similar weighted graph algorithm. A tight coupling of the line construction and directional agglomeration algorithms enables the use of aggressive coarsening ratios in the multigrid algorithm, which in turn reduces the cost of a multigrid cycle. Convergence rates which are independent of the degree of grid stretching are demonstrated in both two and three dimensions. Further improvement of the three-dimensional convergence rates through a GMRES technique is also demonstrated.

  3. Directional Agglomeration Multigrid Techniques for High-Reynolds Number Viscous Flows

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A preconditioned directional-implicit agglomeration algorithm is developed for solving two- and three-dimensional viscous flows on highly anisotropic unstructured meshes of mixed-element types. The multigrid smoother consists of a pre-conditioned point- or line-implicit solver which operates on lines constructed in the unstructured mesh using a weighted graph algorithm. Directional coarsening or agglomeration is achieved using a similar weighted graph algorithm. A tight coupling of the line construction and directional agglomeration algorithms enables the use of aggressive coarsening ratios in the multigrid algorithm, which in turn reduces the cost of a multigrid cycle. Convergence rates which are independent of the degree of grid stretching are demonstrated in both two and three dimensions. Further improvement of the three-dimensional convergence rates through a GMRES technique is also demonstrated.

  4. Study of Double-Weighted Graph Model and Optimal Path Planning for Tourist Scenic Area Oriented Intelligent Tour Guide

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Long, Y.; Wi, X. L.

    2014-04-01

When tourists visit multiple scenic spots, the actual travel line is usually the most effective route through the road network, and it may differ from the planned travel line. In the field of navigation, a proposed travel line is normally generated automatically by a path-planning algorithm that considers the scenic spots' positions and the road network. But when a scenic spot covers a certain area and has multiple entrances or exits, the traditional description by a single point coordinate cannot reflect these structural features. To solve this problem, this paper focuses on the influence of scenic spots' structural features, such as multiple entrances or exits, on the path-planning process, and then proposes a double-weighted graph model in which the weights of both vertices and edges can be selected dynamically. It then discusses the model-building method and an optimal path-planning algorithm based on the Dijkstra and Prim algorithms. Experimental results show that the optimal planned travel line derived from the proposed model and algorithm is more reasonable, and that the travelling order and distance are further optimized.
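Folding per-vertex costs (e.g. dwell time at an entrance or inside a spot) into Dijkstra requires only adding the vertex weight during edge relaxation. A minimal sketch with invented weights; the paper's full model additionally selects the weights dynamically.

```python
import heapq

def dijkstra_double_weighted(edges, vertex_cost, start, goal):
    """Shortest path cost where both edges and visited vertices carry weights."""
    graph = {}
    for u, v, w in edges:
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist = {start: vertex_cost.get(start, 0)}
    heap = [(dist[start], start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue                              # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w + vertex_cost.get(v, 0)    # edge weight + vertex weight
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

edges = [("gate", "spotA", 2), ("gate", "spotB", 5), ("spotA", "spotB", 1)]
cost = {"spotA": 3, "spotB": 1}   # dwell cost of passing through each spot
print(dijkstra_double_weighted(edges, cost, "gate", "spotB"))  # 6
```

With the vertex costs present, the direct gate-to-spotB link (5 + 1 = 6) beats the detour through spotA (2 + 3 + 1 + 1 = 7); with zero vertex costs the detour would win.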

  5. Solving radiative transfer with line overlaps using Gauss-Seidel algorithms

    NASA Astrophysics Data System (ADS)

    Daniel, F.; Cernicharo, J.

    2008-09-01

Context: The improvement in observational facilities requires refining the modelling of the geometrical structures of astrophysical objects. Nevertheless, for complex problems such as line overlap in molecules showing hyperfine structure, a detailed analysis still requires a large amount of computing time, and thus misinterpretation cannot be dismissed due to an undersampling of the whole space of parameters. Aims: We extend the discussion of the implementation of the Gauss-Seidel algorithm in spherical geometry and include the case of hyperfine line overlap. Methods: We first review the basics of the short characteristics method that is used to solve the radiative transfer equations. Details are given on the determination of the Lambda operator in spherical geometry. The Gauss-Seidel algorithm is then described and, by analogy to the plane-parallel case, we see how to introduce it in spherical geometry. Doing so requires some approximations in order to keep the algorithm competitive. Finally, line overlap effects are included. Results: The convergence speed of the algorithm is compared to that of the usual Jacobi iterative schemes. The gain in the number of iterations is typically a factor of 2 and 4 for the two implementations of the Gauss-Seidel algorithm. This is obtained despite the introduction of approximations in the algorithm. A comparison of results obtained with and without line overlap for N2H^+, HCN, and HNC shows that the J=3-2 line intensities are significantly underestimated in models where line overlap is neglected.
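The reported gain over Jacobi can be reproduced on any diagonally dominant toy linear system: Gauss-Seidel reuses freshly updated components within a sweep, Jacobi does not. This sketch is unrelated to the radiative transfer code itself.

```python
def jacobi_sweep(A, b, x):
    """Next iterate built entirely from the previous one."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_sweep(A, b, x):
    """Same update, but each row immediately reuses the freshest values."""
    x = x[:]
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def iterations_to_converge(sweep, A, b, tol=1e-8, max_iter=1000):
    x = [0.0] * len(b)
    for k in range(1, max_iter + 1):
        x_new = sweep(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return k
        x = x_new
    return max_iter

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
nj = iterations_to_converge(jacobi_sweep, A, b)
ng = iterations_to_converge(gauss_seidel_sweep, A, b)
print(nj, ng)   # Gauss-Seidel converges in roughly half the sweeps here
```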

  6. A hybrid algorithm for the segmentation of books in libraries

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Lei, Liang

    2016-05-01

    This paper proposes an algorithm for book segmentation based on bookshelves images. The algorithm can be separated into three parts. The first part is pre-processing, aiming at eliminating or decreasing the effect of image noise and illumination conditions. The second part is near-horizontal line detection based on Canny edge detector, and separating a bookshelves image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation. In each shelf image, near-vertical line is detected, and obtained lines are used for book segmentation. The proposed algorithm was tested with the bookshelf images taken from OPIE library in MTU, and the experimental results demonstrate good performance.

  7. Measuring the self-similarity exponent in Lévy stable processes of financial time series

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.

    2013-11-01

Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551] to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially for short time series. The authors checked that GM algorithms perform well with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid for exploring long memory in (fractional) Lévy stable motions. In this paper, we prove empirically by Monte Carlo simulation that GM algorithms accurately calculate the self-similarity index of Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially for short time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both the GM2 and GHE algorithms are the most accurate for studying financial series. In addition, we provide empirical evidence, based on the accuracy of GM algorithms in estimating the self-similarity index of Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. Small Cap and Nasdaq100, cannot be modelled by means of a Brownian motion.
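Of the estimators compared, the generalized Hurst exponent (GHE, here with q = 1) is the simplest to state: regress the log of the mean absolute increment against the log of the lag. A minimal sketch, not the GM algorithms; the lags are an arbitrary choice.

```python
import math
import random

def ghe(series, taus=(1, 2, 4, 8, 16)):
    """Generalized Hurst exponent (q = 1) from the scaling of |X(t+tau) - X(t)|."""
    xs, ys = [], []
    for tau in taus:
        incs = [abs(series[i + tau] - series[i]) for i in range(len(series) - tau)]
        xs.append(math.log(tau))
        ys.append(math.log(sum(incs) / len(incs)))
    # least-squares slope of ys on xs is the exponent estimate
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(7)
bm = [0.0]
for _ in range(20_000):
    bm.append(bm[-1] + random.gauss(0, 1))
h = ghe(bm)
print(h)   # should be close to 0.5 for Brownian motion
```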

  8. Methodology for the Evaluation of the Algorithms for Text Line Segmentation Based on Extended Binary Classification

    NASA Astrophysics Data System (ADS)

    Brodic, D.

    2011-01-01

Text line segmentation represents a key element of the optical character recognition process; hence, testing of text line segmentation algorithms has substantial relevance. Previously proposed testing methods deal mainly with a text database used as a template, which serves both for testing and for the evaluation of the text segmentation algorithm. In this manuscript, a methodology for the evaluation of text segmentation algorithms based on extended binary classification is proposed. It is built on various multiline text samples linked with text segmentation; their results are distributed according to binary classification, and the final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts represents its main advantage.

  9. AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.

    PubMed

    Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou

    2017-01-01

In this report, AI-BL1.0, an open-source LabVIEW-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are the genetic algorithm and differential evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and the 1W1B beamline of the Beijing Synchrotron Radiation Facility.
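The differential evolution loop at the heart of such optimizers is compact; a generic DE/rand/1/bin sketch, not the AI-BL1.0 code, with a two-parameter quadratic standing in for the beamline objective.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=3):
    """DE/rand/1/bin with greedy selection and bound clipping."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                     if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            if f(trial) < f(pop[i]):          # keep the better of parent/trial
                pop[i] = trial
    return min(pop, key=f)

# Toy 'beamline': minimise a quadratic in two motor positions (optimum (1, -2)).
best = differential_evolution(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                              bounds=[(-5, 5), (-5, 5)])
```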

  10. Reference Gene Validation for RT-qPCR, a Note on Different Available Software Packages

    PubMed Central

    De Spiegelaere, Ward; Dern-Wieloch, Jutta; Weigel, Roswitha; Schumacher, Valérie; Schorle, Hubert; Nettersheim, Daniel; Bergmann, Martin; Brehm, Ralph; Kliesch, Sabine; Vandekerckhove, Linos; Fink, Cornelia

    2015-01-01

Background: An appropriate normalization strategy is crucial for data analysis from real-time reverse transcription polymerase chain reactions (RT-qPCR). It is widely recommended to identify and validate stable reference genes, since no single gene is stably expressed across cell types or conditions. Different algorithms exist to validate optimal reference genes for normalization. Using human cells, we here compare the three main methods with the online RefFinder tool, which integrates these algorithms, as well as with R-based software packages that include the NormFinder and geNorm algorithms. Results: 14 candidate reference genes were assessed by RT-qPCR in two sample sets, i.e. a set of samples of human testicular tissue containing carcinoma in situ (CIS), and a set of samples from the human adult Sertoli cell line (FS1) either cultured alone or in co-culture with the seminoma-like cell line (TCam-2) or with equine bone-marrow-derived mesenchymal stem cells (eBM-MSC). Expression stabilities of the reference genes were evaluated using geNorm, NormFinder, and BestKeeper. Similar results were obtained by the three approaches for the most and least stably expressed genes. The R-based packages NormqPCR, SLqPCR and the NormFinder for R script gave identical gene rankings. Interestingly, different outputs were obtained between the original software packages and the RefFinder tool, which is based on raw Cq values for input. When the raw data were reanalysed assuming 100% efficiency for all genes, the outputs of the original software packages were similar to those of RefFinder, indicating that RefFinder outputs may be biased because PCR efficiencies are not taken into account. Conclusions: This report shows that assay efficiency is an important parameter for reference gene validation. New software tools that incorporate these algorithms should be carefully validated prior to use. PMID:25825906

  11. Reference gene validation for RT-qPCR, a note on different available software packages.

    PubMed

    De Spiegelaere, Ward; Dern-Wieloch, Jutta; Weigel, Roswitha; Schumacher, Valérie; Schorle, Hubert; Nettersheim, Daniel; Bergmann, Martin; Brehm, Ralph; Kliesch, Sabine; Vandekerckhove, Linos; Fink, Cornelia

    2015-01-01

An appropriate normalization strategy is crucial for data analysis from real-time reverse transcription polymerase chain reactions (RT-qPCR). It is widely recommended to identify and validate stable reference genes, since no single gene is stably expressed across cell types or conditions. Different algorithms exist to validate optimal reference genes for normalization. Using human cells, we here compare the three main methods with the online RefFinder tool, which integrates these algorithms, as well as with R-based software packages that include the NormFinder and geNorm algorithms. 14 candidate reference genes were assessed by RT-qPCR in two sample sets, i.e. a set of samples of human testicular tissue containing carcinoma in situ (CIS), and a set of samples from the human adult Sertoli cell line (FS1) either cultured alone or in co-culture with the seminoma-like cell line (TCam-2) or with equine bone-marrow-derived mesenchymal stem cells (eBM-MSC). Expression stabilities of the reference genes were evaluated using geNorm, NormFinder, and BestKeeper. Similar results were obtained by the three approaches for the most and least stably expressed genes. The R-based packages NormqPCR, SLqPCR and the NormFinder for R script gave identical gene rankings. Interestingly, different outputs were obtained between the original software packages and the RefFinder tool, which is based on raw Cq values for input. When the raw data were reanalysed assuming 100% efficiency for all genes, the outputs of the original software packages were similar to those of RefFinder, indicating that RefFinder outputs may be biased because PCR efficiencies are not taken into account. This report shows that assay efficiency is an important parameter for reference gene validation. New software tools that incorporate these algorithms should be carefully validated prior to use.
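The geNorm stability measure M that these packages compute is easy to restate: for each gene, average the standard deviation of its log2 expression ratios against every other candidate; lower M means more stable. A sketch on invented relative quantities (note that geNorm expects efficiency-corrected input, which is exactly the paper's point).

```python
import math
import statistics

def genorm_m(quantities):
    """geNorm-style stability M per gene (gene -> per-sample relative
    quantities); M is the mean stdev of pairwise log2 ratios."""
    genes = list(quantities)
    m = {}
    for g in genes:
        sds = [statistics.stdev(math.log2(a / b)
                                for a, b in zip(quantities[g], quantities[h]))
               for h in genes if h != g]
        m[g] = sum(sds) / len(sds)
    return m

q = {
    "GAPDH": [1.0, 2.0, 4.0],     # co-varies with ACTB -> both look stable
    "ACTB":  [1.1, 2.1, 4.2],
    "GENE3": [1.0, 8.0, 0.5],     # erratic across samples -> unstable
}
m = genorm_m(q)
print(m)
```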

  12. Efficient and Stable Routing Algorithm Based on User Mobility and Node Density in Urban Vehicular Network

    PubMed Central

    Al-Mayouf, Yusor Rafid Bahar; Ismail, Mahamod; Abdullah, Nor Fadzilah; Wahab, Ainuddin Wahid Abdul; Mahdi, Omar Adil; Khan, Suleman; Choo, Kim-Kwang Raymond

    2016-01-01

    Vehicular ad hoc networks (VANETs) are considered an emerging technology in the industrial and educational fields. This technology is essential to the deployment of intelligent transportation systems, which aim to improve the safety and efficiency of traffic. VANETs can be implemented effectively by transmitting data among vehicles over multiple hops. However, the intrinsic characteristics of VANETs, such as their dynamic network topology and intermittent connectivity, limit data delivery. One particular challenge of this network is that a contributing node may remain in the network for only a limited time. Hence, to prevent data loss from that node, the information must reach the destination node via multi-hop routing techniques. An appropriate, efficient, and stable routing algorithm must be developed for various VANET applications to address the issues of dynamic topology and intermittent connectivity. Therefore, this paper proposes a novel routing algorithm called efficient and stable routing algorithm based on user mobility and node density (ESRA-MD). The proposed algorithm can adapt to significant changes that may occur in the urban vehicular environment. The algorithm selects an optimal route on the basis of hop count and link duration for delivering data from source to destination, thereby satisfying various quality-of-service considerations. The validity of the proposed algorithm is investigated by comparison with the ARP-QD protocol, which works on the mechanism of optimal route finding in VANETs in urban environments. Simulation results reveal that the proposed ESRA-MD algorithm shows remarkable improvement in terms of delivery ratio, delivery delay, and communication overhead. PMID:27855165

  13. Automated Robot Movement in the Mapped Area Using Fuzzy Logic for Wheel Chair Application

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Efendi, S.; Ramadhana, H.; Andayani, U.; Fahmi, F.

    2018-03-01

    The difficulties the disabled face in moving can prevent them from living independently. People with disabilities need a supporting device to move from place to place. We therefore propose a solution that helps people with disabilities move from one room to another automatically. This study aims to create a wheelchair prototype in the form of a wheeled robot as a means to study automatic mobilization. A fuzzy logic algorithm was used to determine the motion direction based on the initial position, with ultrasonic sensor readings used for avoiding obstacles, infrared sensor readings of a black guide line used to keep the wheeled robot moving smoothly, and a smartphone serving as the mobile controller. As a result, smartphones running the Android operating system can control the robot via Bluetooth, at distances of up to 15 meters. The proposed algorithm worked stably for automatic motion determination based on the initial position, and was able to automate wheelchair movement from one room to another.
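
    The fuzzy direction step can be illustrated with a minimal sketch: triangular memberships over ultrasonic distance readings and a small rule base defuzzified by a weighted average. The membership shapes, rule set, and steering angles below are illustrative assumptions, not the study's actual controller.

```python
# Hypothetical sketch of fuzzy direction selection from three ultrasonic
# distance readings (cm); memberships, rules, and angles are illustrative.

def tri(x, a, b, c):
    """Triangular membership rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def near(d):
    """Degree to which a distance reading means 'obstacle close'."""
    return tri(d, -1.0, 0.0, 50.0)

def far(d):
    """Degree to which a distance reading means 'path clear'."""
    return min(1.0, max(0.0, (d - 30.0) / 70.0))

def steer(left, front, right):
    """Evaluate the rule base and defuzzify to a steering angle in degrees
    (negative = turn left, 0 = straight ahead)."""
    rules = [
        (far(front), 0.0),                     # front clear -> go straight
        (min(near(front), far(left)), -45.0),  # blocked ahead, left clear
        (min(near(front), far(right)), 45.0),  # blocked ahead, right clear
    ]
    den = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / den if den else 0.0
```

    A clear path ahead yields 0 degrees; a blocked front with a clear side steers toward that side.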

  14. HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual

    DTIC Science & Technology

    1991-05-01

    algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data set...Description of How DSS Works DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows...balancing disk space and record access times. A variation in this algorithm is for "stable" files. In a stable file, a hash table is not utilized

  15. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    NASA Astrophysics Data System (ADS)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
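
    The final detection step relies on the Hough transform, which can be sketched as a plain rho-theta accumulator vote over candidate pixels; this is the generic textbook form, not the paper's implementation (bin counts and image size below are arbitrary).

```python
import numpy as np

def hough_lines(points, img_diag, n_theta=180, n_rho=200):
    """Vote each (x, y) point into a (rho, theta) accumulator and
    return the (rho, theta) of the strongest line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)  # rho for every theta
        idx = np.round((r + img_diag) / (2 * img_diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[i], thetas[j]

# points lying on the horizontal line y = 5
pts = [(x, 5) for x in range(0, 50)]
rho, theta = hough_lines(pts, img_diag=100)
```

    All collinear points vote into the same (rho, theta) bin, so the accumulator peak recovers the line (here theta near pi/2 and rho near 5, up to bin resolution).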

  16. AUTONOMOUS GAUSSIAN DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H I line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
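
    The derivative-spectroscopy idea behind AGD's initial guesses can be sketched on a toy two-component spectrum: component centres appear as strongly negative local minima of the second derivative. The threshold and the noise-free setup below are illustrative assumptions, not the published AGD code.

```python
import numpy as np

# Toy spectrum: two Gaussian components at x = -3 and x = +2.
x = np.linspace(-10, 10, 2000)
spectrum = (np.exp(-0.5 * ((x + 3.0) / 0.8) ** 2)
            + 0.7 * np.exp(-0.5 * ((x - 2.0) / 1.2) ** 2))

# Second derivative: each Gaussian centre is a negative local minimum.
d2 = np.gradient(np.gradient(spectrum, x), x)

# Keep local minima of d2 that are sufficiently negative (threshold assumed).
interior = np.arange(1, len(x) - 1)
is_min = (d2[interior] < d2[interior - 1]) & (d2[interior] < d2[interior + 1])
cands = interior[is_min & (d2[interior] < 0.25 * d2.min())]
centres = x[cands]   # candidate component locations
```

    On noisy data the derivatives would first be smoothed (regularised), which is where the machine-learning-tuned parameters of AGD enter.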

  17. Dynamic electrical impedance imaging with the interacting multiple model scheme.

    PubMed

    Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C

    2005-04-01

    In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.

  18. Fast direct fourier reconstruction of radial and PROPELLER MRI data using the chirp transform algorithm on graphics hardware.

    PubMed

    Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan

    2013-10-01

    To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for routine clinical applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories that are composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT was evaluated by using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementing the Chirp transform algorithm-DrFT on the graphics processing unit can efficiently calculate the DrFT reconstruction of radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
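
    The core of the chirp transform trick (Bluestein's algorithm) is rewriting the DFT exponent nk as (n^2 + k^2 - (k-n)^2)/2, which turns the transform into a convolution that power-of-two FFTs evaluate efficiently for any length. A generic sketch (not the paper's GPU implementation):

```python
import numpy as np

def czt_dft(x):
    """DFT of arbitrary-length x via Bluestein's chirp transform."""
    n = len(x)
    k = np.arange(n)
    w = np.exp(-1j * np.pi * k**2 / n)        # chirp factors
    m = 1 << (2 * n - 1).bit_length()         # power-of-two FFT length >= 2n-1
    a = np.zeros(m, complex)
    a[:n] = x * w                             # pre-multiplied input
    b = np.zeros(m, complex)
    chirp_conj = np.exp(1j * np.pi * k**2 / n)
    b[:n] = chirp_conj                        # kernel c_j for j = 0..n-1
    b[m - n + 1:] = chirp_conj[1:][::-1]      # and wrapped j = -(n-1)..-1
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))  # circular convolution
    return w * conv[:n]                       # post-multiply by the chirp

x = np.random.default_rng(0).standard_normal(13)
```

    For a length-13 input, `czt_dft(x)` matches `np.fft.fft(x)` to machine precision while using only length-32 FFTs internally.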

  19. Automated detection of jet contrails using the AVHRR split window

    NASA Technical Reports Server (NTRS)

    Engelstad, M.; Sengupta, S. K.; Lee, T.; Welch, R. M.

    1992-01-01

    This paper investigates the automated detection of jet contrails using data from the Advanced Very High Resolution Radiometer. A preliminary algorithm subtracts the 11.8-micron image from the 10.8-micron image, creating a difference image on which contrails are enhanced. Then a three-stage algorithm searches the difference image for the nearly-straight line segments which characterize contrails. First, the algorithm searches for elevated, linear patterns called 'ridges'. Second, it applies a Hough transform to the detected ridges to locate nearly-straight lines. Third, the algorithm determines which of the nearly-straight lines are likely to be contrails. The paper applies this technique to several test scenes.

  20. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  1. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    PubMed Central

    Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun

    2014-01-01

    In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master substation the amplitudes of the fault component currents of the transmission lines closest to the fault element. The master substation then identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master substation identifies the actual faulty line with improved D-S evidence theory, based on the action states of traditional protections and the direction components of these suspicious faulty lines. Simulation examples based on the IEEE 10-generator-39-bus system show that the proposed WABP algorithm has excellent performance. The algorithm has a low requirement for sampling synchronization, small wide-area communication flow, and high fault tolerance. PMID:25050399
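
    The evidence-fusion step rests on Dempster's rule of combination, which the paper modifies. A minimal sketch of the classical (unimproved) rule over a two-hypothesis frame, with masses as dicts over frozenset hypotheses:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: multiply masses over intersecting hypothesis sets,
    then renormalise by 1 - K where K is the total conflicting mass."""
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict                   # normalisation constant
    return {s: v / k for s, v in raw.items()}

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, A | B: 0.4}                # evidence source 1
m2 = {A: 0.5, B: 0.3, A | B: 0.2}        # evidence source 2
m = combine(m1, m2)
```

    Here the conflicting mass is K = 0.6 x 0.3 = 0.18, and the combined belief concentrates on hypothesis A (0.62/0.82). Improved variants, like the paper's, redistribute conflict differently to stay robust when sources disagree strongly.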

  2. Handwritten text line segmentation by spectral clustering

    NASA Astrophysics Data System (ADS)

    Han, Xuecheng; Yao, Hui; Zhong, Guoqiang

    2017-02-01

    Since handwritten text lines are generally skewed and not obviously separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering on this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
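
    The spectral step can be sketched on a toy point set standing in for pixel coordinates. The RBF similarity, normalised Laplacian, and Fiedler-vector sign split below are standard textbook choices assumed for illustration; the paper's orthogonal k-means step is replaced here by a simple sign threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)),       # pixels of "line 1"
                 rng.normal(5, 0.3, (20, 2))])      # pixels of "line 2"

# RBF similarity matrix from pairwise squared distances.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)

# Symmetric normalised Laplacian: L = I - D^{-1/2} W D^{-1/2}.
D = W.sum(axis=1)
L = np.eye(len(pts)) - W / np.sqrt(np.outer(D, D))

# The sign of the second-smallest eigenvector (Fiedler vector) splits the
# two nearly disconnected similarity blocks into two clusters.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

    With k > 2 text lines, one would instead take the first k eigenvectors and cluster their rows, which is where the k-means-style step enters.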

  3. Generating mammalian stable cell lines by electroporation.

    PubMed

    A Longo, Patti; Kavran, Jennifer M; Kim, Min-Sung; Leahy, Daniel J

    2013-01-01

    Expression of functional, recombinant mammalian proteins often requires expression in mammalian cells (see Single Cell Cloning of a Stable Mammalian Cell Line). If the expressed protein needs to be made frequently, it is often best to generate a stable cell line instead of performing repeated transient transfections into mammalian cells. Here, we describe a method to generate stable cell lines via electroporation followed by selection steps. This protocol is limited to the CHO dhfr- (Urlaub et al., 1983) and LEC1 cell lines, which in our experience perform best with this method. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.

  5. A modified three-term PRP conjugate gradient algorithm for optimization models.

    PubMed

    Wu, Yanlin

    2017-01-01

    The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but the proof fails under the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property also holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
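
    A compact sketch of a Zhang-et-al.-style three-term PRP direction with Armijo backtracking, run on a small convex quadratic; this is an illustrative reconstruction of the generic scheme, not the authors' exact modified algorithm. The three-term update guarantees g'd = -||g||^2, i.e. sufficient descent independent of the line search.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])    # SPD Hessian of the test problem
b = np.array([1.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

x = np.zeros(2)
g = grad(x)
d = -g                                    # first direction: steepest descent
for _ in range(100):
    if np.linalg.norm(g) < 1e-10:
        break
    t = 1.0                               # Armijo backtracking line search
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    x_new = x + t * d
    g_new = grad(x_new)
    y = g_new - g
    beta = (g_new @ y) / (g @ g)          # PRP coefficient
    theta = (g_new @ d) / (g @ g)
    d = -g_new + beta * d - theta * y     # three-term direction; this choice
    x, g = x_new, g_new                   # of theta yields g'd = -||g||^2
```

    On this quadratic the iterates converge to the exact minimiser A^{-1}b.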

  6. A fast hidden line algorithm for plotting finite element models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.

    1982-01-01

    Effective plotting of finite element models requires the use of fast hidden line plot techniques that provide interactive response. A high speed hidden line technique was developed to facilitate the plotting of NASTRAN finite element models. Based on testing using 14 different models, the new hidden line algorithm (JONES-D) appears to be very fast: its speed equals that for normal (all lines visible) plotting and when compared to other existing methods it appears to be substantially faster. It also appears to be very reliable: no plot errors were observed using the new method to plot NASTRAN models. The new algorithm was made part of the NPLOT NASTRAN plot package and was used by structural analysts for normal production tasks.

  7. Evaluation of HIV testing algorithms in Ethiopia: the role of the tie-breaker algorithm and weakly reacting test lines in contributing to a high rate of false positive HIV diagnoses.

    PubMed

    Shanks, Leslie; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Pirou, Erwan; Ritmeijer, Koert; Masiga, Johnson; Abebe, Almaz

    2015-02-03

    In Ethiopia a tiebreaker algorithm using 3 rapid diagnostic tests (RDTs) in series is used to diagnose HIV. Discordant results between the first 2 RDTs are resolved by a third 'tiebreaker' RDT. Médecins Sans Frontières uses an alternate serial algorithm of 2 RDTs followed by a confirmation test for all double positive RDT results. The primary objective was to compare the performance of the tiebreaker algorithm with a serial algorithm, and to evaluate the addition of a confirmation test to both algorithms. A secondary objective looked at the positive predictive value (PPV) of weakly reactive test lines. The study was conducted in two HIV testing sites in Ethiopia. Study participants were recruited sequentially until 200 positive samples were reached. Each sample was re-tested in the laboratory on the 3 RDTs and on a simple to use confirmation test, the Orgenics Immunocomb Combfirm® (OIC). The gold standard test was the Western Blot, with indeterminate results resolved by PCR testing. 2620 subjects were included with a HIV prevalence of 7.7%. Each of the 3 RDTs had an individual specificity of at least 99%. The serial algorithm with 2 RDTs had a single false positive result (1 out of 204) to give a PPV of 99.5% (95% CI 97.3%-100%). The tiebreaker algorithm resulted in 16 false positive results (PPV 92.7%, 95% CI: 88.4%-95.8%). Adding the OIC confirmation test to either algorithm eliminated the false positives. All the false positives had at least one weakly reactive test line in the algorithm. The PPV of weakly reacting RDTs was significantly lower than those with strongly positive test lines. The risk of false positive HIV diagnosis in a tiebreaker algorithm is significant. 
We recommend abandoning the tie-breaker algorithm in favour of WHO recommended serial or parallel algorithms, interpreting weakly reactive test lines as indeterminate results requiring further testing except in the setting of blood transfusion, and most importantly, adding a confirmation test to the RDT algorithm. It is now time to focus research efforts on how best to translate this knowledge into practice at the field level. Clinical Trial registration #: NCT01716299.

  8. An index-based algorithm for fast on-line query processing of latent semantic analysis

    PubMed Central

    Li, Pohan; Wang, Wei

    2017-01-01

    Latent Semantic Analysis (LSA) is widely used for finding documents whose semantics are similar to a query of keywords. Although LSA yields promising similarity results, the existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in terms of time cost and cannot respond efficiently to query requests, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA towards efficiently searching the documents similar to a given query. We rewrite the similarity equation of LSA in terms of an intermediate value called partial similarity that is stored in a designed index called the partial index. To reduce the search space, we give an approximate form of the similarity equation, and then develop an efficient algorithm for building the partial index, which skips the partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes that correspond to non-zero entries in the pseudo document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping operations that contribute little to similarity scores. Extensive experiments through comparison with LSA demonstrate the efficiency and effectiveness of our proposed algorithm. PMID:28520747
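
    The partial-index idea can be sketched in a simplified form: store, per LSA dimension, only the document weights whose magnitude reaches a threshold, then score a query by accumulating the surviving partial products as an approximation of the full dot product. The data, threshold, and names below are illustrative, not the paper's ILSA implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
docs = rng.standard_normal((100, 32))     # documents in a 32-dim LSA space
theta = 0.5                               # partial-similarity threshold

# Partial index: dimension -> [(doc_id, weight)] with small weights skipped.
index = {i: [(j, docs[j, i]) for j in range(len(docs))
             if abs(docs[j, i]) >= theta]
         for i in range(docs.shape[1])}

def query(q):
    """Approximate top-1 search: accumulate partial similarities only from
    index nodes at the query's non-zero dimensions."""
    scores = np.zeros(len(docs))
    for i, qi in enumerate(q):
        if qi == 0.0:
            continue
        for j, dij in index[i]:
            scores[j] += qi * dij         # accumulate partial similarity
    return int(scores.argmax())

q = docs[42] * 1.1                        # a query close to document 42
best = query(q)
```

    Skipping sub-threshold partials trades a small approximation error for touching far fewer index entries per query; here the nearest document is still recovered.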

  9. An index-based algorithm for fast on-line query processing of latent semantic analysis.

    PubMed

    Zhang, Mingxi; Li, Pohan; Wang, Wei

    2017-01-01

    Latent Semantic Analysis (LSA) is widely used for finding documents whose semantics are similar to a query of keywords. Although LSA yields promising similarity results, the existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in terms of time cost and cannot respond efficiently to query requests, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA towards efficiently searching the documents similar to a given query. We rewrite the similarity equation of LSA in terms of an intermediate value called partial similarity that is stored in a designed index called the partial index. To reduce the search space, we give an approximate form of the similarity equation, and then develop an efficient algorithm for building the partial index, which skips the partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes that correspond to non-zero entries in the pseudo document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping operations that contribute little to similarity scores. Extensive experiments through comparison with LSA demonstrate the efficiency and effectiveness of our proposed algorithm.

  10. Genetic Algorithm for Multiple Bus Line Coordination on Urban Arterial

    PubMed Central

    Yang, Zhen; Wang, Wei; Chen, Shuyan; Ding, Haoyang; Li, Xiaowei

    2015-01-01

    Bus travel time on a road section is defined and analyzed with the effect of multiple bus lines. An analytical model is formulated to calculate the total red time a bus encounters when travelling along the arterial. A genetic algorithm is used to optimize the offset scheme of the traffic signals to minimize the total red time that all bus lines encounter in the two directions of the arterial. The model and algorithm are applied to the major part of Zhongshan North Street in the city of Nanjing. The results show that the methods in this paper can reduce the total red time of all the bus lines by 31.9% on the studied arterial, and thus improve the traffic efficiency of the whole arterial and promote public transport priority. PMID:25663837
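
    The offset-optimisation loop can be sketched with a toy genetic algorithm. The objective below is a stand-in "total red time" that is smallest when each intersection's offset matches an assumed green-wave target modulo the cycle; the paper's analytical red-time model, encoding, and operators are not reproduced here.

```python
import random

CYCLE, N = 60, 5                         # cycle length (s), intersections
TARGET = [0, 12, 24, 36, 48]             # assumed ideal progression offsets

def red_time(offsets):
    """Toy objective: circular distance of each offset from its target."""
    return sum(min((o - t) % CYCLE, (t - o) % CYCLE)
               for o, t in zip(offsets, TARGET))

def evolve(pop_size=40, gens=200):
    random.seed(3)
    pop = [[random.randrange(CYCLE) for _ in range(N)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=red_time)
        keep = pop[:pop_size // 2]       # truncation selection
        children = []
        while len(keep) + len(children) < pop_size:
            a, b = random.sample(keep, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]    # one-point crossover
            if random.random() < 0.3:    # mutation: re-draw one offset
                child[random.randrange(N)] = random.randrange(CYCLE)
            children.append(child)
        pop = keep + children
    return min(pop, key=red_time)

best = evolve()
```

    Because the toy objective is separable per intersection, selection and crossover assemble near-target offsets quickly; a realistic objective couples the offsets through bus travel times between intersections.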

  11. A numerically-stable algorithm for calibrating single six-ports for national microwave reflectometry

    NASA Astrophysics Data System (ADS)

    Hodgetts, T. E.

    1990-11-01

    A full description and analysis is given of the numerically stable algorithm currently used for calibrating single six-ports (or multi-states) for national microwave reflectometry, employing as standards four one-port devices having known voltage reflection coefficients.

  12. A Minimum Path Algorithm Among 3D-Polyhedral Objects

    NASA Astrophysics Data System (ADS)

    Yeltekin, Aysin

    1989-03-01

    In this work we introduce a minimum path theorem for the 3D case and develop an algorithm based on the theorem we prove. The algorithm is implemented in a software package we developed in the C language. The theorem states that, given an initial point I, a final point F, and a set S of finitely many static obstacles, an optimal path P from I to F such that P ∩ S = ∅ is composed of straight line segments which are perpendicular to the edge segments of the objects. We prove the theorem and develop the following algorithm, based on it, to find the minimum path among 3D polyhedral objects. The algorithm generates the point Qi on edge ei such that at Qi one can find the line which is perpendicular to both the edge and the IF line. The algorithm iteratively provides a new set of initial points from the Qi and explores all possible paths. It then chooses the minimum path among the possible ones. The flowchart of the program as well as an examination of its numerical properties are included.
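
    The construction of the via points Qi can be illustrated with a standard point-to-segment projection (a hypothetical helper, not the paper's code): the foot of the perpendicular from a point onto an obstacle edge, clamped to the edge's endpoints.

```python
import numpy as np

def foot_on_segment(p, a, b):
    """Closest point to p on the segment from a to b: the foot of the
    perpendicular, clamped into the segment when it falls outside."""
    p, a, b = np.array(p, float), np.array(a, float), np.array(b, float)
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

# perpendicular foot of (0, 0, 5) on the edge from (-1, 2, 0) to (3, 2, 0)
q = foot_on_segment([0, 0, 5], [-1, 2, 0], [3, 2, 0])
```

    Applied to each obstacle edge, this yields the candidate points from which the algorithm iterates.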

  13. Parallel Monte Carlo Search for Hough Transform

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing: in particular, how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform converts the problem of line detection into one of optimization of the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, its detection effectiveness can be reduced in the presence of noise. Our first contribution consists in an evaluation of the use of a variation of the Radon Transform as a means of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.

  14. Tele-Autonomous control involving contact. Final Report Thesis; [object localization

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.

    1990-01-01

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. It uses dual-number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties the operator faces when controlling a remote robot to perform manipulation tasks are also discussed. The main problems are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.

  15. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  16. Establishment of stable cell line for inducing KAP1 protein expression.

    PubMed

    Liu, Xiaoyan; Khan, Md Asaduzzaman; Cheng, Jingliang; Wei, Chunli; Zhang, Lianmei; Fu, Junjiang

    2015-06-01

    Generation of stable cell lines in which particular genes or proteins are inducibly expressed is a highly efficient tool for functional studies. The KRAB-associated protein-1 (KAP1) is an important transcription regulatory protein that has been investigated in several molecular biology studies. In this study, we aimed to generate a stable cell line for inducible KAP1 expression. The recombinant plasmid pcDNA5/FRT/TO-KAP1 was constructed first and then transfected into Flp-In™ T-REx™ HEK293 cells to establish an inducible pcDNA5/FRT/TO-KAP1-HEK293 cell line. Western blot analysis showed that the KAP1 protein is over-expressed in the established stable cell line upon doxycycline induction, in both a dose- and time-dependent manner. Thus we have successfully established a stable pcDNA5/FRT/TO-KAP1-HEK293 cell line that can express KAP1 inducibly. This inducible cell line may be very useful for KAP1 functional studies.

  17. Increasing signal processing sophistication in the calculation of the respiratory modulation of the photoplethysmogram (DPOP).

    PubMed

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-06-01

    DPOP (∆POP or Delta-POP) is a non-invasive parameter which measures the strength of respiratory modulations present in the pulse oximetry photoplethysmogram (pleth) waveform. It has been proposed as a non-invasive surrogate parameter for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. Many groups have reported on the DPOP parameter and its correlation with PPV using various semi-automated algorithmic implementations. The study reported here demonstrates the performance gains made by adding increasingly sophisticated signal processing components to a fully automated DPOP algorithm. A DPOP algorithm was coded and its performance systematically enhanced through a series of code module alterations and additions. Each algorithm iteration was tested on data from 20 mechanically ventilated OR patients. Correlation coefficients and ROC curve statistics were computed at each stage. For the purposes of the analysis we split the data into a manually selected 'stable' region subset of the data containing relatively noise free segments and a 'global' set incorporating the whole data record. Performance gains were measured in terms of correlation against PPV measurements in OR patients undergoing controlled mechanical ventilation. Through increasingly advanced pre-processing and post-processing enhancements to the algorithm, the correlation coefficient between DPOP and PPV improved from a baseline value of R = 0.347 to R = 0.852 for the stable data set, and, correspondingly, R = 0.225 to R = 0.728 for the more challenging global data set. Marked gains in algorithm performance are achievable for manually selected stable regions of the signals using relatively simple algorithm enhancements. Significant additional algorithm enhancements, including a correction for low perfusion values, were required before similar gains were realised for the more challenging global data set.
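    The quantity itself is simple to state; the paper's contribution lies in the automated pre- and post-processing around it. Below is a minimal sketch of the commonly cited DPOP definition, computed from per-beat pulse amplitudes over one respiratory cycle (beat detection, filtering, and the low-perfusion correction are all omitted; the function name is illustrative):

```python
def dpop(beat_amplitudes):
    """DPOP (%) from per-beat pulse-oximetry amplitudes over one
    respiratory cycle, using the commonly cited definition
        DPOP = (POPmax - POPmin) / ((POPmax + POPmin) / 2) * 100.
    A sketch only, not the authors' full automated pipeline."""
    pop_max = max(beat_amplitudes)
    pop_min = min(beat_amplitudes)
    return 100.0 * (pop_max - pop_min) / ((pop_max + pop_min) / 2.0)
```

    A larger DPOP indicates stronger respiratory modulation of the pleth amplitude, which is what is correlated against PPV.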

  18. A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng

    2017-12-01

    A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (Unmanned Aerial Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (conjunction nodes) of the initial network are relocated, since some of them lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and performs better than the state-of-the-art methods on these datasets.
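    The refinement step can be illustrated with a generic weighted A* over a cost grid, where f = g + w·h with w ≥ 1 inflates the heuristic to trade optimality for speed. This is an assumption-laden stand-in for the paper's seam-line refinement: here high-cost cells play the role of building/tree edges from the DSM edge diagram that seam-lines should avoid.

```python
import heapq
import itertools

def weighted_astar(cost, start, goal, w=2.0):
    """Weighted A* over a 2-D cost grid (rows x cols of per-cell entry
    costs). Returns a list of (row, col) cells from start to goal, or
    None if unreachable. A sketch, not the authors' implementation."""
    rows, cols = len(cost), len(cost[0])
    tie = itertools.count()                       # heap tie-breaker

    def h(p):                                     # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    g = {start: 0.0}
    parent = {start: None}
    heap = [(w * h(start), next(tie), start)]
    closed = set()
    while heap:
        _, _, cur = heapq.heappop(heap)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                           # reconstruct the seam-line path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and nb not in closed:
                ng = g[cur] + cost[nb[0]][nb[1]]  # cost of entering the cell
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    parent[nb] = cur
                    heapq.heappush(heap, (ng + w * h(nb), next(tie), nb))
    return None                                   # no path
```

    On a grid where the middle column is expensive (an "edge" to avoid), the returned path detours around it rather than crossing.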

  19. Differential effects on apoptosis induction in hepatocyte lines by stable expression of hepatitis B virus X protein

    PubMed Central

    Fiedler, Nicola; Quant, Ellen; Fink, Ludger; Sun, Jianguang; Schuster, Ralph; Gerlich, Wolfram H; Schaefer, Stephan

    2006-01-01

    AIM: Hepatitis B virus protein X (HBx) has been shown to be weakly oncogenic in vitro. The transforming activities of HBx have been linked with the inhibition of several functions of the tumor suppressor p53. We have studied whether HBx may have different effects on p53 depending on the cell type. METHODS: We used the human hepatoma cell line HepG2 and the immortalized murine hepatocyte line AML12 and analyzed stably transfected clones which expressed physiological amounts of HBx. P53 was induced by UV irradiation. RESULTS: The p53 induction by UV irradiation was unaffected by stable expression of HBx. However, the expression of the cyclin kinase inhibitor p21waf/cip/sdi which gets activated by p53 was affected in the HBx transformed cell line AML12-HBx9, but not in HepG2. In AML-HBx9 cells, p21waf/cip/sdi-protein expression and p21waf/cip/sdi transcription were deregulated. Furthermore, the process of apoptosis was affected in opposite ways in the two cell lines investigated. While stable expression of HBx enhanced apoptosis induced by UV irradiation in HepG2-cells, apoptosis was decreased in HBx transformed AML12-HBx9. P53 repressed transcription from the HBV enhancer I, when expressed from expression vectors or after induction of endogenous p53 by UV irradiation. Repression by endogenous p53 was partially reversible by stably expressed HBx in both cell lines. CONCLUSION: Stable expression of HBx leads to deregulation of apoptosis induced by UV irradiation depending on the cell line used. In an immortalized hepatocyte line HBx acted anti-apoptotic whereas expression in a carcinoma derived hepatocyte line HBx enhanced apoptosis. PMID:16937438

  20. Stable orthogonal local discriminant embedding for linear dimensionality reduction.

    PubMed

    Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin

    2013-07-01

    Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning considers only the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the algorithms. To address this, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as diversity of patterns, and then incorporate this diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce an orthogonality constraint on the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.

  1. Acceleration and torque feedback for robotic control - Experimental results

    NASA Technical Reports Server (NTRS)

    McInroy, John E.; Saridis, George N.

    1990-01-01

    Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.

  2. Recombinant protein production from stable mammalian cell lines and pools.

    PubMed

    Hacker, David L; Balasubramanian, Sowmya

    2016-06-01

    We highlight recent developments for the production of recombinant proteins from suspension-adapted mammalian cell lines. We discuss the generation of stable cell lines using transposons and lentivirus vectors (non-targeted transgene integration) and site-specific recombinases (targeted transgene integration). Each of these methods results in the generation of cell lines with protein yields that are generally superior to those achievable through classical plasmid transfection that depends on the integration of the transfected DNA by non-homologous DNA end-joining. This is the main reason why these techniques can also be used for the generation of stable cell pools, heterogeneous populations of recombinant cells generated by gene delivery and genetic selection without resorting to single cell cloning. This allows the timeline from gene transfer to protein production to be reduced. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Verification of Pharmacogenetics-Based Warfarin Dosing Algorithms in Han-Chinese Patients Undertaking Mechanic Heart Valve Replacement

    PubMed Central

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    Objective To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. Methods We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases for selecting pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%), and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. Results A total of 8 algorithms including Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) model, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day in accuracy and the percentage within 20% exceeded 45% in all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. Conclusions All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. 
However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement. PMID:24728385

  4. Verification of pharmacogenetics-based warfarin dosing algorithms in Han-Chinese patients undertaking mechanic heart valve replacement.

    PubMed

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanic heart valve replacement. We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases for selecting pharmacogenetics-based warfarin dosing models. Patients with mechanic heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%), and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. A total of 8 algorithms including Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) model, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day in accuracy and the percentage within 20% exceeded 45% in all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88-4.38 mg/day) than the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. 
However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanic heart valve replacement.
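    The two evaluation statistics used in these records are straightforward to restate in code. A minimal sketch of the metrics only (function and variable names are illustrative, not from the paper):

```python
def dosing_accuracy(predicted, observed):
    """Mean absolute error (mg/day) and 'percentage within 20%': the
    share of patients whose predicted warfarin dose falls within +/-20%
    of the observed therapeutic dose, as in the study's evaluation."""
    n = len(predicted)
    mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / n
    pct20 = 100.0 * sum(abs(p - o) <= 0.2 * o
                        for p, o in zip(predicted, observed)) / n
    return mae, pct20
```

    A lower MAE and a higher percentage-within-20% both indicate a better-calibrated dosing algorithm.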

  5. Lane detection based on color probability model and fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Jo, Kang-Hyun

    2018-04-01

    In vehicle driver assistance systems, the accuracy and speed of lane line detection are paramount. This paper presents a method based on a color probability model and the Fuzzy Local Information C-Means (FLICM) clustering algorithm. The Hough transform and the structural constraints of the road are used to detect lane lines accurately. A global map of the lane lines is drawn using the lane curve fitting equation. Experimental results show that the algorithm has good robustness.
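    One building block named in the abstract, the Hough transform, can be sketched in a few lines as a bare accumulator over (ρ, θ); the paper's pipeline adds the color probability model, FLICM clustering and road-structure constraints on top of this.

```python
import math

def hough_lines(points, n_theta=180):
    """Bare Hough voting: each edge point (x, y) votes for every line
    rho = x*cos(theta) + y*sin(theta) passing through it; the bin with
    the most votes is the dominant line. A generic sketch, not the
    authors' code."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc, key=acc.get)      # (rho, theta-bin) of the dominant line
```

    For lane detection the vote space is typically restricted to near-vertical orientations consistent with road structure.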

  6. Band-pass filtering algorithms for adaptive control of compressor pre-stall modes in aircraft gas-turbine engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2018-05-01

    Methods for increasing the adaptive properties of gas-turbine aircraft engines (GTE) against interference, based on empowerment of automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes adapted to the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide detection of compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequencies. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme providing the best quality of filtration. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor, detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
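    The construction described, a high-pass obtained from a low-pass by spectral inversion and then combined into a band-pass, can be illustrated with generic FIR filters. The paper's own filter is a second-order design, so this is a sketch of the technique under stated assumptions, not the authors' implementation:

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc low-pass FIR; cutoff in cycles/sample (0 .. 0.5).
    num_taps must be odd so the filter has a well-defined center tap."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(num_taps)
    return h / h.sum()                      # unity gain at DC

def highpass_by_spectral_inversion(h_lp):
    """Spectral inversion: H_hp(f) = 1 - H_lp(f), i.e. negate the taps
    and add 1 to the center tap."""
    h = -h_lp
    h[(len(h) - 1) // 2] += 1.0
    return h

def bandpass_fir(num_taps, f_lo, f_hi):
    """Band-pass as a cascade: low-pass at f_hi convolved with the
    spectrally inverted low-pass at f_lo."""
    return np.convolve(lowpass_fir(num_taps, f_hi),
                       highpass_by_spectral_inversion(lowpass_fir(num_taps, f_lo)))
```

    The resulting filter has zero DC gain and near-unity gain in mid-band, which is the behavior needed to isolate pulsation peaks at multiples of the rotor frequency.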

  7. FAST (Four chamber view And Swing Technique) Echo: a Novel and Simple Algorithm to Visualize Standard Fetal Echocardiographic Planes

    PubMed Central

    Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.

    2010-01-01

    Objective To describe a novel and simple algorithm (FAST Echo: Four chamber view And Swing Technique) to visualize standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) “swings” through the ductal arch image (“swing technique”), providing an infinite number of cardiac planes in sequence. Each line generated the following plane(s): 1) Line 1: three-vessels and trachea view; 2) Line 2: five-chamber view and long axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); 3) Line 3: four-chamber view; and 4) “Swing” line: three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach. The algorithm was then tested in 50 normal hearts (15.3 – 40 weeks of gestation) and visualization rates for cardiac diagnostic planes were calculated. To determine if the algorithm could identify planes that departed from the normal images, we tested the algorithm in 5 cases with proven congenital heart defects. Results In normal cases, the FAST Echo algorithm (3 locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long axis view of the aorta, four-chamber view): 1) individually in 100% of cases [except for the three-vessel and trachea view, which was seen in 98% (49/50)]; and 2) simultaneously in 98% (49/50). 
The “swing technique” was able to generate the three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach in 100% of normal cases. In the abnormal cases, the FAST Echo algorithm demonstrated the cardiac defects and displayed views that deviated from what was expected from the examination of normal hearts. The “swing technique” was useful in demonstrating the specific diagnosis due to visualization of an infinite number of cardiac planes in sequence. Conclusions This novel and simple algorithm can be used to visualize standard fetal echocardiographic planes in normal fetal hearts. The FAST Echo algorithm may simplify examination of the fetal heart and could reduce operator dependency. Using this algorithm, the inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease. PMID:20878671

  8. Four-chamber view and 'swing technique' (FAST) echo: a novel and simple algorithm to visualize standard fetal echocardiographic planes.

    PubMed

    Yeo, L; Romero, R; Jodicke, C; Oggè, G; Lee, W; Kusanovic, J P; Vaisbuch, E; Hassan, S

    2011-04-01

    To describe a novel and simple algorithm (four-chamber view and 'swing technique' (FAST) echo) for visualization of standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) 'swings' through the ductal arch image (swing technique), providing an infinite number of cardiac planes in sequence. Each line generates the following plane(s): (a) Line 1: three-vessels and trachea view; (b) Line 2: five-chamber view and long-axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); (c) Line 3: four-chamber view; and (d) 'swing line': three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach. The algorithm was then tested in 50 normal hearts in fetuses at 15.3-40 weeks' gestation and visualization rates for cardiac diagnostic planes were calculated. To determine whether the algorithm could identify planes that departed from the normal images, we tested the algorithm in five cases with proven congenital heart defects. In normal cases, the FAST echo algorithm (three locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long-axis view of the aorta, four-chamber view) individually in 100% of cases (except for the three-vessels and trachea view, which was seen in 98% (49/50)) and simultaneously in 98% (49/50). 
The swing technique was able to generate the three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach in 100% of normal cases. In the abnormal cases, the FAST echo algorithm demonstrated the cardiac defects and displayed views that deviated from what was expected from the examination of normal hearts. The swing technique was useful for demonstrating the specific diagnosis due to visualization of an infinite number of cardiac planes in sequence. This novel and simple algorithm can be used to visualize standard fetal echocardiographic planes in normal fetal hearts. The FAST echo algorithm may simplify examination of the fetal heart and could reduce operator dependency. Using this algorithm, inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.

  9. Evaluation of the influence of dominance rules for the assembly line design problem under consideration of product design alternatives

    NASA Astrophysics Data System (ADS)

    Oesterle, Jonathan; Amodeo, Lionel

    2018-06-01

    The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem associated with the selection of the most effective combination of product and assembly lines. The list of developed algorithms includes variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed using five multi-objective quality indicators on fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.
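    The baseline against which such dominance-rule variants are measured is classical Pareto dominance. A minimal sketch under the minimization convention (generic, not one of the paper's proposed rules):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

    Multi-objective quality indicators such as hypervolume are then computed over the non-dominated set each algorithm returns.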

  10. Calculation of the Respiratory Modulation of the Photoplethysmogram (DPOP) Incorporating a Correction for Low Perfusion

    PubMed Central

    Addison, Paul S.; Wang, Rui; McGonigle, Scott J.; Bergese, Sergio D.

    2014-01-01

    DPOP quantifies respiratory modulations in the photoplethysmogram. It has been proposed as a noninvasive surrogate for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. The correlation between DPOP and PPV may degrade due to low perfusion effects. We implemented an automated DPOP algorithm with an optional correction for low perfusion. These two algorithm variants (DPOPa and DPOPb) were tested on data from 20 mechanically ventilated OR patients split into a benign “stable region” subset and a whole record “global set.” Strong correlation was found between DPOP and PPV for both algorithms when applied to the stable data set: R = 0.83/0.85 for DPOPa/DPOPb. However, a marked improvement was found when applying the low perfusion correction to the global data set: R = 0.47/0.73 for DPOPa/DPOPb. Sensitivities, Specificities, and AUCs were 0.86, 0.70, and 0.88 for DPOPa/stable region; 0.89, 0.82, and 0.92 for DPOPb/stable region; 0.81, 0.61, and 0.73 for DPOPa/global region; 0.83, 0.76, and 0.86 for DPOPb/global region. An improvement was found in all results across both data sets when using the DPOPb algorithm. Further, DPOPb showed marked improvements, both in terms of its values, and correlation with PPV, for signals exhibiting low percent modulations. PMID:25177348

  11. Numerical analysis of moving contact line with contact angle hysteresis using feedback deceleration technique

    NASA Astrophysics Data System (ADS)

    Park, Jun Kwon; Kang, Kwan Hyoung

    2012-04-01

    Contact angle (CA) hysteresis is important in many natural and engineering wetting processes, but predicting it numerically is difficult. We developed an algorithm that considers CA hysteresis when analyzing the motion of the contact line (CL). The algorithm employs feedback control of the CA, which decelerates the CL to make it stationary within the hysteretic range of CA; one control coefficient must be determined heuristically, depending on the characteristic time of the simulated system. The algorithm requires embedding only a simple additional routine, with little modification of a code that considers the dynamic CA. The method is non-iterative and explicit, and has less computational load than other algorithms. For a drop hanging on a wire, the proposed algorithm accurately predicts the theoretical equilibrium CA. For a drop impacting on a dry surface, the results of the proposed algorithm agree well with experimental results, including the intermittent pinning of the CL. The proposed algorithm is as accurate as other algorithms, but faster.
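    The pinning condition the feedback must reproduce can be stated compactly. The sketch below is only a caricature of that condition (zero CL velocity while the contact angle lies inside [θ_r, θ_a]); the paper's actual method realizes it through feedback deceleration with a heuristically tuned coefficient. All names and the mobility law are illustrative:

```python
def contact_line_velocity(theta, theta_r, theta_a, mobility=1.0):
    """Hypothetical mobility law with contact-angle hysteresis: the
    contact line stays pinned while theta_r <= theta <= theta_a and
    moves in proportion to the excess angle outside that window.
    Angles in degrees; 'mobility' is an illustrative coefficient."""
    if theta > theta_a:
        return mobility * (theta - theta_a)    # advancing
    if theta < theta_r:
        return mobility * (theta - theta_r)    # receding (negative velocity)
    return 0.0                                 # pinned inside the hysteresis range
```

    The numerical difficulty the paper addresses is enforcing the pinned branch stably inside a dynamic-CA flow solver.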

  12. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, alternatives that remain in the category of evolutionary algorithms but use a direct, geometrical approach have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell-curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
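    The offspring construction described above can be sketched directly from the abstract's geometry. Parameter names and defaults are illustrative assumptions, not the authors':

```python
import math
import random

def bcb_child(p1, p2, weight=0.5, sigma_par=0.1, sigma_orth=0.1):
    """Child generation following the abstract's description: take the
    weighted point on the line joining two parents, then add normally
    distributed deviations parallel and orthogonal to that line."""
    d = [b - a for a, b in zip(p1, p2)]
    norm = math.sqrt(sum(x * x for x in d)) or 1.0
    u = [x / norm for x in d]                        # unit vector parent1 -> parent2
    point = [a + weight * x for a, x in zip(p1, d)]  # weighted point on the line
    t = random.gauss(0.0, sigma_par)                 # bell-curve deviation along the line
    # random direction made orthogonal to u (Gram-Schmidt), for the second deviation
    r = [random.gauss(0.0, 1.0) for _ in p1]
    proj = sum(ri * ui for ri, ui in zip(r, u))
    o = [ri - proj * ui for ri, ui in zip(r, u)]
    onorm = math.sqrt(sum(x * x for x in o)) or 1.0
    s = random.gauss(0.0, sigma_orth)
    return [c + t * ui + s * oi / onorm for c, ui, oi in zip(point, u, o)]
```

    With both standard deviations set to zero the child collapses to the weighted point itself, which makes the construct easy to verify.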

  13. NPLOT: an Interactive Plotting Program for NASTRAN Finite Element Models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.; Mcentire, K. J.

    1985-01-01

    The NPLOT (NASTRAN Plot) is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models. Developed at NASA's Goddard Space Flight Center, the program provides flexible element selection and grid point, ASET and SPC degree of freedom labelling. It is easy to use and provides a combination menu and command driven user interface. NPLOT also provides very fast hidden line and haloed line algorithms. The hidden line algorithm in NPLOT proved to be both very accurate and several times faster than other existing hidden line algorithms. A fast spatial bucket sort and horizon edge computation are used to achieve this high level of performance. The hidden line and the haloed line algorithms are the primary features that make NPLOT different from other plotting programs.

  14. Photovoltaic Cells Mppt Algorithm and Design of Controller Monitoring System

    NASA Astrophysics Data System (ADS)

    Meng, X. Z.; Feng, H. B.

    2017-10-01

    This paper combines the advantages of several maximum power point tracking (MPPT) algorithms and puts forward an algorithm with higher speed and higher precision; based on this algorithm, an ARM-based maximum power point tracking controller was designed. The controller, communication technology and PC software form a control system. Results of simulation and experiment show that the maximum power tracking process is effective and the system is stable.
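    The classic baseline such hybrid schemes build on is perturb-and-observe. A one-step sketch of that baseline (generic; not the paper's combined algorithm, whose details are not given in the abstract):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.01):
    """One perturb-and-observe MPPT step. Returns the next reference
    voltage: keep perturbing in the same direction while measured power
    rises, reverse direction when it falls."""
    if (p - p_prev) * (v - v_prev) > 0:
        return v + step   # power rose with voltage: keep increasing
    return v - step       # otherwise step the other way
```

    The speed/precision trade-off the paper targets comes from the fixed step size: large steps track fast but oscillate around the maximum power point, small steps do the opposite.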

  15. The recommendation of line-balancing improvement on MCM product line 1 using genetics algorithm and moodie young at XYZ Company, Co.

    NASA Astrophysics Data System (ADS)

    Sriwana, I. K.; Marie, I. A.; Mangala, D.

    2017-12-01

    Kencana Gemilang, Co. is an electronics company engaged in the manufacturing sector. The company manufactures and assembles household electronic products such as rice cookers, fans, irons and blenders. It faces underachievement of the established production target on MCM product line 1. This study aimed to calculate the line efficiency, delay times and initial line smoothness index. The research was carried out by depicting a precedence diagram and gathering time data for each work element, followed by examination and calculation of standard times as well as line balancing using the Moodie Young method and a Genetic Algorithm. Based on the results, line balancing better than the existing initial conditions was obtained: an improvement in line efficiency of 18.39%, a reduction in balance delay of 28.39%, and a reduction of the smoothness index of 23.85%.
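    The three quantities reported (line efficiency, balance delay, smoothness index) follow standard line-balancing definitions. A sketch using the common textbook formulas (definitions vary slightly between authors; this is an assumption, not the paper's exact computation):

```python
import math

def line_metrics(station_times, cycle_time):
    """Line efficiency = sum of station times / (stations * cycle time);
    balance delay = 100% - efficiency; smoothness index = root of the
    summed squared deviations of each station time from the maximum
    station time."""
    n = len(station_times)
    efficiency = 100.0 * sum(station_times) / (n * cycle_time)
    balance_delay = 100.0 - efficiency
    t_max = max(station_times)
    smoothness = math.sqrt(sum((t_max - t) ** 2 for t in station_times))
    return efficiency, balance_delay, smoothness
```

    A perfectly balanced line has 100% efficiency, zero balance delay and a smoothness index of zero.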

  16. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
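    The single-isotope correction the authors cross-validate against can be sketched as a triangular linear system: natural abundance redistributes intensity from each isotopologue upward in mass with binomial probabilities, so the correction is a forward substitution. This is a generic sketch of the single-element 13C case (abundance value assumed), far simpler than the paper's high-throughput multi-isotope algorithm:

```python
import math

def natural_abundance_matrix(n_atoms, p=0.0107):
    """M[i][j]: probability that a species with j labeled carbons is
    observed as isotopologue i, because k = i - j of its n_atoms - j
    unlabeled carbons are naturally 13C (binomial with abundance p)."""
    M = [[0.0] * (n_atoms + 1) for _ in range(n_atoms + 1)]
    for j in range(n_atoms + 1):
        for k in range(n_atoms - j + 1):
            M[j + k][j] = (math.comb(n_atoms - j, k)
                           * p ** k * (1 - p) ** (n_atoms - j - k))
    return M

def correct_for_natural_abundance(observed, n_atoms, p=0.0107):
    """Solve M x = observed by forward substitution (M is lower-triangular),
    recovering the natural-abundance-corrected isotopologue intensities."""
    M = natural_abundance_matrix(n_atoms, p)
    x = [0.0] * (n_atoms + 1)
    for i in range(n_atoms + 1):
        x[i] = (observed[i] - sum(M[i][j] * x[j] for j in range(i))) / M[i][i]
    return x
```

    The paper's contribution is doing this kind of correction simultaneously for 13C and 15N across omics-scale peak lists with quality control, which this sketch does not attempt.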

  17. Document localization algorithms based on feature points and straight lines

    NASA Astrophysics Data System (ADS)

    Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor

    2018-04-01

    The important part of a system for planar rectangular object analysis is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes subsystems such as the selection and recognition of text fields, the usage of contexts, etc. In this paper three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows configuring the localization subsystem independently of the quality of the other subsystems.

  18. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. Seven Landsat images were selected from various environmental regions in Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm as well as the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations to accurately and stably extract water extents in Landsat imagery.

  19. Successful Manipulation in Stable Marriage Model with Complete Preference Lists

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hirotatsu; Matsui, Tomomi

    This paper deals with a strategic issue in the stable marriage model with complete preference lists (i.e., the preference list of each agent is a permutation of all the members of the opposite sex). Given complete preference lists of n men over n women and a marriage µ, we consider the problem of finding preference lists of n women over n men such that the men-proposing deferred acceptance algorithm (Gale-Shapley algorithm) applied to the lists produces µ. We show a simple necessary and sufficient condition for the existence of such a set of preference lists of women over men. Our condition directly gives an O(n²)-time algorithm for finding a set of preference lists, if it exists.
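    For reference, the men-proposing deferred acceptance algorithm that this construction targets can be sketched as follows. This is a minimal textbook implementation; the dictionary-based interface is an assumption for the sketch, not the paper's notation.

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Men-proposing Gale-Shapley. Each argument maps a person to an
    ordered preference list over the opposite side. Returns the
    men-optimal stable matching as a dict man -> woman."""
    # rank[w][m] = position of m in w's list, for O(1) comparisons
    rank = {w: {m: i for i, m in enumerate(plist)}
            for w, plist in women_prefs.items()}
    free = list(men_prefs)                    # men not yet matched
    next_prop = {m: 0 for m in men_prefs}     # next index each man proposes to
    engaged = {}                              # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if w not in engaged:
            engaged[w] = m                    # w was free: accept
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # w trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)                    # w rejects m
    return {m: w for w, m in engaged.items()}
```

Running it on any complete-list instance produces the unique men-optimal stable matching, which is the marriage the paper asks the women's lists to induce.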

  20. Pharmacogenetics-based warfarin dosing algorithm decreases time to stable anticoagulation and the risk of major hemorrhage: an updated meta-analysis of randomized controlled trials.

    PubMed

    Wang, Zhi-Quan; Zhang, Rui; Zhang, Peng-Pai; Liu, Xiao-Hong; Sun, Jian; Wang, Jun; Feng, Xiang-Fei; Lu, Qiu-Fen; Li, Yi-Gang

    2015-04-01

    Warfarin remains the most widely used oral anticoagulant for thromboembolic diseases, despite the recent emergence of novel anticoagulants. However, difficulty in maintaining a stable dose within the therapeutic range and subsequent serious adverse effects have markedly limited its use in clinical practice. Pharmacogenetics-based warfarin dosing is a recently developed strategy to predict the initial and maintenance doses of warfarin. However, whether this algorithm is superior to the conventional clinically guided dosing algorithm remains controversial. We compared pharmacogenetics-based and clinically guided dosing algorithms in an updated meta-analysis. We searched OVID MEDLINE, EMBASE, and the Cochrane Library for relevant citations. The primary outcome was the percentage of time in therapeutic range. The secondary outcomes were time to stable therapeutic dose and the risks of adverse events including all-cause mortality, thromboembolic events, total bleedings, and major bleedings. Eleven randomized controlled trials with 2639 participants were included. Our pooled estimates indicated that the pharmacogenetics-based dosing algorithm did not improve the percentage of time in therapeutic range [weighted mean difference, 4.26; 95% confidence interval (CI), -0.50 to 9.01; P = 0.08], but it significantly shortened the time to stable therapeutic dose (weighted mean difference, -8.67; 95% CI, -11.86 to -5.49; P < 0.00001). Additionally, the pharmacogenetics-based algorithm significantly reduced the risk of major bleedings (odds ratio, 0.48; 95% CI, 0.23 to 0.98; P = 0.04), but it did not reduce the risks of all-cause mortality, total bleedings, or thromboembolic events. Our results suggest that a pharmacogenetics-based warfarin dosing algorithm significantly improves the efficiency of International Normalized Ratio correction and reduces the risk of major hemorrhage.

  1. Dehazed Image Quality Assessment by Haze-Line Theory

    NASA Astrophysics Data System (ADS)

    Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai

    2017-06-01

    Images captured in bad weather suffer from low contrast and faint color. Recently, many dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess or rank the performance of these algorithms. In this paper, an indicator of contrast enhancement is proposed based on the newly introduced haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, named a haze-line. Using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast collected in a subjective test on various scenes of dehazed images, and that it performs better than state-of-the-art metrics.

  2. First Line Treatment Response in Patients with Transmitted HIV Drug Resistance and Well Defined Time Point of HIV Infection: Updated Results from the German HIV-1 Seroconverter Study

    PubMed Central

    zu Knyphausen, Fabia; Scheufele, Ramona; Kücherer, Claudia; Jansen, Klaus; Somogyi, Sybille; Dupke, Stephan; Jessen, Heiko; Schürmann, Dirk; Hamouda, Osamah; Meixenberger, Karolin; Bartmeyer, Barbara

    2014-01-01

    Background Transmission of drug-resistant HIV-1 (TDR) can impair the virologic response to antiretroviral combination therapy. Aim of the study was to assess the impact of TDR on treatment success of resistance test-guided first-line therapy in the German HIV-1 Seroconverter Cohort for patients infected with HIV between 1996 and 2010. An update of the prevalence of TDR and trend over time was performed. Methods Data of 1,667 HIV-infected individuals who seroconverted between 1996 and 2010 were analysed. The WHO drug resistance mutations list was used to identify resistance-associated HIV mutations in drug-naïve patients for epidemiological analysis. For treatment success analysis the Stanford algorithm was used to classify a subset of 323 drug-naïve genotyped patients who received a first-line cART into three resistance groups: patients without TDR, patients with TDR and fully active cART and patients with TDR and non-fully active cART. The frequency of virologic failure 5 to 12 months after treatment initiation was determined. Results Prevalence of TDR was stable at a high mean level of 11.9% (198/1,667) in the HIV-1 Seroconverter Cohort without significant trend over time. Nucleotide reverse transcriptase inhibitor resistance was predominant (6.0%) and decreased significantly over time (OR = 0.92, CI = 0.87–0.98, p = 0.01). Non-nucleoside reverse transcriptase inhibitor (2.4%; OR = 1.00, CI = 0.92–1.09, p = 0.96) and protease inhibitor resistance (2.0%; OR = 0.94, CI = 0.86–1.03, p = 0.17) remained stable. Virologic failure was observed in 6.5% of patients with TDR receiving fully active cART, 5.6% of patients with TDR receiving non-fully active cART and 3.2% of patients without TDR. The difference between the three groups was not significant (p = 0.41). Conclusion Overall prevalence of TDR remained stable at a rather high level. 
No significant differences in the frequency of virologic failure were identified during first-line cART between patients with TDR and fully-active cART, patients with TDR and non-fully active cART and patients without TDR. PMID:24788613

  3. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    We present the implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (Unmanned Aerial Vehicle) imagery. The initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is generated based on DSM (Digital Surface Model) data; the vertices (junction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees, and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative measures are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for both regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency on the test datasets.
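    Generic weighted A∗ inflates the heuristic term by a weight greater than one, trading strict optimality for faster search. A minimal sketch on a per-cell cost grid is shown below; the grid interface and Manhattan heuristic are illustrative assumptions, not the paper's edge-diagram formulation.

```python
import heapq

def weighted_astar(grid, start, goal, weight=1.5):
    """Weighted A* on a 4-connected grid of per-cell step costs.
    f = g + weight * h with weight > 1 biases the search toward the goal.
    Paths are carried in the heap for simplicity (fine for a sketch)."""
    def h(p):  # Manhattan distance, admissible before weighting
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    best = {start: 0.0}                       # best g-value found per cell
    frontier = [(weight * h(start), start, [start])]
    while frontier:
        _, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path, best[cur]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                g = best[cur] + grid[nxt[0]][nxt[1]]
                if g < best.get(nxt, float("inf")):
                    best[nxt] = g
                    heapq.heappush(frontier,
                                   (g + weight * h(nxt), nxt, path + [nxt]))
    return None, float("inf")
```

In a seam-line setting the per-cell costs would come from the edge diagram, so the search is steered along strong image edges rather than across high objects.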

  4. On Nash-Equilibria of Approximation-Stable Games

    NASA Astrophysics Data System (ADS)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We furthermore show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show that all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  5. A study of real-time computer graphic display technology for aeronautical applications

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.

    1981-01-01

    The development, simulation, and testing of an algorithm for anti-aliasing vector drawings are discussed. The pseudo anti-aliasing line drawing algorithm is an extension of Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments in which the display intensity shifts from one segment to the other within the overlap (transition region). In this algorithm the length of the overlap and the intensity shift are essentially constant, so that the transition region aids the eye in integrating the segments into a single smooth line.
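    The underlying classic Bresenham algorithm, which the pseudo anti-aliasing method extends, rasterizes a line one pixel at a time using only integer additions and comparisons. A standard all-quadrant sketch is given below (the anti-aliasing extension itself is not reproduced here):

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer Bresenham line: yields the pixels of the line
    from (x0, y0) to (x1, y1) using only additions and comparisons."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                 # running decision variable
    while True:
        yield (x0, y0)
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:              # step in x
            err += dy
            x0 += sx
        if e2 <= dx:              # step in y
            err += dx
            y0 += sy
```

The pseudo anti-aliasing extension described above would emit overlapping segments with graded intensity instead of single pixels, but the stepping logic remains this one.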

  6. Packings of a charged line on a sphere.

    PubMed

    Alben, Silas

    2008-12-01

    We find equilibrium configurations of open and closed lines of charge on a sphere, and track them with respect to varying sphere radius. Closed lines transition from a circle to a spiral-like shape through two low-wave-number bifurcations, "baseball seam" and "twist", which minimize Coulomb energy. The spiral shape is the unique stable equilibrium of the closed line. Other unstable equilibria arise through tip-splitting events. An open line transitions smoothly from an arc of a great circle to a spiral as the sphere radius decreases. Under repulsive potentials with faster-than-Coulomb power-law decay, the spiral is tighter in initial stages of sphere shrinkage, but at later stages of shrinkage the equilibria for all repulsive potentials converge on a spiral with uniform spacing between turns. Multiple stable equilibria of the open line are observed.

  7. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.

  8. Carbon monoxide mixing ratio inference from gas filter radiometer data

    NASA Technical Reports Server (NTRS)

    Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Saylor, M. S.; Gormsen, B. B.

    1983-01-01

    A new algorithm has been developed which permits, for the first time, real time data reduction of nadir measurements taken with a gas filter correlation radiometer to determine tropospheric carbon monoxide concentrations. The algorithm significantly reduces the complexity of the equations to be solved while providing accuracy comparable to line-by-line calculations. The method is based on a regression analysis technique using a truncated power series representation of the primary instrument output signals to infer directly a weighted average of trace gas concentration. The results produced by a microcomputer-based implementation of this technique are compared with those produced by the more rigorous line-by-line methods. This algorithm has been used in the reduction of Measurement of Air Pollution from Satellites, Shuttle, and aircraft data.

  9. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.

    PubMed

    Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo

    2018-04-16

    Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images so that they can be transformed into the same coordinate system. Then, the gray-scale difference, the gradient minimum, and the optical flow value of pixels in the overlapped area of adjacent images are calculated in a neighborhood and used to create an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a more ideal splicing path for adjacent UAV images and better avoid ground objects. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images.
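    The dynamic-programming core of seam-line searching (a top-to-bottom path of minimum accumulated energy) can be sketched as follows. This is the standard three-neighbour formulation, not the paper's improved aggregation and traversal strategy, and the function name is illustrative.

```python
def best_seam(energy):
    """Dynamic-programming seam search over a 2D energy map: find the
    top-to-bottom path of minimum accumulated energy, where each step
    moves to one of the (up to) 3 neighbours in the row below."""
    rows, cols = len(energy), len(energy[0])
    cost = [row[:] for row in energy]          # accumulated energy table
    for r in range(1, rows):
        for c in range(cols):
            cost[r][c] += min(cost[r - 1][max(c - 1, 0):min(c + 2, cols)])
    # backtrack from the cheapest cell in the last row
    c = min(range(cols), key=lambda j: cost[-1][j])
    total = cost[-1][c]
    seam = [c]
    for r in range(rows - 1, 0, -1):
        c = min(range(max(c - 1, 0), min(c + 2, cols)),
                key=lambda j: cost[r - 1][j])
        seam.append(c)
    seam.reverse()
    return seam, total
```

In the paper's setting the energy map would combine the gray-scale difference, gradient, and optical flow terms described above; here any 2D array works.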

  10. A maximally stable extremal region based scene text localization method

    NASA Astrophysics Data System (ADS)

    Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei

    2015-07-01

    Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered by using the properties of the fitting ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal-regions projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.

  11. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates two kinds of information: function values and gradient values. Both methods possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
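    A minimal Polak-Ribiere-Polyak (PRP) conjugate gradient iteration can be sketched as follows. It clips β below at zero, which yields the βk ≥ 0 property highlighted above, but it uses a plain Armijo backtracking line search rather than the paper's line-search-free modifications; the function name and interface are assumptions.

```python
def prp_conjugate_gradient(f, grad, x0, tol=1e-8, max_iter=500):
    """Minimal PRP+ conjugate gradient with backtracking Armijo line
    search. beta_k = max(0, g_new.(g_new - g) / ||g||^2)."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                  # safeguard: restart with steepest descent
            d = [-gi for gi in g]
            slope = sum(gi * di for gi, di in zip(g, d))
        t, fx = 1.0, f(x)               # backtracking Armijo line search
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # PRP beta, clipped below at zero (the 'beta_k >= 0' property)
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g))
                   / sum(go * go for go in g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On a simple convex quadratic the iteration drives the gradient to (near) zero, illustrating the global-convergence behaviour the paper establishes for its modified variants.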

  12. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method incorporates two kinds of information: function values and gradient values. Both methods possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409

  13. Extension of an iterative closest point algorithm for simultaneous localization and mapping in corridor environments

    NASA Astrophysics Data System (ADS)

    Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua

    2016-03-01

    Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted from both the color and depth data captured by the Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene were conducted. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open-access datasets further validated our improvements.

  14. An enhanced multi-view vertical line locus matching algorithm of object space ground primitives based on positioning consistency for aerial and space images

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia

    2018-05-01

    The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.

  15. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. 
    However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot. Therefore, image transformation was required to implement self-localization. Second, we transformed the omni-directional images into panoramic images, so that the distortion of the white lines is corrected by the transformation. The interest points that form the corners of the landmark were then located using the features from accelerated segment test (FAST) algorithm, which considers a circle of sixteen pixels surrounding each corner candidate and is a high-speed feature detector for real-time frame-rate applications. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented to choose among the corners obtained from the FAST algorithm and localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error in a soccer field measuring 600 cm × 400 cm.

  16. A novel acenocoumarol pharmacogenomic dosing algorithm for the Greek population of EU-PACT trial.

    PubMed

    Ragia, Georgia; Kolovou, Vana; Kolovou, Genovefa; Konstantinides, Stavros; Maltezos, Efstratios; Tavridou, Anna; Tziakas, Dimitrios; Maitland-van der Zee, Anke H; Manolopoulos, Vangelis G

    2017-01-01

    To generate and validate a pharmacogenomic-guided (PG) dosing algorithm for acenocoumarol in the Greek population, and to compare its performance with other PG algorithms developed for the Greek population. A total of 140 Greek patients, participants in the EU-PACT trial for acenocoumarol (a randomized clinical trial that prospectively compared the effect of a PG dosing algorithm with that of a clinical dosing algorithm on the percentage of time within the INR therapeutic range), who reached a stable acenocoumarol dose were included in the study. CYP2C9 and VKORC1 genotypes, age, and weight affected acenocoumarol dose and predicted 53.9% of its variability. The EU-PACT PG algorithm overestimated acenocoumarol dose across all CYP2C9/VKORC1 functional phenotype bins (predicted dose vs stable dose: in normal responders 2.31 vs 2.00 mg/day, p = 0.028; in sensitive responders 1.72 vs 1.50 mg/day, p = 0.003; in highly sensitive responders 1.39 vs 1.00 mg/day, p = 0.029). The PG algorithm previously developed for the Greek population overestimated the dose in normal responders (2.51 vs 2.00 mg/day, p < 0.001). An ethnic-specific dosing algorithm is suggested for better prediction of acenocoumarol dosage requirements in patients of Greek origin.

  17. On the Complexity of the Metric TSP under Stability Considerations

    NASA Astrophysics Data System (ADS)

    Mihalák, Matúš; Schöngens, Marcel; Šrámek, Rastislav; Widmayer, Peter

    We consider the metric Traveling Salesman Problem (Δ-TSP for short) and study how stability (as defined by Bilu and Linial [3]) influences the complexity of the problem. On an intuitive level, an instance of Δ-TSP is γ-stable (γ > 1) if there is a unique optimum Hamiltonian tour and any perturbation of arbitrary edge weights by a factor of at most γ does not change the edge set of the optimal solution (i.e., there is a significant gap between the optimum tour and all other tours). We show that for γ ≥ 1.8 a simple greedy algorithm (resembling Prim's algorithm for constructing a minimum spanning tree) computes the optimum Hamiltonian tour for every γ-stable instance of the Δ-TSP, whereas a simple local search algorithm can fail to find the optimum even if γ is arbitrarily large. We further show that there are γ-stable instances of Δ-TSP for every 1 < γ < 2. These results provide a different view on the hardness of the Δ-TSP and give rise to a new class of problem instances which are substantially easier to solve than instances of the general Δ-TSP.
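    A greedy edge-selection heuristic of the kind discussed (repeatedly take the cheapest edge that keeps every vertex at degree at most 2 and closes no premature cycle) can be sketched as follows. This is the classic greedy TSP heuristic, offered as an illustration of the style of rule, not the authors' exact Prim-like algorithm; on sufficiently stable instances such greedy rules recover the optimum, while in general they are only heuristics.

```python
def greedy_tour(dist):
    """Greedy edge heuristic for the (metric) TSP: scan edges in order of
    increasing weight and keep an edge iff both endpoints still have
    degree < 2 and it does not close a cycle before all n edges are in.
    Returns the list of chosen edges (a Hamiltonian cycle)."""
    n = len(dist)
    edges = sorted((dist[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    deg = [0] * n
    parent = list(range(n))            # union-find to detect premature cycles
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    chosen = []
    for w, i, j in edges:
        if deg[i] < 2 and deg[j] < 2:
            ri, rj = find(i), find(j)
            # allow a cycle only when it closes the full tour
            if ri != rj or len(chosen) == n - 1:
                chosen.append((i, j))
                deg[i] += 1
                deg[j] += 1
                parent[ri] = rj
                if len(chosen) == n:
                    break
    return chosen
```

On the four corners of a unit square the heuristic picks the four unit sides and skips both diagonals, recovering the optimal tour of length 4.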

  18. Structure, Elastic Constants and XRD Spectra of Extended Solids under High Pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batyrev, I. G.; Coleman, S. P.; Ciezak-Jenkins, J. A.

    We present results of evolutionary simulations based on density functional calculations of a potentially new type of energetic materials called extended solids: P-N and N-H. High-density structures with covalent bonds generated using variable- and fixed-concentration methods were analysed in terms of thermodynamic stability and agreement with experimental X-ray diffraction (XRD) spectra. X-ray diffraction spectra were calculated using a virtual diffraction algorithm that computes kinematic diffraction intensity in three-dimensional reciprocal space before being reduced to a two-theta line profile. Calculated XRD patterns were used to search for the structure of extended solids present at experimental pressures by optimizing data according to experimental XRD peak position, peak intensity, and theoretically calculated enthalpy. Elastic constants have been calculated for thermodynamically stable structures of the P-N system.

  19. Autonomous reinforcement learning with experience replay.

    PubMed

    Wawrzyński, Paweł; Tanwani, Ajay Kumar

    2013-05-01

    This paper considers the issues of efficiency and autonomy that are required to make reinforcement learning suitable for real-life control tasks. A real-time reinforcement learning algorithm is presented that repeatedly adjusts the control policy with the use of previously collected samples and autonomously estimates the appropriate step-sizes for the learning updates. The algorithm is based on the actor-critic with experience replay, whose step-sizes are determined on-line by an enhanced fixed-point algorithm for on-line neural network training. An experimental study with a simulated octopus arm and half-cheetah demonstrates the feasibility of the proposed algorithm to solve difficult learning control problems in an autonomous way within reasonably short time.

  20. Research on cutting path optimization of sheet metal parts based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm is proposed in this paper. The cutting path optimization problem of sheet metal parts was taken as the research object, and the essence and optimization goal of the problem were presented. The traditional serial cutting constraint rule was improved, and a cutting constraint rule that permits cross cutting was proposed. The contour lines of the parts were discretized and a mathematical model of cutting path optimization was established, converting the problem into a selection problem over the contour lines of the parts. The ant colony algorithm was used to solve the problem, and the principle and steps of the algorithm were analyzed.
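    A toy ant colony optimisation for ordering items (such as the discretized contours above) so that total travel is short can be sketched as follows. The function name, parameters, and open-path objective are illustrative assumptions, not the paper's model, which additionally encodes the cutting constraint rules.

```python
import random

def aco_order(dist, n_ants=20, n_iter=50, alpha=1.0, beta=2.0,
              rho=0.5, q=1.0, seed=0):
    """Toy ant colony optimisation: each ant builds an order over n items
    by sampling the next item with probability proportional to
    pheromone^alpha * (1/distance)^beta; pheromone evaporates at rate rho
    and is reinforced in proportion to tour quality q/length."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]              # pheromone matrix
    best_order, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            cur = rng.randrange(n)
            tour, unvisited = [cur], set(range(n)) - {cur}
            while unvisited:
                cand = list(unvisited)
                weights = [tau[cur][j] ** alpha
                           * (1.0 / (dist[cur][j] + 1e-12)) ** beta
                           for j in cand]
                cur = rng.choices(cand, weights=weights)[0]
                tour.append(cur)
                unvisited.remove(cur)
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_order = length, tour
        for i in range(n):                           # evaporation
            for j in range(n):
                tau[i][j] *= 1 - rho
        for length, tour in tours:                   # reinforcement
            for a, b in zip(tour, tour[1:]):
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_order, best_len
```

On a trivial instance of three collinear points the colony converges on visiting them in spatial order, the shortest open path.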

  1. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    NASA Astrophysics Data System (ADS)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  2. The Use of Transcription Terminators to Generate Transgenic Lines of Chinese Hamster Ovary Cells (CHO) with Stable and High Level of Reporter Gene Expression.

    PubMed

    Gasanov, N B; Toshchakov, S V; Georgiev, P G; Maksimenko, O G

    2015-01-01

    Mammalian cell lines are widely used to produce recombinant proteins. Stable transgenic cell lines usually contain many insertions of the expression vector in one genomic region. Transcription through the transgene can be one of the reasons for target gene repression after prolonged cultivation of cell lines. In the present work, we used the known transcription terminators from the SV40 virus, as well as the human β- and γ-globin genes, to prevent transcription through the transgene. The transcription terminators were shown to increase and stabilize the expression of the EGFP reporter gene in transgenic lines of Chinese hamster ovary (CHO) cells. Hence, transcription terminators can be used to create stable mammalian cells with a high and stable level of recombinant protein production.

  3. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  4. Fully Dynamic Bin Packing

    NASA Astrophysics Data System (ADS)

    Ivković, Zoran; Lloyd, Errol L.

    Classic bin packing seeks to pack a given set of items of possibly varying sizes into a minimum number of identical sized bins. A number of approximation algorithms have been proposed for this NP-hard problem for both the on-line and off-line cases. In this chapter we discuss fully dynamic bin packing, where items may arrive (Insert) and depart (Delete) dynamically. In accordance with standard practice for fully dynamic algorithms, it is assumed that the packing may be arbitrarily rearranged to accommodate arriving and departing items. The goal is to maintain an approximately optimal solution of provably high quality in a total amount of time comparable to that used by an off-line algorithm delivering a solution of the same quality.
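
    A naive sketch of the fully dynamic setting (all names hypothetical): Insert and Delete here simply trigger a full first-fit-decreasing repack, which exploits the allowed rearrangement but ignores the running-time bound; real fully dynamic algorithms bound the repacking work per operation.

```python
def first_fit(items, capacity=1.0):
    """Repack all items first-fit-decreasing; returns bins as lists of sizes."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:   # fits in an open bin
                b.append(size)
                break
        else:
            bins.append([size])                      # open a new bin
    return bins

class DynamicPacker:
    # Naive fully dynamic packer: every Insert/Delete repacks everything.
    def __init__(self, capacity=1.0):
        self.capacity = capacity
        self.items = []

    def insert(self, size):
        self.items.append(size)
        return first_fit(self.items, self.capacity)

    def delete(self, size):
        self.items.remove(size)
        return first_fit(self.items, self.capacity)
```

    The interesting algorithmic question discussed in the chapter is precisely how to keep the packing provably near-optimal while doing far less work than this per operation.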

  5. Airborne Lidar Measurements of Atmospheric Pressure Made Using the Oxygen A-Band

    NASA Technical Reports Server (NTRS)

    Riris, Haris; Rodriquez, Michael D.; Allan, Graham R.; Hasselbrack, William E.; Mao, Jianping; Stephen, Mark A.; Abshire, James B.

    2012-01-01

    Accurate measurements of greenhouse gas mixing ratios on a global scale are currently needed to gain a better understanding of climate change and its possible impact on our planet. In order to remotely measure greenhouse gas concentrations in the atmosphere with respect to dry air, the air number density in the atmosphere is also needed to derive the greenhouse gas concentrations. Since oxygen is stable and uniformly mixed in the atmosphere at 20.95%, the measurement of an oxygen absorption in the atmosphere can be used to infer the dry air density and to calculate the dry air mixing ratio of a greenhouse gas, such as carbon dioxide or methane. Our technique of measuring oxygen uses integrated path differential absorption (IPDA) with an Erbium-Doped Fiber Amplifier (EDFA) laser system and a single photon counting module (SPCM). It measures the absorbance of several on- and off-line wavelengths tuned to an O2 absorption line in the A-band at 764.7 nm. The choice of wavelengths allows us to maximize the pressure sensitivity using the trough between two absorptions in the oxygen A-band. Our retrieval algorithm uses ancillary meteorological and aircraft altitude information to fit the experimentally obtained lidar O2 line shapes to a model atmosphere and derives the pressure from the profiles of the two lines. We have demonstrated O2 measurements from the ground and from an airborne platform. In this paper we report on our airborne measurements during our 2011 campaign for the ASCENDS program.
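
    The on/off-line measurement principle reduces to a differential absorption optical depth; a minimal sketch, assuming the return powers are already normalized by the transmitted energies (the factor 1/2 converts the two-way path to a one-way optical depth):

```python
import math

def differential_optical_depth(p_on, p_off):
    """One-way differential absorption optical depth from on- and off-line
    lidar return powers: tau = 0.5 * ln(P_off / P_on) for a two-way path."""
    return 0.5 * math.log(p_off / p_on)
```

    The actual retrieval described in the abstract then fits modeled O2 line shapes, using meteorological and altitude data, rather than using a single-ratio formula like this one.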

  6. A simulation-based approach for solving assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyu

    2017-09-01

    Assembly line balancing is directly related to production efficiency; the problem has been discussed since the last century and remains an active research topic. In this paper, the assembly line balancing problem is studied by establishing a mathematical model and simulation. Firstly, the model for determining the smallest production beat under a given number of work stations is analyzed. Based on this model, the exponential smoothing approach is applied to improve the algorithm efficiency. After the above basic work, the gas Stirling engine assembly line balancing problem is discussed as a case study. Both algorithms are implemented using the Lingo programming environment and the simulation results demonstrate the validity of the new methods.
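
    The exponential smoothing step mentioned above follows a standard recurrence; a minimal sketch (the alpha value is illustrative, the paper does not state one):

```python
def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1},
    initialized with the first observation."""
    s = series[0]
    out = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out
```

    Larger alpha tracks recent values more closely; smaller alpha damps noise more strongly.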

  7. Analysis of the type II robotic mixed-model assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.

  8. Research on target tracking algorithm based on spatio-temporal context

    NASA Astrophysics Data System (ADS)

    Li, Baiping; Xu, Sanmei; Kang, Hongjuan

    2017-07-01

    In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During tracking, camera shake or occlusion may cause tracking to fail; the proposed algorithm solves this problem effectively. The method uses the spatio-temporal context algorithm as its core. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm then tracks the target through the sequence of frames. During this process, a similarity measure function based on a perceptual hash algorithm is used to judge the tracking results. If tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.
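
    The perceptual-hash similarity check can be sketched with a simple average hash (a common variant; the abstract does not specify which perceptual hash is used, so this is an assumption):

```python
def average_hash(gray, hash_size=8):
    """Average hash of a 2D grayscale image given as a list of rows:
    block-average down to hash_size x hash_size cells, then threshold each
    cell against the global mean to obtain a bit vector."""
    h, w = len(gray), len(gray[0])
    cells = []
    for i in range(hash_size):
        for j in range(hash_size):
            r0 = i * h // hash_size
            r1 = max((i + 1) * h // hash_size, r0 + 1)
            c0 = j * w // hash_size
            c1 = max((j + 1) * w // hash_size, c0 + 1)
            block = [gray[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hash_similarity(h1, h2):
    # Fraction of matching bits: 1.0 for identical hashes, ~0.5 for unrelated.
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)
```

    In a tracker, the hash of the current tracking window would be compared against the hash of the template; a similarity below some threshold signals tracking failure and triggers re-initialization.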

  9. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors based on infrared rays and ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed a recognition algorithm for the distance and gradient of the environment by a stereo matching process.

  10. Discovery of a meta-stable Al–Sm phase with unknown stoichiometry using a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Feng; McBrearty, Ian; Ott, R T

    Unknown crystalline phases observed during the devitrification process of glassy metal alloys significantly limit our ability to understand and control phase selection in these systems driven far from equilibrium. Here, we report a new meta-stable Al5Sm phase identified by simultaneously searching Al-rich compositions of the Al-Sm system, using an efficient genetic algorithm. The excellent match between calculated and experimental X-ray diffraction patterns confirms that this new phase appeared in the crystallization of melt-spun Al90Sm10 alloys. Published by Elsevier Ltd. on behalf of Acta Materialia Inc.

  11. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.

  12. The Design and Operation of Ultra-Sensitive and Tunable Radio-Frequency Interferometers.

    PubMed

    Cui, Yan; Wang, Pingshan

    2014-12-01

    Dielectric spectroscopy (DS) is an important technique for scientific and technological investigations in various areas. DS sensitivity and operating frequency ranges are critical for many applications, including lab-on-chip development, where sample volumes are small and there is a wide range of dynamic processes to probe. In this work, we present the design and operation considerations of radio-frequency (RF) interferometers based on power dividers (PDs) and quadrature hybrids (QHs). Such interferometers are proposed to address the sensitivity and frequency tuning challenges of current DS techniques. Verified algorithms together with mathematical models are presented to quantify material properties from scattering parameters for three common transmission line sensing structures, i.e., coplanar waveguides (CPWs), conductor-backed CPWs, and microstrip lines. A high-sensitivity and stable QH-based interferometer is demonstrated by measuring glucose-water solution at a concentration level ten times lower than some recent RF sensors, while our sample volume is ~1 nL. Composition analysis of ternary mixture solutions is also demonstrated with a PD-based interferometer. Further work is needed to address issues such as system automation, model improvement at high frequencies, and interferometer scaling.

  13. Research on conflict detection algorithm in 3D visualization environment of urban rail transit line

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xiong, Jing; You, Kuokuo

    2017-03-01

    In this paper, a method of collision detection is introduced: buildings that conflict with the track area are rapidly extracted in a 3D visualization environment built on three-dimensional models of underground buildings and urban rail lines. According to the characteristics of the buildings, CSG and B-rep methods are used to model them. On the basis of these modeling characteristics, this paper proposes to use the AABB hierarchical bounding volume method for a first, coarse conflict detection to improve detection efficiency, and then a fast triangle intersection algorithm for exact detection, finally determining whether a building collides with the track area. With this algorithm, buildings colliding with the influence area of the track line can be quickly extracted, helping designers choose the best route and calculate the cost of land acquisition in the three-dimensional visualization environment.
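
    The coarse first stage of such a pipeline hinges on the cheap axis-interval test for axis-aligned bounding boxes; a minimal sketch (representation and names are illustrative):

```python
def aabb_overlap(a, b):
    """a and b are ((xmin, ymin, zmin), (xmax, ymax, zmax)) boxes.
    They overlap iff their intervals intersect on every axis; this cheap
    broad-phase test rejects most pairs before any exact triangle test."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))
```

    Only pairs that pass this test need the far more expensive exact triangle intersection check.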

  14. Female mice lack adult germ-line stem cells but sustain oogenesis using stable primordial follicles.

    PubMed

    Lei, Lei; Spradling, Allan C

    2013-05-21

    Whether or not mammalian females generate new oocytes during adulthood from germ-line stem cells to sustain the ovarian follicle pool has recently generated controversy. We used a sensitive lineage-labeling system to determine whether stem cells are needed in female adult mice to compensate for follicular losses and to directly identify active germ-line stem cells. Primordial follicles generated during fetal life are highly stable, with a half-life during adulthood of 10 mo, and thus are sufficient to sustain adult oogenesis without a source of renewal. Moreover, in normal mice or following germ-cell depletion with Busulfan, only stable, single oocytes are lineage-labeled, rather than cell clusters indicative of new oocyte formation. Even one germ-line stem cell division per 2 wk would have been detected by our method, based on the kinetics of fetal follicle formation. Thus, adult female mice neither require nor contain active germ-line stem cells or produce new oocytes in vivo.

  15. New vision system and navigation algorithm for an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.

    2013-12-01

    Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and pass through a final obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm adopted the vision system, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.

  16. A novel symbiotic organisms search algorithm for congestion management in deregulated environment

    NASA Astrophysics Data System (ADS)

    Verma, Sumit; Saha, Subhodip; Mukherjee, V.

    2017-01-01

    In today's competitive electricity market, managing transmission congestion in a deregulated power system has created challenges for independent system operators to operate the transmission lines reliably within limits. This paper proposes a new meta-heuristic algorithm, called symbiotic organisms search (SOS), for the congestion management (CM) problem in a pool-based electricity market by real power rescheduling of generators. Inspired by interactions among organisms in an ecosystem, SOS is a recent population-based algorithm which, unlike many other algorithms, does not require any algorithm-specific control parameters. Various security constraints such as load bus voltage and line loading are taken into account while dealing with the CM problem. In this paper, the proposed SOS algorithm is applied to modified IEEE 30- and 57-bus test power systems for the solution of the CM problem. The results thus obtained are compared to those reported in the recent state-of-the-art literature. The efficacy of the proposed SOS algorithm for obtaining higher quality solutions is also established.
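
    A minimal sketch of the mutualism phase at the core of SOS, applied to a generic objective (everything here, names, parameters and the toy objective, is illustrative; the paper's application optimizes generator rescheduling under security constraints):

```python
import random

def sos_minimize(f, dim, bounds, pop=20, iters=100, seed=0):
    """SOS mutualism-phase sketch: paired organisms move toward the current
    best via a mutual vector; moves are accepted only if fitness improves.
    No algorithm-specific control parameters appear, as the abstract notes."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in X]
    for _ in range(iters):
        best = X[fit.index(min(fit))]
        for i in range(pop):
            j = rng.choice([k for k in range(pop) if k != i])
            mutual = [(a + b) / 2.0 for a, b in zip(X[i], X[j])]
            for k in (i, j):
                bf = rng.choice([1, 2])          # benefit factor
                cand = [min(hi, max(lo, x + rng.random() * (b - bf * m)))
                        for x, b, m in zip(X[k], best, mutual)]
                fc = f(cand)
                if fc < fit[k]:                  # greedy acceptance
                    X[k], fit[k] = cand, fc
    i = fit.index(min(fit))
    return X[i], fit[i]
```

    The full algorithm adds commensalism and parasitism phases; this sketch keeps only mutualism to show the parameter-free update style.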

  18. Scan-Line Methods in Spatial Data Systems

    DTIC Science & Technology

    1990-09-04

    algorithms in detail to show some of the implementation issues. Data Compression: storage and transmission times can be reduced by using compression ... goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding, in which each scan line can be ... independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of

  19. Basic test framework for the evaluation of text line segmentation and text parameter extraction.

    PubMed

    Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems: inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting, and hence a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required; currently no commonly accepted one exists, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although the experiments are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  20. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    PubMed Central

    Brodić, Darko; Milivojević, Dragan R.; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems: inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting, and hence a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required; currently no commonly accepted one exists, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although the experiments are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms. PMID:22399932

  1. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate low-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with an RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
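
    For intuition on the baseline problem: the conventional alternative, which the authors replace by embedding constraints into the PLS weights, is explicit detrending, fitting a low-order polynomial to the spectrum and subtracting it. A minimal linear-baseline sketch (illustrative only, not the BCC-PLS method):

```python
def remove_linear_baseline(y):
    """Fit a straight line to the spectrum y by least squares and subtract it,
    leaving the baseline-corrected signal."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (v - ybar) for x, v in zip(range(n), y))
    slope = sxy / sxx
    return [v - (ybar + slope * (x - xbar)) for x, v in enumerate(y)]
```

    The uncertainty the abstract refers to is the choice of baseline model and fit region in such explicit correction; BCC-PLS sidesteps it by making the calibration itself insensitive to low-order polynomial terms.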

  2. The threshold algorithm: Description of the methodology and new developments

    NASA Astrophysics Data System (ADS)

    Neelamraju, Sridhar; Oligschleger, Christina; Schön, J. Christian

    2017-10-01

    Understanding the dynamics of complex systems requires the investigation of their energy landscape. In particular, the flow of probability on such landscapes is a central feature in visualizing the time evolution of complex systems. To obtain such flows, and the concomitant stable states of the systems and the generalized barriers among them, the threshold algorithm has been developed. Here, we describe the methodology of this approach starting from the fundamental concepts in complex energy landscapes and present recent new developments, the threshold-minimization algorithm and the molecular dynamics threshold algorithm. For applications of these new algorithms, we draw on landscape studies of three disaccharide molecules: lactose, maltose, and sucrose.
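
    The lid-based exploration at the heart of the threshold algorithm can be sketched on a toy discrete landscape (illustrative; the actual studies use continuous molecular energy landscapes): a random walker accepts any move whose energy stays at or below a fixed lid, and the local minima it reaches at a given lid show which basins are mutually accessible below that barrier.

```python
import random

def threshold_run(energy, neighbors, start, lid, n_steps=2000, seed=0):
    """Random walk under an energy lid: moves above the lid are rejected;
    every local minimum visited is recorded. Raising the lid and repeating
    reveals the generalized barriers between basins."""
    rng = random.Random(seed)
    x = start
    minima = set()
    for _ in range(n_steps):
        cand = rng.choice(neighbors(x))
        if energy(cand) <= lid:
            x = cand
        # Record x whenever it is a local minimum of its neighborhood.
        if all(energy(n) >= energy(x) for n in neighbors(x)):
            minima.add(x)
    return minima
```

    On a double-well landscape, a lid below the barrier confines the walker to the starting basin, while a lid at or above the barrier lets it reach both minima.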

  3. Sleeping Beauty transposon-based system for rapid generation of HBV-replicating stable cell lines.

    PubMed

    Wu, Yong; Zhang, Tian-Ying; Fang, Lin-Lin; Chen, Zi-Xuan; Song, Liu-Wei; Cao, Jia-Li; Yang, Lin; Yuan, Quan; Xia, Ning-Shao

    2016-08-01

    The stable HBV-replicating cell lines, which carry a replication-competent HBV genome stably integrated into the genome of the host cell, are widely used to evaluate the effects of antiviral agents. However, current methods to generate HBV-replicating cell lines, which are mostly dependent on random integration of foreign DNA via plasmid transfection, are less efficient and time-consuming. To address this issue, we constructed an all-in-one Sleeping Beauty transposon system (denoted pTSMP-HBV vector) for robust generation of stable cell lines carrying replication-competent HBV genomes of different genotypes. This vector contains a Sleeping Beauty transposon carrying the HBV 1.3-copy genome with an expression cassette in which the SV40 promoter drives red fluorescent protein (mCherry) and a self-cleaving P2A peptide-linked puromycin resistance gene (PuroR). In addition, a PGK promoter-driven SB100X hyperactive transposase cassette is placed outside the transposon in the same plasmid. The HBV-replicating stable cells could be obtained from pTSMP-HBV-transfected HepG2 cells by red fluorescence-activated cell sorting and puromycin-resistant cell selection within four weeks. Using this system, we successfully constructed four cell lines carrying replication-competent HBV genomes of genotypes A-D. The replication and viral protein expression profiles of these cells were systematically characterized. In conclusion, our study provides a high-efficiency strategy to generate HBV-replicating stable cell lines, which may facilitate HBV-related virological study. Copyright © 2016. Published by Elsevier B.V.

  4. An implicit adaptation algorithm for a linear model reference control system

    NASA Technical Reports Server (NTRS)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.

  5. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing

    PubMed Central

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-01-01

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855

  6. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.

    PubMed

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-02-15

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system.

  7. Extraction of line properties based on direction fields.

    PubMed

    Kutka, R; Stier, S

    1996-01-01

    The authors present a new set of algorithms for segmenting lines, mainly blood vessels in X-ray images, and extracting properties such as their intensities, diameters, and center lines. The authors developed a tracking algorithm that checks rules taking the properties of vessels into account. The tools even detect veins, arteries, or catheters of two pixels in diameter and with poor contrast. Compared with other algorithms, such as the Canny line detector or anisotropic diffusion, the authors extract a smoother and connected vessel tree without artifacts in the image background. As the tools depend on common intermediate results, they are very fast when used together. The authors' results will support the 3-D reconstruction of the vessel tree from stereoscopic projections. Moreover, the authors make use of their line intensity measure for enhancing and improving the visibility of vessels in 3-D X-ray images. The processed images are intended to support radiologists in diagnosis, radiation therapy planning, and surgical planning. Radiologists verified the improved quality of the processed images and the enhanced visibility of relevant details, particularly fine blood vessels.

  8. An improved silhouette for human pose estimation

    NASA Astrophysics Data System (ADS)

    Hawes, Anthony H.; Iftekharuddin, Khan M.

    2017-08-01

    We propose a novel method for analyzing images that exploits the natural lines of a human pose to find areas where self-occlusion could be present. Self-occlusion causes several modern human pose estimation methods to misidentify body parts, which reduces the performance of most action recognition algorithms. Our method is motivated by the observation that, in several cases, occlusion can be reasoned about using only the boundary lines of limbs. An intelligent edge detection algorithm based on the above principle could be used to augment the silhouette with information useful for pose estimation algorithms and push forward progress on occlusion handling for human action recognition. The algorithm described is applicable to computer vision scenarios involving 2D images and (appropriately flattened) 3D images.

  9. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. 
The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836
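    The abstract does not give the EKF equations, but the "march a track from one cross-section to the next, then fuse the associated detection" loop can be sketched with a one-dimensional Kalman filter (a linear special case of the EKF); the state here is just a utility depth, and all noise values are invented for illustration:

```python
def kf_predict(depth, var, process_var):
    """March a utility track to the next scan cross-section: the depth is
    modeled as locally constant, so only the uncertainty grows."""
    return depth, var + process_var

def kf_update(depth, var, z, meas_var):
    """Fuse the hypothesized detection z associated with this track."""
    gain = var / (var + meas_var)
    return depth + gain * (z - depth), (1.0 - gain) * var

# a track initialized on the first cross-section, then marched through
# three more cross-sections with associated detections
depth, var = 1.0, 1.0
for z in [1.2, 1.1, 1.15]:
    depth, var = kf_predict(depth, var, process_var=0.01)
    depth, var = kf_update(depth, var, z, meas_var=0.05)
```

    The full MCS state would also carry lateral position and local direction, which is what lets the detected tracks on the current cross-section suggest where to place the next scan line.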

  10. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    PubMed

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. 
The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.

  11. Joint optimization of maintenance, buffers and machines in manufacturing lines

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Nourelfath, Mustapha

    2018-01-01

    This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
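    A minimal sketch of the threshold accepting idea behind the second heuristic, on a toy stand-in for the buffer-allocation cost (the paper's actual nonlinear threshold schedule and solution encoding are not given in the abstract; the geometric decay and neighborhood below are assumptions):

```python
import random

def threshold_accepting(cost, neighbor, x0, threshold0=5.0, decay=0.9,
                        iters=200, seed=0):
    """Accept a candidate whenever it is at most `threshold` worse than the
    incumbent; the threshold shrinks geometrically each iteration."""
    rng = random.Random(seed)
    x, best = x0, x0
    threshold = threshold0
    for _ in range(iters):
        y = neighbor(x, rng)
        if cost(y) - cost(x) <= threshold:
            x = y
            if cost(x) < cost(best):
                best = x
        threshold *= decay
    return best

# toy stand-in for a buffer-allocation cost: optimum at 7 buffer slots
cost = lambda b: (b - 7) ** 2
neighbor = lambda b, rng: max(0, b + rng.choice([-1, 1]))
best = threshold_accepting(cost, neighbor, 0)
```

    Unlike simulated annealing, acceptance here is deterministic given the threshold, which is one reason such schemes are easy to tune for line-design problems.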

  12. Estimating the coordinates of pillars and posts in the parking lots for intelligent parking assist system

    NASA Astrophysics Data System (ADS)

    Choi, Jae Hyung; Kuk, Jung Gap; Kim, Young Il; Cho, Nam Ik

    2012-01-01

    This paper proposes an algorithm for the detection of pillars or posts in video captured by a single camera mounted on the front side of a car's rear-view mirror. The main purpose of this algorithm is to complement the weakness of current ultrasonic parking assist systems, which do not reliably find the exact position of pillars or fail to recognize narrow posts. The proposed algorithm consists of three steps: straight-line detection, line tracking, and estimation of the 3D position of pillars. In the first step, strong lines are found by the Hough transform. The second step combines detection and tracking, and the third calculates the 3D position of a line by analyzing the trajectory of relative positions together with the camera parameters. Experiments on synthetic and real images show that the proposed method successfully locates and tracks the position of pillars, which helps the ultrasonic system to correctly locate the edges of pillars. It is believed that the proposed algorithm can also be employed as a basic element of vision-based autonomous driving systems.
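    The first step, Hough-transform line detection, can be sketched as vote accumulation over (rho, theta) bins; this is the generic textbook form, not the paper's implementation:

```python
import math

def hough_lines(points, rho_step=1.0, n_theta=180):
    """Vote over (rho, theta) bins; each point votes for every line
    through it.  Returns ((rho_bin, theta_index), votes) of the winner."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho / rho_step), t)
            acc[key] = acc.get(key, 0) + 1
    return max(acc.items(), key=lambda kv: kv[1])

# ten points on the vertical line x = 3 (theta = 0, rho = 3)
pts = [(3, y) for y in range(10)]
(rho_bin, theta_idx), votes = hough_lines(pts)
```

    A strong pillar edge shows up as a bin whose vote count approaches the number of edge pixels along it, which is what "strong lines" means here.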

  13. Multispectral fluorescence image algorithms for detection of frass on mature tomatoes

    USDA-ARS?s Scientific Manuscript database

    A multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at five wavebands, 515 nm, 640 nm, 664 nm, 690 nm, and 724 nm...

  14. A Line-Based 3D Roof Model Reconstruction Algorithm: TIN-Merging and Reshaping (TMR)

    NASA Astrophysics Data System (ADS)

    Rau, J.-Y.

    2012-07-01

    Three-dimensional building models are among the major components of a cyber-city and are vital for the realization of 3D GIS applications. In the last decade, airborne laser scanning (ALS) data have been widely used for 3D building model reconstruction and object extraction. Instead, based on 3D roof structural lines, this paper presents a novel algorithm for automatic roof model reconstruction. A line-based roof model reconstruction algorithm, called TIN-Merging and Reshaping (TMR), is proposed. The roof structural lines, such as edges, eaves and ridges, can be measured manually from an aerial stereo-pair, derived by feature line matching or inferred from ALS data. The originality of the TMR algorithm for 3D roof modelling is to perform geometric analysis and topology reconstruction among those unstructured lines and then reshape the roof type using elevation information from the 3D structural lines. For topology reconstruction, a line-constrained Delaunay triangulation algorithm is adopted, where the input structural lines act as constraints and their vertices act as input points. Thus, the constructed TINs will not cross the structural lines. Later, at the Merging stage, each shared edge between two TINs is checked for whether an original structural line exists there. If not, the two TINs are merged into a polygon. Iterative checking and merging of any two neighbouring TINs/polygons results in roof polygons on the horizontal plane. Finally, at the Reshaping stage, any two structural lines with fixed height are used to adjust a planar function for the whole roof polygon. In case ALS data exist, the Reshaping stage can be simplified by adjusting the point cloud within the roof polygon. The proposed scheme reduces the complexity of 3D roof modelling and makes the modelling process easier. Five test datasets provided by ISPRS WG III/4, located in downtown Toronto, Canada and Vaihingen, Germany, are used for the experiments. The test sites cover high-rise buildings and residential areas with diverse roof types. For performance evaluation, the adopted roof structural lines are manually measured from the provided stereo-pairs. Experimental results indicate that a nearly 100% success rate for topology reconstruction is achieved, provided that the 3D structural lines can be enclosed as polygons. On the other hand, the success rate at the Reshaping stage depends on the complexity of the rooftop structure. Thus, visual inspection and semi-automatic adjustment of the roof type are suggested and implemented to complete the roof modelling. The results demonstrate that the proposed scheme is robust and reliable with a high degree of completeness, correctness, and quality, even when a group of connected buildings with multiple layers and mixed roof types is processed.
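    The Merging stage described above reduces to a union-find pass over triangle edges: two triangles are joined whenever their shared edge is not one of the input structural lines. A minimal sketch, with an assumed data layout (triangles as vertex-index triples, structural lines as vertex-index pairs):

```python
def merge_tins(triangles, structural_edges):
    """Merging-stage sketch: union-find over triangles, joining any two that
    share an edge which is NOT an input structural line; returns the number
    of resulting roof polygons."""
    constraints = {frozenset(e) for e in structural_edges}
    parent = list(range(len(triangles)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edge_owners = {}
    for idx, (a, b, c) in enumerate(triangles):
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((a, c))):
            edge_owners.setdefault(e, []).append(idx)
    for e, owners in edge_owners.items():
        if e in constraints:
            continue  # structural line: keep this boundary intact
        for j in owners[1:]:
            parent[find(j)] = find(owners[0])
    return len({find(i) for i in range(len(triangles))})
```

    For a square roof split by a diagonal, the diagonal's status as a structural line decides whether the result is one polygon or two.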

  15. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

    An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  16. Deformed Palmprint Matching Based on Stable Regions.

    PubMed

    Wu, Xiangqian; Zhao, Qiushi

    2015-12-01

    Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.
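    KPBG's consensus step builds on the RANSAC family (the paper uses an iterative M-estimator sample consensus over SIFT correspondences; the basic sample-and-count loop for a 2D line, shown here, is a simplified stand-in for that machinery):

```python
import random

def ransac_line(points, iters=100, tol=0.5, seed=1):
    """Repeatedly sample two points, hypothesize y = m*x + b, and keep the
    hypothesis with the largest inlier consensus (plain RANSAC)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip degenerate (vertical) samples in this sketch
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(m * x + b - y) <= tol)
        if inliers > best_inliers:
            best, best_inliers = (m, b), inliers
    return best, best_inliers

# ten points on y = 2x + 1 plus two gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
(m, b), n_inliers = ransac_line(pts)
```

    In KPBG the sampled model is a piecewise-linear transformation rather than a line, and the M-estimator variant down-weights, rather than hard-rejects, points near the tolerance boundary.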

  17. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow

    PubMed Central

    Zhang, Weilong; Guo, Bingxuan; Liao, Xuan; Li, Wenzhuo

    2018-01-01

    Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the gray-scale difference, the gradient minimum, and the optical flow value of pixels in the overlapped area of adjacent images are calculated over a neighborhood and used to create an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search the optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a more ideal splicing path for adjacent UAV images and better avoid ground objects. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images. PMID:29659526
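    The dynamic-programming seam search can be illustrated on a generic energy grid: each pixel's accumulated cost is its own energy plus the cheapest of the three predecessors above it, and the seam is recovered by backtracking. This is the textbook form, not the paper's improved aggregation strategy:

```python
def min_seam(cost):
    """Minimum-cost top-to-bottom seam through a 2D energy grid; a seam may
    step down-left, down, or down-right.  Returns (seam columns, total)."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]
    back = [[0] * cols for _ in range(rows)]
    for r in range(1, rows):
        for c in range(cols):
            prev = [(acc[r - 1][k], k) for k in (c - 1, c, c + 1)
                    if 0 <= k < cols]
            best_val, best_k = min(prev)
            acc[r][c] += best_val
            back[r][c] = best_k
    c = min(range(cols), key=lambda k: acc[rows - 1][k])
    seam = [c]
    for r in range(rows - 1, 0, -1):
        c = back[r][c]
        seam.append(c)
    seam.reverse()
    return seam, acc[rows - 1][seam[-1]]
```

    With the paper's energy (gray difference, gradient, optical flow), low-cost cells correspond to visually quiet regions, so the seam naturally avoids ghosted ground objects.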

  18. Clustering algorithms for identifying core atom sets and for assessing the precision of protein structure ensembles.

    PubMed

    Snyder, David A; Montelione, Gaetano T

    2005-06-01

    An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains"--locally well-defined regions that are not aligned in global superimpositions--complicate RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
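    For reference, the RMSD over an already-superimposed core atom set is the quantity the partitioning into "RMSD-stable domains" is designed to make meaningful; the plain computation is:

```python
import math

def rmsd(coords_a, coords_b):
    """RMSD between two equally sized, already superimposed 3D point sets."""
    assert len(coords_a) == len(coords_b) and coords_a
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

    The paper's contribution is deciding which atoms enter this sum and over which superimposition, not the formula itself.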

  19. Stable Extraction of Threshold Voltage Using Transconductance Change Method for CMOS Modeling, Simulation and Characterization

    NASA Astrophysics Data System (ADS)

    Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook

    2004-04-01

    We propose a stable extraction algorithm for threshold voltage using the transconductance change method by optimizing the node interval. With the algorithm, noise-free gm2 (=dgm/dVGS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming its noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide us with a useful analysis tool in the field of device modeling, simulation and characterization.
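    The transconductance change method locates the threshold voltage at the peak of gm2 = dgm/dVGS, i.e. the second derivative of IDS with respect to VGS. The node-interval optimization amounts to widening the finite-difference stencil; a sketch (the paper's criterion for choosing the optimal interval is not reproduced here):

```python
def second_derivative(ids, dv, k=1):
    """gm2 = d^2(Ids)/dVgs^2 by central differences on a uniform Vgs grid of
    spacing dv, using a node interval of k grid steps: widening k suppresses
    measurement noise at the cost of Vgs resolution."""
    h = k * dv
    return [(ids[i + k] - 2.0 * ids[i] + ids[i - k]) / (h * h)
            for i in range(k, len(ids) - k)]

# on a quadratic Ids(Vgs) the second derivative is exactly 2 everywhere
ids = [(0.1 * i) ** 2 for i in range(20)]
g2 = second_derivative(ids, 0.1, k=3)
```

    On noisy measured IDS data, too small a k amplifies noise quadratically in 1/h, which is the problem the optimized interval addresses.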

  20. Algorithm-Dependent Generalization Bounds for Multi-Task Learning.

    PubMed

    Liu, Tongliang; Tao, Dacheng; Song, Mingli; Maybank, Stephen J

    2017-02-01

    Often, tasks are collected for multi-task learning (MTL) because they share similar feature structures. Based on this observation, in this paper, we present novel algorithm-dependent generalization bounds for MTL by exploiting the notion of algorithmic stability. We focus on the performance of one particular task and the average performance over multiple tasks by analyzing the generalization ability of a common parameter that is shared in MTL. When focusing on one particular task, with the help of a mild assumption on the feature structures, we interpret the function of the other tasks as a regularizer that produces a specific inductive bias. The algorithm for learning the common parameter, as well as the predictor, is thereby uniformly stable with respect to the domain of the particular task and has a generalization bound with a fast convergence rate of order O(1/n), where n is the sample size of the particular task. When focusing on the average performance over multiple tasks, we prove that a similar inductive bias exists under certain conditions on the feature structures. Thus, the corresponding algorithm for learning the common parameter is also uniformly stable with respect to the domains of the multiple tasks, and its generalization bound is of the order O(1/T), where T is the number of tasks. These theoretical analyses naturally show that the similarity of feature structures in MTL will lead to specific regularizations for predicting, which enables the learning algorithms to generalize fast and correctly from a few examples.

  1. Common lines modeling for reference free Ab-initio reconstruction in cryo-EM.

    PubMed

    Greenberg, Ido; Shkolnisky, Yoel

    2017-11-01

    We consider the problem of estimating an unbiased and reference-free ab initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common lines' detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allow one to assess the reliability of the obtained ab initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab initio models with resolutions of 20 Å or better, even from class averages consisting of as few as three raw images per class. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Meta-heuristic algorithm to solve two-sided assembly line balancing problems

    NASA Astrophysics Data System (ADS)

    Wirawan, A. D.; Maruf, A.

    2016-02-01

    A two-sided assembly line is a set of sequential workstations where task operations can be performed on both sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning-Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for the given cycle time without violating the synchronization constraints. The correlation between the input parameters and the point at which the best objective function value emerges is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the case study. The results of the proposed algorithm show a reduction in the number of workstations and indicate a negative correlation between the emergence point of the objective function value and the population size used.
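    The teacher phase of TLBO, the core move of the search method named above, shifts each learner toward the current best solution and away from the class mean; the sketch below uses a continuous toy objective rather than a TALBP decoding (which the abstract does not detail):

```python
import random

def tlbo_teacher_phase(pop, cost, rng):
    """Teacher phase of TLBO: each learner moves toward the best solution
    (the teacher) and away from TF times the class mean; a move is kept
    only if it improves the learner (greedy selection)."""
    n_dim = len(pop[0])
    teacher = min(pop, key=cost)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(n_dim)]
    new_pop = []
    for x in pop:
        tf = rng.choice([1, 2])  # teaching factor
        cand = [x[d] + rng.random() * (teacher[d] - tf * mean[d])
                for d in range(n_dim)]
        new_pop.append(cand if cost(cand) < cost(x) else x)
    return new_pop

cost = lambda x: sum(v * v for v in x)  # toy continuous stand-in objective
rng = random.Random(42)
pop = [[rng.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(10)]
initial_best = min(cost(x) for x in pop)
for _ in range(30):
    pop = tlbo_teacher_phase(pop, cost, rng)
final_best = min(cost(x) for x in pop)
```

    For TALBP, the decoding algorithm would map each continuous learner vector to a feasible task-to-side assignment before evaluating the number of mated workstations.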

  3. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    PubMed

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identify different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of the Colliding Bodies Optimization (CBO) evolutionary algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers results in a consistently close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times and statistical properties of the MSEs are all found to be superior to those of other similar stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. On-Line Point Positioning with Single Frame Camera Data

    DTIC Science & Technology

    1992-03-15

    tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data The project has been defined as "On-line point...development and use of the OLT algorithms and meth- ods for applications in robotics , industrial quality control and autonomous vehicle naviga- tion...Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile

  5. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, generating TINs, and mapping texture, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper mainly focuses on the problem of uneven gray levels where two adjacent textures intersect. A new algorithm, per-pixel linear interpolation along a loop line buffer, is presented. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modeling flow is composed of three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness value of all pixels of the two adjacent textures with various algorithms; however, these algorithms are only partially effective and the fissure line still exists. The algorithm presented here handles the gray unevenness of the two adjacent textures: the fissure line in the overlapping section is eliminated, and the gray transition in the overlapping section becomes smoother.
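    Per-pixel linear interpolation across an overlap region is a standard feathering scheme: the weight of one texture falls linearly to zero across the buffer while the other rises, which removes the hard fissure line. A minimal sketch (the loop-line buffer geometry is simplified here to a rectangular overlap):

```python
def feather_blend(tex_a, tex_b, width):
    """Per-pixel linear interpolation over an overlap of `width` columns:
    tex_a's weight falls linearly from 1 (left edge) to 0 (right edge)."""
    out = []
    for row_a, row_b in zip(tex_a, tex_b):
        row = []
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            w = 1.0 - c / (width - 1) if width > 1 else 0.5
            row.append(w * a + (1.0 - w) * b)
        out.append(row)
    return out
```

    Along a loop line buffer the same weights would be computed from the distance to the seam rather than from the column index.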

  6. TRANSP-based Trajectory Optimization of the Current Profile Evolution to Facilitate Robust Non-inductive Ramp-up in NSTX-U

    NASA Astrophysics Data System (ADS)

    Wehner, William; Schuster, Eugenio; Poli, Francesca

    2016-10-01

    Initial progress towards the design of non-inductive current ramp-up scenarios in the National Spherical Torus Experiment Upgrade (NSTX-U) has been made through the use of TRANSP predictive simulations. The strategy involves, first, ramping the plasma current with high harmonic fast waves (HHFW) to about 400 kA, and then further ramping to 900 kA with neutral beam injection (NBI). However, the early ramping of neutral beams and application of HHFW leads to an undesirably peaked current profile making the plasma unstable to ballooning modes. We present an optimization-based control approach to improve on the non-inductive ramp-up strategy. We combine the TRANSP code with an optimization algorithm based on sequential quadratic programming to search for time evolutions of the NBI powers, the HHFW powers, and the line averaged density that define an open-loop actuator strategy that maximizes the non-inductive current while satisfying constraints associated with the current profile evolution for MHD stable plasmas. This technique has the potential of playing a critical role in achieving robustly stable non-inductive ramp-up, which will ultimately be necessary to demonstrate applicability of the spherical torus concept to larger devices without sufficient room for a central coil. Supported by the US DOE under the SCGSR Program.

  7. Alteration of terminal heterochromatin and chromosome rearrangements in derivatives of wheat-rye hybrids.

    PubMed

    Fu, Shulan; Lv, Zhenling; Guo, Xiang; Zhang, Xiangqi; Han, Fangpu

    2013-08-20

    Wheat-rye addition and substitution lines and their self progenies revealed variations in telomeric heterochromatin and centromeres. Furthermore, a mitotically unstable dicentric chromosome and stable multicentric chromosomes were observed in the progeny of a Chinese Spring-Imperial rye 3R addition line. An unstable multicentric chromosome was found in the progeny of a 6R/6D substitution line. Drastic variation of terminal heterochromatin including movement and disappearance of terminal heterochromatin occurred in the progeny of wheat-rye addition line 3R, and the 5RS ditelosomic addition line. Highly stable minichromosomes were observed in the progeny of a monosomic 4R addition line, a ditelosomic 5RS addition line and a 6R/6D substitution line. Minichromosomes, with and without the FISH signals for telomeric DNA (TTTAGGG)n, derived from a monosomic 4R addition line are stable and transmissible to the next generation. The results indicated that centromeres and terminal heterochromatin can be profoundly altered in wheat-rye hybrid derivatives. Copyright © 2013. Published by Elsevier Ltd.

  8. Learning Latent Variable and Predictive Models of Dynamical Systems

    DTIC Science & Technology

    2009-10-01

    stable over the full 1000 frame image sequence without significant damping. C. Sam- ples drawn from a least squares synthesized sequences (top), and...LDS stabilizing algorithms, LB-1 and LB-2. Bars at every 20 timesteps denote variance in the results. CG provides the best stable short term predictions...observations. This thesis contributes (1) novel learning algorithms for existing dynamical system models that overcome significant limitations of previous

  9. Design and Calibration of an RF Actuator for Low-Level RF Systems

    NASA Astrophysics Data System (ADS)

    Geng, Zheqiao; Hong, Bo

    2016-02-01

    X-ray free electron laser (FEL) machines like the Linac Coherent Light Source (LCLS) at SLAC require high-quality electron beams to generate X-ray lasers for various experiments. Digital low-level RF (LLRF) systems are widely used to control the high-power RF klystrons to provide a highly stable RF field in accelerator structures for beam acceleration. Feedback and feedforward controllers are implemented in LLRF systems to stabilize or adjust the phase and amplitude of the RF field. To achieve the required RF stability and accuracy of phase and amplitude adjustment, low-noise and highly linear RF actuators are required. Aiming for the upgrade of the S-band Linac at SLAC, an RF actuator is designed with an I/Q modulator driven by two digital-to-analog converters (DACs) for the digital LLRF systems. A direct upconversion scheme is selected for RF actuation, and an on-line calibration algorithm is developed to compensate for the RF reference leakage and the imbalance errors in the I/Q modulator, which may cause significant phase and amplitude actuation errors. This paper presents the requirements on the RF actuator, the design of the hardware, the calibration algorithm, the implementation in firmware and software, and the test results at LCLS.
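    The compensation described can be modeled as inverting a 2x2 mixing (gain imbalance and quadrature phase skew) plus a DC offset (reference leakage); the error values below are invented for illustration, and a real LLRF system would apply the correction as pre-distortion ahead of the DACs:

```python
import math

def apply_iq_errors(i, q, gi=1.05, gq=0.97, skew=0.03, oi=0.01, oq=-0.02):
    """Non-ideal I/Q modulator model: gain imbalance (gi, gq), quadrature
    phase skew, and DC offsets standing in for reference leakage."""
    ii = gi * i + oi
    qq = gq * (math.sin(skew) * i + math.cos(skew) * q) + oq
    return ii, qq

def correct_iq(ii, qq, gi=1.05, gq=0.97, skew=0.03, oi=0.01, oq=-0.02):
    """Invert the model above using calibrated error estimates: remove the
    offsets, undo the gains, then undo the phase-skew mixing."""
    x = (ii - oi) / gi
    y = (qq - oq) / gq
    return x, (y - math.sin(skew) * x) / math.cos(skew)

ii, qq = apply_iq_errors(0.3, -0.7)
ri, rq = correct_iq(ii, qq)
```

    The on-line part of the calibration is estimating (gi, gq, skew, oi, oq) from measured RF output while the machine runs, which the sketch takes as given.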

  10. A Technique for Measuring Rotorcraft Dynamic Stability in the 40- by 80-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Bohn, J. G.

    1977-01-01

    An on-line technique is described for the measurement of tilt rotor aircraft dynamic stability in the Ames 40- by 80-Foot Wind Tunnel. The technique is based on advanced system identification methodology and uses the instrumental variables approach. It is particularly applicable to real-time estimation problems with limited amounts of noise-contaminated data. Several simulations are used to evaluate the algorithm. Estimated natural frequencies and damping ratios are compared with simulation values. The algorithm is also applied to wind tunnel data in an off-line mode. The results are used to develop preliminary guidelines for effective use of the algorithm.

  11. Treatment of Angina: Where Are We?

    PubMed

    Balla, Cristina; Pavasini, Rita; Ferrari, Roberto

    2018-06-06

    Ischaemic heart disease is a major cause of death and disability worldwide, and angina is its most common symptom. It is estimated that approximately 9 million patients in the USA suffer from angina, and its treatment is challenging; improving the management of chronic stable angina is therefore a priority. Angina might be the result of different pathologies, ranging from the "classical" obstruction of a large coronary artery to alteration of the microcirculation or coronary artery spasm. Current clinical guidelines recommend antianginal therapy to control symptoms before considering coronary artery revascularization. In the current guidelines, drugs are classified as first-choice (beta-blockers, calcium channel blockers, and short-acting nitrates) or second-choice (ivabradine, nicorandil, ranolazine, trimetazidine) treatment, with the recommendation to reserve second-line medications for patients who have contraindications to first-choice agents, do not tolerate them, or remain symptomatic. However, such a categorical approach is currently questioned. In addition, current guidelines provide few suggestions to guide the choice of the drugs most suitable for the underlying pathology or the patient's comorbidities. Several other questions have recently emerged, such as: is there evidence-based data comparing first- and second-line treatments in terms of prognosis or symptom relief? In fact, the newer antianginal drugs, classified as second choice, are supported by more contemporary evidence-based clinical data than the first-choice drugs. It follows that current guidelines are based more on tradition than on evidence, and there is a need for new algorithms that are individualized to patients, their comorbidities, and the pathophysiological mechanism of chronic stable angina. © 2018 S. Karger AG, Basel.

  12. Research on laser marking speed optimization by using genetic algorithm.

    PubMed

    Wang, Dongyun; Yu, Qiwei; Zhang, Yu

    2015-01-01

    The laser marking machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed, and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%.
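
    The abstract gives no details of the encoding or objective, so the sketch below shows a generic binary-encoded genetic algorithm minimizing a hypothetical marking-time function; every name and constant in it is illustrative, not taken from the paper:

```python
import random

random.seed(0)

def marking_time(speed):
    # Hypothetical stand-in objective (the paper's model is not given):
    # too slow wastes cycle time, too fast forces re-marking passes.
    return (speed - 700.0) ** 2 / 1000.0 + 5.0

def decode(bits):
    # 10-bit chromosome -> marking speed in [0, 1023] (arbitrary units)
    return float(int("".join(map(str, bits)), 2))

def genetic_search(pop_size=30, n_bits=10, n_gen=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: marking_time(decode(ind)))
        next_pop = pop[:2]                       # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)  # truncation selection
            cut = random.randrange(1, n_bits)    # one-point crossover
            child = [b ^ (random.random() < p_mut)  # bit-flip mutation
                     for b in p1[:cut] + p2[cut:]]
            next_pop.append(child)
        pop = next_pop
    return decode(min(pop, key=lambda ind: marking_time(decode(ind))))

best_speed = genetic_search()  # should approach the stand-in optimum at 700
```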

  13. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    NASA Technical Reports Server (NTRS)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed so that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  14. Capacity improvement using simulation optimization approaches: A case study in the thermotechnology industry

    NASA Astrophysics Data System (ADS)

    Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz

    2015-02-01

    In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA) and a binary tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as the evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem shows that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.

  15. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm is proposed for evaluating the integral line intensity, and hence inferring the correct temperature of a hot zone, in combustion diagnostics by absorption spectroscopy with diode lasers. The algorithm is based not on fitting the baseline (BL) but on expanding the experimental and simulated spectra in a series of orthogonal polynomials, subtracting the first three components of the expansion from both spectra, and fitting the spectra thus modified. The algorithm is tested in a numerical experiment: absorption spectra are simulated using a spectroscopic database, with added white noise and a parabolic BL, and are treated as experimental data in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulating the experimental data. Both spectra were then expanded in the series of orthogonal polynomials and the first components were subtracted from each. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus modified experimental and simulated spectra. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm^-1, and thus allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
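
    The subtraction step can be sketched with Legendre polynomials (one common orthogonal family; the abstract does not name the family used, and the Gaussian line, grid and parabolic baseline below are illustrative). Because the expansion is linear, removing the degree-0..2 components from both spectra cancels any quadratic baseline without ever fitting it:

```python
import numpy as np
from numpy.polynomial import legendre as L

def strip_low_order(nu, spectrum, n_drop=3):
    """Subtract the first n_drop orthogonal-polynomial components
    (Legendre degrees 0..n_drop-1) from a spectrum on grid nu."""
    # Map the wavenumber grid onto [-1, 1], where the Legendre
    # polynomials are orthogonal.
    x = 2.0 * (nu - nu.min()) / (nu.max() - nu.min()) - 1.0
    coeffs = L.legfit(x, spectrum, deg=n_drop - 1)
    return spectrum - L.legval(x, coeffs)

# Synthetic example: Gaussian absorption line plus a parabolic baseline.
nu = np.linspace(-1.0, 1.0, 400)            # relative wavenumber grid
line = 0.5 * np.exp(-(nu / 0.2) ** 2)       # "true" absorption feature
baseline = 0.3 + 0.2 * nu + 0.4 * nu ** 2   # unknown parabolic BL

stripped_exp = strip_low_order(nu, line + baseline)  # "experimental"
stripped_sim = strip_low_order(nu, line)             # "simulated"
# After removing the low-order components the two spectra coincide,
# although the baseline itself was never fitted.
```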

  16. Stable reflexive sheaves of degree zero on Calabi-Yau manifolds

    NASA Astrophysics Data System (ADS)

    Nakashima, Tohru

    2017-11-01

    We give sufficient conditions for the existence of μ-stable reflexive sheaves E on a Calabi-Yau threefold such that the first Chern class c1(E) satisfies c1(E) · H^2 = 0 for some ample line bundle H. We also prove a result concerning deformations to construct rank-two μ-stable sheaves on arbitrary smooth projective varieties.

  17. TaDb: A time-aware diffusion-based recommender algorithm

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan

    2015-02-01

    Traditional recommender algorithms usually employ early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable over a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after it in the diffusion process. Experiments on four real datasets (Netflix, MovieLens, FriendFeed and Delicious) show that the TaDb algorithm significantly improves prediction accuracy compared with algorithms that do not consider temporal effects.
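
    The two-step mass diffusion at the core of such algorithms can be sketched as follows. The per-link weight matrix W is where TaDb's temporal weighting would enter; the paper's actual weighting formula is not reproduced here, and with W = A the sketch reduces to plain, unweighted diffusion (ProbS):

```python
import numpy as np

def diffusion_scores(A, W, target):
    """Two-step mass diffusion on a user-item bipartite graph with
    per-link weights W (same shape as the adjacency matrix A).
    Resource starts on the target user's collected items, spreads to
    users, then back to items; uncollected items with high scores are
    recommended."""
    k_item = A.sum(axis=0)                 # item degrees
    k_user = A.sum(axis=1)                 # user degrees
    f0 = A[target].astype(float)           # unit resource on collected items
    # Items -> users: each item splits its resource among its users.
    g = (W * (f0 / np.where(k_item > 0, k_item, 1))).sum(axis=1)
    # Users -> items: each user splits their resource among their items.
    return (W * (g / np.where(k_user > 0, k_user, 1))[:, None]).sum(axis=0)

# Tiny hand-checkable example: 2 users, 3 items, unweighted links.
A = np.array([[1, 1, 0],
              [0, 1, 1]])
scores = diffusion_scores(A, W=A.astype(float), target=0)
```

    For user 0 the uncollected item o2 receives score 0.25, reachable only through the co-collected item o1.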

  18. A cascade method for TFT-LCD defect detection

    NASA Astrophysics Data System (ADS)

    Yi, Songsong; Wu, Xiaojun; Yu, Zhiyang; Mo, Zhuoya

    2017-07-01

    In this paper, we propose a novel cascade detection algorithm which focuses on point and line defects on TFT-LCD panels. In the first step of the algorithm, we use the gray-level difference of sub-images to segment the abnormal area. The second step is based on the phase-only transform (POT), which corresponds to the discrete Fourier transform (DFT) normalized by its magnitude; it can remove regularities like texture and noise. After that, we improve the method of setting regions of interest (ROI) with edge segmentation and polar transformation. The algorithm has outstanding performance in both computation speed and accuracy, and it can handle most defect types, including dark points, light points, dark lines, etc.
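
    The POT step is easy to sketch: normalizing the DFT by its magnitude suppresses the periodic texture while an isolated defect survives as a sharp peak. The stripe texture and injected defect below are illustrative, not from the paper:

```python
import numpy as np

def phase_only_transform(img):
    """Phase-only transform: keep the DFT phase, discard the magnitude.
    Periodic texture collapses toward a flat response, while isolated
    defects survive as sharp peaks."""
    F = np.fft.fft2(img)
    mag = np.abs(F)
    return np.real(np.fft.ifft2(F / np.where(mag > 0, mag, 1.0)))

# Periodic stripe texture with a single injected point defect.
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * xx / 8.0)
img[32, 20] += 3.0                       # the defect

pot = phase_only_transform(img)
peak = np.unravel_index(np.argmax(np.abs(pot)), pot.shape)  # defect location
```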

  19. The coupled response to slope-dependent basal melting

    NASA Astrophysics Data System (ADS)

    Little, C. M.; Goldberg, D. N.; Sergienko, O. V.; Gnanadesikan, A.

    2009-12-01

    Ice shelf basal melting is likely to be strongly controlled by basal slope. If ice shelves steepen in response to intensified melting, it suggests instability in the coupled ice-ocean system. The dynamic response of ice shelves governs what stable morphologies are possible, and thus the influence of melting on buttressing and grounding line migration. Simulations performed using a 3-D ocean model indicate that a simple form of slope-dependent melting is robust under more complex oceanographic conditions. Here we utilize this parameterization to investigate the shape and grounding line evolution of ice shelves, using a shallow-shelf approximation-based model that includes lateral drag. The distribution of melting substantially affects the shape and aspect ratio of unbuttressed ice shelves. Slope-dependent melting thins the ice shelf near the grounding line, reducing velocities throughout the shelf. Sharp ice thickness gradients evolve at high melting rates, yet grounding lines remain static. In foredeepened, buttressed ice shelves, changes in grounding line flux allow two additional options: stable or unstable retreat. Under some conditions, slope-dependent melting results in stable configurations even at high melt rates.

  20. On increasing the spectral efficiency and transmissivity in the data transmission channel on the spacecraft-ground tracking station line

    NASA Astrophysics Data System (ADS)

    Andrianov, M. N.; Kostenko, V. I.; Likhachev, S. F.

    2018-01-01

    Algorithms for achieving a practical increase in the rate of data transmission on the spacecraft-ground tracking station line have been considered. The increase is achieved by applying spectrally efficient modulation techniques and the technology of orthogonal frequency compression of signals using millimeter-range radio waves. The advantages and disadvantages of each of the three algorithms have been identified. A significant advantage of data transmission in the millimeter range has been indicated.

  1. Lethality of radiation-induced chromosome aberrations in human tumour cell lines with different radiosensitivities.

    PubMed

    Coco-Martin, J M; Ottenheim, C P; Bartelink, H; Begg, A C

    1996-03-01

    In order to find an explanation for the eventual disappearance of all chromosome aberrations in two radiosensitive human tumour cell lines, the type and stability of different aberration types were investigated in more detail. To classify the aberrations into unstable and stable types, three-colour fluorescence in situ hybridization was performed, including a whole-chromosome probe, a pancentromere probe, and a stain for total DNA. This technique enables the appropriate classification of the aberrations principally by the presence (stable) or absence (unstable) of a single centromere per chromosome. Unstable-type aberrations were found to disappear within 7 days (several divisions) in the two radiosensitive and the two radioresistant tumour lines investigated. Stable-type aberrations remained at an approximately constant level over the duration of the experiment (14 days; 8-10 divisions) in the two radioresistant lines. In contrast, the majority of these stable-type aberrations had disappeared by 14 days in the two radiosensitive lines. The previously observed disappearance of total aberrations in radiosensitive cells was therefore not due to a reduced induction of stable-type aberrations, but to the complete disappearance of cells with this aberration type. These results could not be explained by differences in apoptosis or G1 blocks. Two possible explanations for these unexpected findings involve non-random induction of unstable-type aberrations, or lethality of stable-type aberrations. The results suggest caution in the use of stable-type aberration numbers as a predictor for radiosensitivity.

  2. Line Thinning Algorithm

    NASA Astrophysics Data System (ADS)

    Feigin, G.; Ben-Yosef, N.

    1983-10-01

    A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.

  3. Therapeutic Inhibitors of LIN28/let-7 Pathway in Ovarian Cancer

    DTIC Science & Technology

    2015-09-01

    generate loss of function lines (siRNA and the CRISPR/Cas9 system). Task 4. Determine oncogenic properties associated with TUTase and LIN28B loss in...of-function cell lines to complement our already generated shRNA lines, we are developing hands-on experience with CRISPR/Cas9 technology to...generate stable cell lines where our genes of interest will be inactivated. The advantage of the CRISPR/Cas9 method is that stable lines can be

  4. Therapeutic Inhibitors of LIN28/let-7 Pathway in Ovarian Cancer

    DTIC Science & Technology

    2015-09-01

    of-function cell lines to complement our already generated shRNA lines, we are developing hands-on experience with CRISPR/Cas9 technology to...generate stable cell lines where our genes of interest will be inactivated. The advantage of the CRISPR/Cas9 method is that stable lines can be...ovarian cancer cell lines using two distinct RNAi methods, shRNA and siRNA. We plan to explore the feasibility of using the CRISPR/Cas9 system to

  5. Localization of source with unknown amplitude using IPMC sensor arrays

    NASA Astrophysics Data System (ADS)

    Abdulsadda, Ahmad T.; Zhang, Feitian; Tan, Xiaobo

    2011-04-01

    The lateral line system, consisting of arrays of neuromasts functioning as flow sensors, is an important sensory organ for fish that enables them to detect predators, locate prey, perform rheotaxis, and coordinate schooling. Creating artificial lateral line systems is of significant interest since it will provide a new sensing mechanism for control and coordination of underwater robots and vehicles. In this paper we propose recursive algorithms for localizing a vibrating sphere, also known as a dipole source, based on measurements from an array of flow sensors. A dipole source is frequently used in the study of biological lateral lines, as a surrogate for underwater motion sources such as a flapping fish fin. We first formulate a nonlinear estimation problem based on an analytical model for the dipole-generated flow field. Two algorithms are presented to estimate both the source location and the vibration amplitude, one based on the least squares method and the other based on the Newton-Raphson method. Simulation results show that both methods deliver comparable performance in source localization. A prototype of an artificial lateral line system comprising four ionic polymer-metal composite (IPMC) sensors is built, and experimental results are further presented to demonstrate the effectiveness of IPMC lateral line systems and the proposed estimation algorithms.
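
    A minimal sketch of the least-squares idea, using a simplified one-dimensional stand-in for the dipole field model (the paper's analytical flow model and its Newton-Raphson variant are not reproduced here). Since the amplitude enters the model linearly, each candidate source position needs only a closed-form one-parameter least-squares solve:

```python
def dipole_signal(x, s, amp, depth=0.2):
    # Simplified 1-D stand-in for the dipole-generated flow magnitude
    # at sensor position x, for a source at lateral position s.
    r2 = (x - s) ** 2 + depth ** 2
    return amp / r2 ** 1.5

def localize(sensors, readings, s_grid):
    """Grid search over the source position; for each candidate s the
    best amplitude is the 1-D least-squares solution
    amp = sum(v*g) / sum(g*g), where g is the unit-amplitude model."""
    best = None
    for s in s_grid:
        g = [dipole_signal(x, s, 1.0) for x in sensors]
        amp = (sum(v * gi for v, gi in zip(readings, g))
               / sum(gi * gi for gi in g))
        sse = sum((v - amp * gi) ** 2 for v, gi in zip(readings, g))
        if best is None or sse < best[0]:
            best = (sse, s, amp)
    return best[1], best[2]

# Four sensors (the IPMC prototype used four), noiseless readings from
# a source at s = 0.17 with amplitude 2.5 (illustrative values).
sensors = [0.0, 0.1, 0.2, 0.3]
readings = [dipole_signal(x, 0.17, 2.5) for x in sensors]
s_hat, amp_hat = localize(sensors, readings,
                          [i / 1000.0 for i in range(-100, 401)])
```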

  6. An unconditionally stable staggered algorithm for transient finite element analysis of coupled thermoelastic problems

    NASA Technical Reports Server (NTRS)

    Farhat, C.; Park, K. C.; Dubois-Pelerin, Y.

    1991-01-01

    An unconditionally stable, second-order accurate, implicit-implicit staggered procedure for the finite element solution of fully coupled transient thermoelasticity problems is proposed. The procedure is stabilized with a semi-algebraic augmentation technique. A comparative cost analysis reveals the superiority of the proposed computational strategy over other conventional staggered procedures. Numerical examples of one- and two-dimensional coupled thermomechanical problems demonstrate the accuracy of the proposed numerical solution algorithm.

  7. A quasi-Newton algorithm for large-scale nonlinear equations.

    PubMed

    Huang, Linghua

    2017-01-01

    In this paper, an algorithm for large-scale nonlinear equations is designed in the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point has no restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to obtain the step length [Formula: see text]. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
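
    The flavor of the main algorithm can be sketched with a Broyden-type quasi-Newton iteration and a simple nonmonotone acceptance rule on the merit ||F||; the paper's exact line search formula differs, and the small test system below is illustrative:

```python
import numpy as np

def broyden_nonmonotone(F, x0, tol=1e-10, max_iter=200, memory=5):
    """Quasi-Newton (Broyden) iteration for F(x) = 0 with a simple
    nonmonotone acceptance rule: a step is accepted once the merit
    ||F|| drops below the maximum merit of the last few iterates.
    No Jacobian is ever computed, only rank-1 updates of B."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                    # Jacobian approximation
    merits = [np.linalg.norm(F(x))]
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(B, -Fx)       # quasi-Newton direction
        alpha, ref = 1.0, max(merits[-memory:])
        while np.linalg.norm(F(x + alpha * d)) >= ref and alpha > 1e-12:
            alpha *= 0.5                  # backtracking
        s = alpha * d
        y = F(x + s) - Fx
        x = x + s
        B += np.outer(y - B @ s, s) / (s @ s)  # Broyden rank-1 update
        merits.append(np.linalg.norm(F(x)))
    return x

# Weakly coupled test system: x0 + 0.1*x1^2 = 1 and x1 + 0.1*x0^2 = 1.
F = lambda x: np.array([x[0] + 0.1 * x[1] ** 2 - 1.0,
                        x[1] + 0.1 * x[0] ** 2 - 1.0])
root = broyden_nonmonotone(F, [0.0, 0.0])
```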

  8. Acute multi-sgRNA knockdown of KEOPS complex genes reproduces the microcephaly phenotype of the stable knockout zebrafish model.

    PubMed

    Jobst-Schwan, Tilman; Schmidt, Johanna Magdalena; Schneider, Ronen; Hoogstraten, Charlotte A; Ullmann, Jeremy F P; Schapiro, David; Majmundar, Amar J; Kolb, Amy; Eddy, Kaitlyn; Shril, Shirlee; Braun, Daniela A; Poduri, Annapurna; Hildebrandt, Friedhelm

    2018-01-01

    Until recently, morpholino oligonucleotides have been widely employed in zebrafish as an acute and efficient loss-of-function assay. However, off-target effects and reproducibility issues when compared to stable knockout lines have compromised their further use. Here we employed an acute CRISPR/Cas approach using multiple single guide RNAs targeting simultaneously different positions in two exemplar genes (osgep or tprkb) to increase the likelihood of generating mutations on both alleles in the injected F0 generation and to achieve a similar effect as morpholinos but with the reproducibility of stable lines. This multi single guide RNA approach resulted in median likelihoods for at least one mutation on each allele of >99% and sgRNA specific insertion/deletion profiles as revealed by deep-sequencing. Immunoblot showed a significant reduction for Osgep and Tprkb proteins. For both genes, the acute multi-sgRNA knockout recapitulated the microcephaly phenotype and reduction in survival that we observed previously in stable knockout lines, though milder in the acute multi-sgRNA knockout. Finally, we quantify the degree of mutagenesis by deep sequencing, and provide a mathematical model to quantitate the chance for a biallelic loss-of-function mutation. Our findings can be generalized to acute and stable CRISPR/Cas targeting for any zebrafish gene of interest.
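
    The chance of a biallelic hit grows quickly with the number of guides. A hypothetical independence model (not the paper's fitted model) illustrates the idea:

```python
def biallelic_probability(q, n_guides):
    """Chance that both alleles carry at least one mutation, assuming
    each of n_guides cuts each allele independently with per-guide
    efficiency q. Hypothetical model for illustration only."""
    p_one_allele = 1.0 - (1.0 - q) ** n_guides
    return p_one_allele ** 2

# With four guides at 80% per-guide efficiency the biallelic chance
# already exceeds 99%, consistent in spirit with the >99% medians
# reported in the abstract.
p = biallelic_probability(q=0.8, n_guides=4)
```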

  9. Research on Laser Marking Speed Optimization by Using Genetic Algorithm

    PubMed Central

    Wang, Dongyun; Yu, Qiwei; Zhang, Yu

    2015-01-01

    The laser marking machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed, and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%. PMID:25955831

  10. Research on UAV Intelligent Obstacle Avoidance Technology During Inspection of Transmission Line

    NASA Astrophysics Data System (ADS)

    Wei, Chuanhu; Zhang, Fei; Yin, Chaoyuan; Liu, Yue; Liu, Liang; Li, Zongyu; Wang, Wanguo

    Autonomous obstacle avoidance of unmanned aerial vehicles (hereinafter referred to as UAVs) during electric power line inspection is of great significance for the operational safety and economy of a UAV-based intelligent inspection system for transmission lines. In this paper, the principles of UAV obstacle avoidance for transmission line inspection are introduced. After common obstacle avoidance technologies are reviewed, an obstacle avoidance technology based on a particle swarm global optimization algorithm is proposed. A simulation comparison is carried out against the traditional approach based on the artificial potential field method. Results show that the proposed particle swarm optimization strategy is markedly better than the artificial potential field strategy, both in obstacle avoidance effect and in the ability to return to the preset inspection track after passing an obstacle. An effective method is thus provided for UAV obstacle avoidance during transmission line inspection.

  11. Effective search for stable segregation configurations at grain boundaries with data-mining techniques

    NASA Astrophysics Data System (ADS)

    Kiyohara, Shin; Mizoguchi, Teruyasu

    2018-03-01

    Grain boundary segregation of dopants plays a crucial role in materials properties. To investigate dopant segregation behavior at a grain boundary, an enormous number of configurations has to be considered when multiple dopants segregate at complex grain boundary structures. Here, two data-mining techniques, random-forests regression and a genetic algorithm, were applied to determine stable segregation sites at grain boundaries efficiently. Using the random-forests method, a predictive model was constructed from 2% of the segregation configurations, and it was shown that this model could determine the stable segregation configurations. Furthermore, the genetic algorithm also successfully determined the most stable segregation configuration with great efficiency. We demonstrate that these approaches are quite effective for investigating dopant segregation behavior at grain boundaries.

  12. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.

    PubMed

    Sum, John Pui-Fai; Leung, Chi-Sing; Ho, Kevin I-J

    2012-02-01

    Improving the fault tolerance of a neural network has been studied for more than two decades, and various training algorithms have been proposed over the years. The on-line node fault injection-based algorithm is one of them: hidden nodes randomly output zeros during training. While the idea is simple, theoretical analyses of this algorithm are far from complete. This paper presents its objective function and a convergence proof. We consider three cases for multilayer perceptrons (MLPs): (1) MLPs with a single linear output node; (2) MLPs with multiple linear output nodes; and (3) MLPs with a single sigmoid output node. For the convergence proof, we show that the algorithm converges with probability one. For the objective function, we show that the corresponding objective functions of cases (1) and (2) are of the same form: they both consist of a mean squared error term, a regularizer term, and a weight decay term. For case (3), the objective function is slightly different from that of cases (1) and (2). With the objective functions derived, we can compare the similarities and differences among various algorithms and various cases.
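
    Case (1), a single linear output node, can be sketched as follows; the tanh network, learning rate, fault rate and toy regression target are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data for case (1): a single linear output node.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

n_hidden, fault_rate, lr = 16, 0.1, 0.05
W1 = rng.normal(0.0, 0.5, (2, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(0.0, 0.5, n_hidden)

def forward(x, mask):
    h = np.tanh(x @ W1 + b1) * mask   # faulty hidden nodes output zero
    return h, h @ w2

for epoch in range(200):
    for i in rng.permutation(len(X)):
        # On-line node fault injection: each hidden node independently
        # outputs zero with probability fault_rate during training.
        mask = (rng.random(n_hidden) >= fault_rate).astype(float)
        h, out = forward(X[i], mask)
        err = out - y[i]
        # Single-sample gradient step through the masked network.
        grad_h = err * w2 * mask * (1.0 - np.tanh(X[i] @ W1 + b1) ** 2)
        w2 -= lr * err * h
        W1 -= lr * np.outer(X[i], grad_h)
        b1 -= lr * grad_h

# Fault-free evaluation after training.
_, preds = forward(X, np.ones(n_hidden))
mse = float(np.mean((preds - y) ** 2))
```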

  13. Towards an optimal treatment algorithm for metastatic pancreatic ductal adenocarcinoma (PDA)

    PubMed Central

    Uccello, M.; Moschetta, M.; Mak, G.; Alam, T.; Henriquez, C. Murias; Arkenau, H.-T.

    2018-01-01

    Chemotherapy remains the mainstay of treatment for advanced pancreatic ductal adenocarcinoma (PDA). Two randomized trials have demonstrated the superiority of the combination regimens FOLFIRINOX (5-fluorouracil, leucovorin, oxaliplatin, and irinotecan) and gemcitabine plus nab-paclitaxel over gemcitabine monotherapy as first-line treatment in adequately fit subjects. Selected PDA patients progressing on first-line therapy can receive second-line treatment with moderate clinical benefit. Nevertheless, the optimal algorithm and the role of combination therapy in second line are still unclear. Published second-line PDA clinical trials enrolled patients progressing on gemcitabine-based therapies in use before the approval of nab-paclitaxel and FOLFIRINOX. The evolving scenario in second line may affect the choice of the first-line treatment. For example, nanoliposomal irinotecan plus 5-fluorouracil and leucovorin is a novel second-line option which will be suitable only for patients progressing on gemcitabine-based therapy. Therefore, clinical judgement and appropriate patient selection remain key elements in treatment decisions. In this review, we aim to illustrate currently available options and define a possible algorithm to guide treatment choice. Future clinical trials taking into account sequential treatment as a new paradigm in PDA will help define a standard algorithm. PMID:29507500

  14. Smart Grid Integrity Attacks: Characterizations and Countermeasures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annarita Giani; Eilyan Bitar; Miles McQueen

    2011-10-01

    Real power injections at loads and generators, and real power flows on selected lines in a transmission network, are monitored, transmitted over a SCADA network to the system operator, and used in state estimation algorithms to make dispatch, re-balance and other energy management system [EMS] decisions. Coordinated cyber attacks on power meter readings can be arranged to be undetectable by any bad data detection algorithm. These unobservable attacks present a serious threat to grid operations. Of particular interest are sparse attacks that involve the compromise of a modest number of meter readings. An efficient algorithm to find all unobservable attacks [under standard DC load flow approximations] involving the compromise of exactly two power injection meters and an arbitrary number of power meters on lines is presented; it requires O(n^2 m) flops for a power system with n buses and m line meters. If all lines are metered, there exist canonical forms that characterize all 3-, 4-, and 5-sparse unobservable attacks. These can be quickly detected in power systems using standard graph algorithms. Known secure phasor measurement units [PMUs] can be used as countermeasures against an arbitrary collection of cyber attacks. Finding the minimum number of necessary PMUs is NP-hard. It is shown that p + 1 PMUs at carefully chosen buses are sufficient to neutralize a collection of p cyber attacks.
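
    Why such coordinated attacks are unobservable is easy to demonstrate under the DC model: an attack vector of the form a = Hc lies in the column space of the measurement matrix, so it shifts the state estimate by c while leaving the bad-data residual unchanged. The matrix below is a random stand-in, not a real network Jacobian:

```python
import numpy as np

rng = np.random.default_rng(1)

# DC state-estimation model z = H x + e: n = 3 bus angles,
# m = 6 meters (injections and line flows), H a stand-in matrix.
H = rng.normal(size=(6, 3))
x_true = rng.normal(size=3)
z = H @ x_true + 0.01 * rng.normal(size=6)

def residual_norm(z, H):
    """Residual-based bad-data statistic after a least-squares
    state estimate (unit measurement weights)."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

# Coordinated attack a = H c: in the column space of H, hence invisible
# to any residual-based bad-data detector.
c = np.array([0.5, -0.2, 0.1])          # attacker-chosen state shift
r_clean = residual_norm(z, H)
r_attacked = residual_norm(z + H @ c, H)

# An uncoordinated tampering vector generically leaves the column
# space and inflates the residual, so it is flagged.
r_naive = residual_norm(z + np.ones(6), H)
```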

  15. Curved-line search algorithm for ab initio atomic structure relaxation

    NASA Astrophysics Data System (ADS)

    Chen, Zhanghui; Li, Jingbo; Li, Shushen; Wang, Lin-Wang

    2017-09-01

    Ab initio atomic relaxations often take large numbers of steps and long times to converge, especially when the initial atomic configurations are far from the local minimum or there are curved and narrow valleys in the multidimensional potentials. An atomic relaxation method based on on-the-fly force learning and a corresponding curved-line search algorithm is presented to accelerate this process. Results demonstrate the superior performance of this method for metal and magnetic clusters when compared with the conventional conjugate-gradient method.

  16. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural-network-based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system that is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle), improving aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural-network-based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line, and it is hoped to provide additional benefits beyond those of PSC. The PSC algorithm is computationally intensive, is valid only at near-steady-state flight conditions, and has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller: specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and it will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware are described and preliminary neural network training results are presented.

  17. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x1, x2, ..., xn) be a vector of real numbers. x is said to possess an integer relation if there exist integers a_i, not all zero, such that a1 x1 + a2 x2 + ... + an xn = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the a_i given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically: it often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
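
    For illustration only, an integer relation can be found by brute force on a tiny vector; this exhaustive search is exponential in the vector length, which is exactly the cost that PSLQ's polynomial iteration bound avoids:

```python
from itertools import product
from math import log

def find_relation(x, bound=5, tol=1e-9):
    """Exhaustive search for a small integer relation a . x = 0 with
    |a_i| <= bound. Illustrative only, and not the PSLQ algorithm:
    the search space grows as (2*bound + 1)**len(x)."""
    for a in product(range(-bound, bound + 1), repeat=len(x)):
        if any(a) and abs(sum(ai * xi for ai, xi in zip(a, x))) < tol:
            return a
    return None

# log 6 = log 2 + log 3, so every relation for this vector is an
# integer multiple of (1, 1, -1).
rel = find_relation([log(2), log(3), log(6)])
```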

  18. Density-matrix-based algorithm for solving eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Polizzi, Eric

    2009-03-01

    A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
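    The contour-integration idea at the heart of such methods can be illustrated in a few lines: approximate the spectral projector P = (1/2πi) ∮ (zI - A)^(-1) dz by quadrature on a circle, and count the enclosed eigenvalues via trace(P). This toy sketch (dense solves, made-up test matrix) shows only the core idea, not the FEAST algorithm itself:

    ```python
    import numpy as np

    # Build a symmetric test matrix with a known spectrum, then approximate
    # the spectral projector onto the eigenspace inside the unit circle by
    # quadrature of the resolvent along the contour.
    rng = np.random.default_rng(0)
    n = 8
    d = np.array([-3.0, -2.0, -1.5, -0.5, 0.2, 0.7, 1.8, 2.5])  # chosen spectrum
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(d) @ Q.T            # symmetric matrix with eigenvalues d

    center, radius, m = 0.0, 1.0, 64    # circular contour and quadrature nodes
    I = np.eye(n)
    P = np.zeros((n, n), dtype=complex)
    for k in range(m):
        theta = 2.0 * np.pi * k / m
        z = center + radius * np.exp(1j * theta)
        dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / m)
        P += np.linalg.solve(z * I - A, I) * dz
    P /= 2j * np.pi

    print(round(P.trace().real))        # 3 of the eigenvalues lie inside |z| < 1
    ```

    FEAST builds on the same projector: applying it to a block of trial vectors and solving a small reduced eigenproblem yields the eigenpairs inside the contour.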

  19. [Evoked Potential Blind Extraction Based on Fractional Lower Order Spatial Time-Frequency Matrix].

    PubMed

    Long, Junbo; Wang, Haibin; Zha, Daifeng

    2015-04-01

    The impulsive electroencephalograph (EEG) noise in evoked potential (EP) signals is very strong, usually with heavy-tailed, infinite-variance characteristics, as produced by acceleration impacts, hypoxia, etc., in special tests. Such noise can be described by an α-stable distribution model. In this paper, improved Wigner-Ville distribution (WVD) and pseudo Wigner-Ville distribution (PWVD) time-frequency distributions based on fractional lower order moments are presented: the fractional lower order WVD (FLO-WVD) and fractional lower order PWVD (FLO-PWVD), which are suitable for α-stable distribution processes. We also propose the fractional lower order spatial time-frequency distribution matrix (FLO-STFM) concept. Combining this with time-frequency underdetermined blind source separation (TF-UBSS), we propose a new fractional lower order spatial time-frequency underdetermined blind source separation (FLO-TF-UBSS) method that can work in an α-stable distribution environment, and we use the FLO-TF-UBSS algorithm to extract EPs. Simulations showed that the proposed method could effectively extract EPs in EEG noise: the EPs and EEG signals separated with FLO-TF-UBSS were almost the same as the original signals, whereas blind separation based on TF-UBSS showed a certain deviation. The correlation coefficient of the FLO-TF-UBSS algorithm was higher than that of the TF-UBSS algorithm, and approximately equal to 1, as the generalized signal-to-noise ratio (GSNR) changed from 10 dB to 30 dB and α varied from 1.06 to 1.94. Hence, the proposed FLO-TF-UBSS method may be better than the second-order-based TF-UBSS algorithm for extracting EP signals in an EEG noise environment.

  20. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    NASA Technical Reports Server (NTRS)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence.
In the present second part of the report, we focus on the specific details of the numerical schemes, and associated computer algorithms, for the finite-element implementation of GVIPS and NAV models.
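    The backbone of such integrators (backward-Euler residual, consistent tangent, Newton-Raphson with a line search) can be sketched on a scalar model problem. The stiff rate equation below is a made-up stand-in for a viscoplastic flow rule, not the GVIPS or NAV models:

    ```python
    import math

    # Stand-in stiff evolution law x' = f(x) and its exact derivative.
    def f(x):
        return -50.0 * (x - math.cos(x))

    def df(x):
        return -50.0 * (1.0 + math.sin(x))

    def backward_euler_step(x_old, dt, tol=1e-12, max_iter=50):
        """One implicit step x_new = x_old + dt*f(x_new), solved by Newton
        with a backtracking line search on the residual."""
        x = x_old                          # initial guess: previous state
        for _ in range(max_iter):
            r = x - x_old - dt * f(x)      # residual of the implicit equation
            if abs(r) < tol:
                return x
            dr = 1.0 - dt * df(x)          # consistent (exact) tangent
            step = -r / dr
            alpha = 1.0                    # accept only residual-reducing steps
            while abs((x + alpha * step) - x_old - dt * f(x + alpha * step)) >= abs(r):
                alpha *= 0.5
                if alpha < 1e-8:
                    break
            x += alpha * step
        return x

    x = 1.0
    for _ in range(20):
        x = backward_euler_step(x, dt=0.1)
    print(x)   # approaches the fixed point x = cos(x) ≈ 0.739
    ```

    The same pattern generalizes to the matrix forms in the report: the scalar residual becomes a vector over the stress and internal-state unknowns, and dr becomes the consistent material tangent stiffness array.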

  1. A fast, robust algorithm for power line interference cancellation in neural recording.

    PubMed

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. 
The proposed algorithm features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.
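    A minimal sketch of the estimate-and-subtract core (single 50 Hz harmonic, frequency assumed known, synthetic data) may clarify the recursive least squares step; the full algorithm additionally tracks the fundamental frequency with an adaptive notch filter and treats every harmonic:

    ```python
    import numpy as np

    # Synthetic recording: weak "neural" noise plus a strong 50 Hz sinusoid.
    fs = 1000.0                                  # sampling rate, Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)
    neural = 0.1 * np.random.default_rng(1).standard_normal(t.size)
    interference = 1.5 * np.sin(2 * np.pi * 50.0 * t + 0.8)
    x = neural + interference

    # RLS on sin/cos regressors tracks the amplitude and phase of the harmonic.
    lam = 0.995                                  # forgetting factor
    w = np.zeros(2)                              # in-phase/quadrature weights
    P = np.eye(2) * 1e3
    cleaned = np.empty_like(x)
    for n in range(x.size):
        phi = np.array([np.sin(2 * np.pi * 50.0 * t[n]),
                        np.cos(2 * np.pi * 50.0 * t[n])])
        e = x[n] - w @ phi                       # innovation
        k = P @ phi / (lam + phi @ P @ phi)      # RLS gain
        w += k * e                               # update amplitude/phase estimate
        P = (P - np.outer(k, phi @ P)) / lam
        cleaned[n] = x[n] - w @ phi              # subtract estimated interference
    ```

    Because the sin/cos pair spans any phase, the weight vector tracks slow amplitude and phase drifts of the interference while leaving broadband neural activity in the residual.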

  2. A fast, robust algorithm for power line interference cancellation in neural recording

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. 
The proposed algorithm features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.

  3. The detailed characteristics of positive corona current pulses in the line-to-plane electrodes

    NASA Astrophysics Data System (ADS)

    Xuebao, LI; Dayong, LI; Qian, ZHANG; Yinfei, LI; Xiang, CUI; Tiebing, LU

    2018-05-01

    The corona current pulses generated by corona discharge are the sources of the radio interference from transmission lines, and the detailed characteristics of the corona current pulses from conductors should be investigated in order to reveal their generation mechanism. In this paper, the line-to-plane electrodes are designed to measure and analyze the characteristics of corona current pulses from positive corona discharges. The influences of inter-electrode gap and line diameter on the detailed characteristics of corona current pulses, such as pulse amplitude, rise time, duration time and repetition frequency, are carefully analyzed. The obtained results show that the pulse amplitude and the repetition frequency increase with the diameter of the line electrode when the electric fields on the surface of the line electrodes are the same. With the increase of the inter-electrode gap, the pulse amplitude and the repetition frequency first decrease and then become stable, while the rise time first increases and finally becomes stable. The distributions of electric field and space charges under the line electrodes are calculated, and the influences of inter-electrode gap and line electrode diameter on the experimental results are qualitatively explained.

  4. Interferometric tomography of continuous fields with incomplete projections

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung S.; Sun, Hogwei

    1988-01-01

    Interferometric tomography in the presence of an opaque object is investigated. The developed iterative algorithm does not need to augment the missing information. It is based on the successive reconstruction of the difference field (the difference between the object field to be reconstructed and its estimate) only in the defined region. The application of the algorithm results in stable convergence.

  5. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

    New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  6. Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dymond, K. F.; Budzien, S. A.; Hei, M. A.

    2017-03-01

    We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach based on regularization to a partial differential equation.
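    As an illustration of this class of multiplicative, nonnegativity-preserving iterations, here is a Richardson-Lucy-type update on a made-up dense toy system (real ionospheric problems use large sparse projection matrices):

    ```python
    import numpy as np

    # Toy nonnegative linear system b = A x, consistent and noise-free.
    rng = np.random.default_rng(2)
    A = rng.uniform(0.0, 1.0, (40, 10))      # nonnegative projection matrix
    x_true = rng.uniform(0.5, 2.0, 10)       # nonnegative unknown
    b = A @ x_true

    # Richardson-Lucy multiplicative iteration: x stays nonnegative by
    # construction, one of the ISRA properties highlighted above.
    x = np.ones(10)                          # strictly positive initial guess
    col_sums = A.sum(axis=0)                 # A^T 1
    for _ in range(5000):
        x *= (A.T @ (b / (A @ x))) / col_sums

    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # small relative residual
    ```

    Each update rescales x by the ratio of back-projected measured to predicted data, so zeros are preserved and no explicit positivity constraint is needed.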

  7. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  8. Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1989-01-01

    The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 b/pixel. The sampling rate used with this algorithm results in 768 samples over the active portion of each video line by 512 active video lines per video frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
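    The DPCM core (prediction from previously reconstructed samples, quantization of the prediction error) can be sketched for one scan line. This sketch uses a trivial previous-sample predictor and a uniform quantizer, whereas the codec described above uses a nonadaptive multi-sample predictor, a nonuniform quantizer, and Huffman coding:

    ```python
    import numpy as np

    # Bare-bones DPCM loop: the encoder predicts each sample from the
    # previously RECONSTRUCTED one, so encoder and decoder stay in lockstep
    # and quantization errors do not accumulate along the line.
    def dpcm_encode(line, step):
        codes, prev = [], 0.0
        for s in line:
            e = s - prev                      # prediction error
            q = int(round(e / step))          # uniform quantizer index
            codes.append(q)
            prev = prev + q * step            # track the decoded value
        return codes

    def dpcm_decode(codes, step):
        out, prev = [], 0.0
        for q in codes:
            prev = prev + q * step
            out.append(prev)
        return out

    line = np.cumsum(np.random.default_rng(3).normal(0, 2.0, 768))
    rec = dpcm_decode(dpcm_encode(line, step=1.0), step=1.0)
    print(np.max(np.abs(np.asarray(rec) - line)))   # bounded by step/2
    ```

    Predicting from the reconstructed value rather than the original is the key design choice: it keeps the per-sample reconstruction error bounded by half the quantizer step instead of letting it drift.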

  9. A Population of Deletion Mutants and an Integrated Mapping and Exome-seq Pipeline for Gene Discovery in Maize

    PubMed Central

    Jia, Shangang; Li, Aixia; Morton, Kyla; Avoles-Kianian, Penny; Kianian, Shahryar F.; Zhang, Chi; Holding, David

    2016-01-01

    To better understand maize endosperm filling and maturation, we used γ-irradiation of the B73 maize reference line to generate mutants with opaque endosperm and reduced kernel fill phenotypes, and created a population of 1788 lines including 39 Mo17 × F2s showing stable, segregating, and viable kernel phenotypes. For molecular characterization of the mutants, we developed a novel functional genomics platform that combined bulked segregant RNA and exome sequencing (BSREx-seq) to map causative mutations and identify candidate genes within mapping intervals. To exemplify the utility of the mutants and provide proof-of-concept for the bioinformatics platform, we present detailed characterization of line 937, an opaque mutant harboring a 6203 bp in-frame deletion covering six exons within the Opaque-1 gene. In addition, we describe mutant line 146 which contains a 4.8 kb intragene deletion within the Sugary-1 gene and line 916 in which an 8.6 kb deletion knocks out a Cyclin A2 gene. The publicly available algorithm developed in this work improves the identification of causative deletions and their corresponding gaps within mapping peaks. This study demonstrates the utility of γ-irradiation for forward genetics in large nondense genomes such as maize since deletions often affect single genes. Furthermore, we show how this classical mutagenesis method becomes applicable for functional genomics when combined with state-of-the-art genomics tools. PMID:27261000

  10. Management of stable angina: A commentary on the European Society of Cardiology guidelines.

    PubMed

    Ambrosio, Giuseppe; Mugelli, Alessandro; Lopez-Sendón, José; Tamargo, Juan; Camm, John

    2016-09-01

    In 2013 the European Society of Cardiology (ESC) released new guidelines on the management of stable coronary artery disease. These guidelines update and replace the previous ESC guidelines on the management of stable angina pectoris, issued in 2006. There are several new aspects in the 2013 ESC guidelines compared with the 2006 version. This opinion paper provides an in-depth interpretation of the ESC guidelines with regard to these issues, to help physicians in making evidence-based therapeutic choices in their routine clinical practice. The first new element is the definition of stable coronary artery disease itself, which has now broadened from a 'simple' symptom, angina pectoris, to a more complex disease that can even be asymptomatic. In the first-line setting, the major changes in the new guidelines are the upgrading of calcium channel blockers, the distinction between dihydropyridines and non-dihydropyridine calcium channel blockers, and the presence of important statements regarding the combination of calcium channel blockers with beta-blockers. In the second-line setting, the 2013 ESC guidelines recommend the addition of long-acting nitrates, ivabradine, nicorandil or ranolazine to first-line agents. Trimetazidine may also be considered. However, no clear distinction is made among different second-line drugs, despite different quality of evidence in favour of these agents. For example, the use of ranolazine is supported by strong and recent evidence, while data supporting the use of the traditional agents appear relatively scanty. © The European Society of Cardiology 2016.

  11. Massively parallel algorithms for trace-driven cache simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace-driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C-line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive-read, exclusive-write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C-line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
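    For reference, the underlying sequential computation that these parallel algorithms accelerate is straightforward: replay the trace against a C-line LRU set and record the misses.

    ```python
    from collections import OrderedDict

    # Sequential LRU set simulation: count the misses in a reference trace
    # directed to a single C-line cache set.
    def lru_misses(trace, C):
        cache = OrderedDict()          # keys kept in recency order, oldest first
        misses = 0
        for x in trace:
            if x in cache:
                cache.move_to_end(x)   # hit: mark most recently used
            else:
                misses += 1
                cache[x] = True
                if len(cache) > C:
                    cache.popitem(last=False)   # evict least recently used
        return misses

    print(lru_misses([1, 2, 3, 1, 4, 1, 2], C=2))
    ```

    This replay is the baseline; the paper's contribution is computing the same miss statistics in O(log N) or O(C log N) parallel time rather than sequentially.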

  12. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally-updated statistics enables the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With a satisfying denoising performance and real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
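    The incremental-statistics idea can be sketched with generic Welford-type updates of the mean and covariance one scan line at a time (this is the general technique, not necessarily the paper's exact update formulas):

    ```python
    import numpy as np

    # Running mean and covariance over hyperspectral pixels, updated line by
    # line, so an MNF-style transform can be recomputed as lines arrive.
    class RunningCovariance:
        def __init__(self, bands):
            self.n = 0
            self.mean = np.zeros(bands)
            self.M2 = np.zeros((bands, bands))   # sum of outer-product deviations

        def update_line(self, line):             # line: (pixels, bands)
            for x in line:
                self.n += 1
                d = x - self.mean
                self.mean += d / self.n
                self.M2 += np.outer(d, x - self.mean)

        def covariance(self):
            return self.M2 / (self.n - 1)

    rng = np.random.default_rng(4)
    data = rng.standard_normal((32, 100, 5))     # 32 lines, 100 pixels, 5 bands
    rc = RunningCovariance(5)
    for line in data:
        rc.update_line(line)                     # statistics usable after each line
    ```

    After every line, the current covariance estimate can feed the noise-fraction eigendecomposition, which is what removes the need to wait for the full image.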

  13. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines

    PubMed Central

    Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim

    2008-01-01

    This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements the line thinning and the simple neighborhood methods to perform vectorization. The model allows users to define specified criteria which are crucial for the vectorization process. In this model, various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm of the model was implemented in an appropriate computer program and tested on a basic application. Results, verified by using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately. PMID:27879843

  14. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines.

    PubMed

    Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim

    2008-04-15

    This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements the line thinning and the simple neighborhood methods to perform vectorization. The model allows users to define specified criteria which are crucial for the vectorization process. In this model, various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm of the model was implemented in an appropriate computer program and tested on a basic application. Results, verified by using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately.

  15. Cohomology of line bundles: Applications

    NASA Astrophysics Data System (ADS)

    Blumenhagen, Ralph; Jurke, Benjamin; Rahn, Thorsten; Roschy, Helmut

    2012-01-01

    Massless modes of both heterotic and Type II string compactifications on compact manifolds are determined by vector bundle valued cohomology classes. Various applications of our recent algorithm for the computation of line bundle valued cohomology classes over toric varieties are presented. For the heterotic string, the prime examples are so-called monad constructions on Calabi-Yau manifolds. In the context of Type II orientifolds, one often needs to compute cohomology for line bundles on finite group action coset spaces, which requires us to generalize our algorithm to this case. Moreover, we exemplify that the different terms in Batyrev's formula and its generalizations can be given a one-to-one cohomological interpretation. Furthermore, we derive a combinatorial closed form expression for two Hodge numbers of a codimension two Calabi-Yau fourfold.

  16. Acute multi-sgRNA knockdown of KEOPS complex genes reproduces the microcephaly phenotype of the stable knockout zebrafish model

    PubMed Central

    Schneider, Ronen; Hoogstraten, Charlotte A.; Schapiro, David; Majmundar, Amar J.; Kolb, Amy; Eddy, Kaitlyn; Shril, Shirlee; Braun, Daniela A.; Poduri, Annapurna

    2018-01-01

    Until recently, morpholino oligonucleotides have been widely employed in zebrafish as an acute and efficient loss-of-function assay. However, off-target effects and reproducibility issues when compared to stable knockout lines have compromised their further use. Here we employed an acute CRISPR/Cas approach using multiple single guide RNAs targeting simultaneously different positions in two exemplar genes (osgep or tprkb) to increase the likelihood of generating mutations on both alleles in the injected F0 generation and to achieve a similar effect as morpholinos but with the reproducibility of stable lines. This multi single guide RNA approach resulted in median likelihoods for at least one mutation on each allele of >99% and sgRNA specific insertion/deletion profiles as revealed by deep-sequencing. Immunoblot showed a significant reduction for Osgep and Tprkb proteins. For both genes, the acute multi-sgRNA knockout recapitulated the microcephaly phenotype and reduction in survival that we observed previously in stable knockout lines, though milder in the acute multi-sgRNA knockout. Finally, we quantify the degree of mutagenesis by deep sequencing, and provide a mathematical model to quantitate the chance for a biallelic loss-of-function mutation. Our findings can be generalized to acute and stable CRISPR/Cas targeting for any zebrafish gene of interest. PMID:29346415
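    As a hedged sketch of the kind of calculation the closing sentence refers to: if each of m sgRNAs independently produces a loss-of-function mutation on a given allele with probability p, the chance of a biallelic hit is (1 - (1 - p)^m)^2. The numbers below are illustrative, not estimates from the paper:

    ```python
    # Probability that BOTH alleles carry at least one mutation, assuming m
    # independent sgRNAs each hit a given allele with probability p.
    def p_biallelic(p, m):
        per_allele = 1.0 - (1.0 - p) ** m    # at least one hit on one allele
        return per_allele ** 2               # both alleles, by independence

    print(p_biallelic(0.75, 4))              # four guides push this above 0.99
    ```

    The point of the multi-sgRNA design is visible here: per-allele probabilities well short of 1 still drive the biallelic likelihood above 99% once several guides are combined.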

  17. Evolution with Reinforcement Learning in Negotiation

    PubMed Central

    Zou, Yi; Zhan, Wenjie; Shao, Yuan

    2014-01-01

    Adaptive behavior depends less on the details of the negotiation process and makes more robust predictions in the long term as compared to in the short term. However, the extant literature on population dynamics for behavior adjustment has only examined the current situation. To offset this limitation, we propose a synergy of evolutionary algorithm and reinforcement learning to investigate long-term collective performance and strategy evolution. The model adopts reinforcement learning with a tradeoff between historical and current information to make decisions when the strategies of agents evolve through repeated interactions. The results demonstrate that the strategies in populations converge to stable states, and the agents gradually form steady negotiation habits. Agents that adopt reinforcement learning perform better in payoff, fairness, and stability than their counterparts using the classic evolutionary algorithm. PMID:25048108

  18. Evolution with reinforcement learning in negotiation.

    PubMed

    Zou, Yi; Zhan, Wenjie; Shao, Yuan

    2014-01-01

    Adaptive behavior depends less on the details of the negotiation process and makes more robust predictions in the long term as compared to in the short term. However, the extant literature on population dynamics for behavior adjustment has only examined the current situation. To offset this limitation, we propose a synergy of evolutionary algorithm and reinforcement learning to investigate long-term collective performance and strategy evolution. The model adopts reinforcement learning with a tradeoff between historical and current information to make decisions when the strategies of agents evolve through repeated interactions. The results demonstrate that the strategies in populations converge to stable states, and the agents gradually form steady negotiation habits. Agents that adopt reinforcement learning perform better in payoff, fairness, and stability than their counterparts using the classic evolutionary algorithm.

  19. [Lentivirus-mediated shRNA silencing of LAMP2A inhibits the proliferation of multiple myeloma cells].

    PubMed

    Li, Lixuan; Li, Jia

    2015-05-01

    To study the effects of lentivirus-mediated short hairpin RNA (shRNA) silencing of lysosome-associated membrane protein type 2A (LAMP2A) expression on the proliferation of multiple myeloma cells. The constructed shRNA lentiviral vector was applied to infect the human multiple myeloma cell line MM.1S, and a stable expression cell line was obtained by puromycin screening. Western blotting was used to verify the inhibitory effect on LAMP2A protein expression. An MTT assay was conducted to detect the effect of LAMP2A knockdown on MM.1S cell proliferation, and the anti-tumor potency of suberoylanilide hydroxamic acid (SAHA) against the obtained MM.1S LAMP2A(shRNA) stable cell line. A lactate assay was performed to observe the impact of low LAMP2A expression on cell glycolysis. A stable cell line with low LAMP2A expression was obtained with the constructed human LAMP2A-shRNA lentiviral vector. Down-regulation of LAMP2A expression significantly inhibited MM.1S cell proliferation and enhanced the anti-tumor activity of SAHA. Interestingly, decreased LAMP2A expression also inhibited MM.1S cell lactic acid secretion. Down-regulation of LAMP2A expression can inhibit cell proliferation in multiple myeloma cells.

  20. Automated system for analyzing the activity of individual neurons

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Johnson, Kenneth O.; Menkes, Alex M.; Diamond, Steve D.; Oshaughnessy, David M.

    1993-01-01

    This paper presents a signal processing system that: (1) provides an efficient and reliable instrument for investigating the activity of neuronal assemblies in the brain; and (2) demonstrates the feasibility of generating the command signals of prostheses using the activity of relevant neurons in disabled subjects. The system operates online, in a fully automated manner and can recognize the transient waveforms of several neurons in extracellular neurophysiological recordings. Optimal algorithms for detection, classification, and resolution of overlapping waveforms are developed and evaluated. Full automation is made possible by an algorithm that can set appropriate decision thresholds and an algorithm that can generate templates on-line. The system is implemented with a fast IBM PC compatible processor board that allows on-line operation.
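The detection and classification steps the abstract describes can be illustrated with threshold detection and nearest-template matching. This is a toy sketch, not the authors' optimal algorithms: the threshold rule, the toy templates, and the squared-error metric are assumptions.

```python
# Illustrative sketch of on-line spike sorting: detect threshold crossings,
# then assign each waveform to the nearest stored template.

def detect(signal, threshold):
    """Return sample indices where the signal exceeds the detection threshold."""
    return [i for i, v in enumerate(signal) if abs(v) > threshold]

def classify(waveform, templates):
    """Assign a waveform to the template with the smallest squared error."""
    def sq_err(name):
        return sum((a - b) ** 2 for a, b in zip(waveform, templates[name]))
    return min(templates, key=sq_err)

# Hypothetical templates for two neurons and one noisy detected waveform:
templates = {"neuron_A": [0.0, 1.0, 0.0], "neuron_B": [0.0, -1.0, 0.0]}
spike = [0.1, 0.9, -0.1]
label = classify(spike, templates)
```

In the paper both the thresholds and the templates are generated automatically on-line; here they are fixed by hand to keep the sketch self-contained.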

  1. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part II: General formulation

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi

    2017-08-01

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. The numerical scheme is verified on a number of difficult benchmark problems.

  2. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core and multi-core CPU and graphical processing unit (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. The large high-resolution data grids in our studies employ a pre-filtered mipmap-pyramid representation of the grid data known as the geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on the fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies, including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just a single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition is that the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply the appropriate formulation to those data points.
We will present and compare the computational performance of the traditional prism method versus the line element approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired. This algorithm can be used for all fast forward model calculations of 3D geologic interpretations for data from airborne, space and submarine gravity, and FTG instrumentation.
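The line-element idea can be sketched for the vertical-gravity component: each grid cell is collapsed to a vertical line mass, whose attraction at a station has a closed form (a textbook potential-theory result; the density and geometry below are invented for illustration, and this is not the authors' code). As the abstract notes, the formula is singular for a station directly above the element (s = 0), which is why a location test is needed.

```python
# Hedged sketch: vertical gravity of a vertical line mass of linear density
# lam (kg/m), at horizontal offset s from the station, spanning depths
# z1..z2 below it:  g_z = G * lam * (1/sqrt(s^2+z1^2) - 1/sqrt(s^2+z2^2)).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_line(lam, s, z1, z2):
    """Vertical gravity (m/s^2) of a vertical line element; requires s != 0."""
    return G * lam * (1.0 / (s * s + z1 * z1) ** 0.5
                      - 1.0 / (s * s + z2 * z2) ** 0.5)
```

Summing this closed form over all cells gives the station value; the same derivative trick extends to the FTG components.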

  3. Ternary alloy material prediction using genetic algorithm and cluster expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chong

    2015-12-01

    This thesis summarizes our study on crystal structure prediction for the Fe-V-Si system using a genetic algorithm and cluster expansion. Our goal is to explore and look for new stable compounds. We started from the ten known experimental phases and calculated formation energies of those compounds using a density functional theory (DFT) package, namely VASP. The convex hull was generated based on the DFT calculations of the experimentally known phases. Then we did a random search on some metal-rich (Fe and V) compositions and found that the lowest-energy structures had a body-centered cubic (bcc) underlying lattice, on which we performed systematic computational searches using the genetic algorithm and cluster expansion. Among the hundreds of searched compositions, thirteen were selected and their DFT formation energies were obtained with VASP. The stability of those thirteen compounds was checked against the experimental convex hull. We found that the composition 24-8-16, i.e., Fe3VSi2, is a new stable phase, which can be very inspiring for future experiments.
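The stability check against a convex hull can be sketched for a binary system for simplicity (the thesis treats the ternary Fe-V-Si hull, which works the same way in one more dimension): a candidate phase is stable if its formation energy lies on or below the lower hull of the known phases. The compositions and energies below are invented for illustration.

```python
# Hedged sketch: lower convex hull of (composition, formation energy) points
# and the "energy above hull" stability measure for a candidate phase.

def lower_hull(points):
    """Lower convex hull of (x, E) points, left to right."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last hull point if p lies below the current hull edge
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def energy_above_hull(x, e, hull):
    """Height of phase (x, e) above the lower hull; <= 0 means stable."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            e_hull = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
            return e - e_hull
    raise ValueError("composition outside hull range")

# Hypothetical known phases: (composition fraction, formation energy eV/atom)
known = [(0.0, 0.0), (0.25, -0.3), (0.5, -0.5), (1.0, 0.0)]
hull = lower_hull(known)
```

A candidate below the hull would, as in the thesis, define a new stable phase and update the hull.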

  4. Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring

    NASA Astrophysics Data System (ADS)

    Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank

    2018-04-01

    Current research questions in the field of geomorphology focus on the impact of climate change on several processes subsequently causing natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), that solely uses point cloud data as input, similar to the iterative closest point algorithm (ICP). The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70 % on average in comparison to reference data.
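The inner step that ICP-style methods such as ICProx repeat is a least-squares rigid fit to the current point correspondences. A minimal closed-form 2-D version of that building block (not the ICProx algorithm itself, which also classifies stable vs. deformed areas) can be sketched as:

```python
import math

# Hedged sketch: least-squares rigid transform (rotation + translation)
# between two 2-D point sets with known correspondences.

def rigid_fit(src, dst):
    """Return (angle, tx, ty) mapping src onto dst in the least-squares sense."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    s_dot = s_cross = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy          # centered source point
        bx, by = dx - cdx, dy - cdy          # centered target point
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    angle = math.atan2(s_cross, s_dot)       # optimal rotation angle
    c, s = math.cos(angle), math.sin(angle)
    tx = cdx - (c * csx - s * csy)           # translation from centroids
    ty = cdy - (s * csx + c * csy)
    return angle, tx, ty

# Synthetic check: rotate a unit square by 30 degrees and shift it.
theta = math.pi / 6
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(math.cos(theta) * x - math.sin(theta) * y + 1.0,
        math.sin(theta) * x + math.cos(theta) * y + 2.0) for x, y in src]
angle, tx, ty = rigid_fit(src, dst)
```

Full ICP alternates this fit with re-estimating correspondences by closest points; ICProx additionally restricts the fit to automatically identified stable areas.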

  5. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.

    PubMed

    de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A

    2017-08-20

    Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.

  6. Matrix form for the instrument line shape of Fourier-transform spectrometers yielding a fast integration algorithm to theoretical spectra.

    PubMed

    Desbiens, Raphaël; Tremblay, Pierre; Genest, Jérôme; Bouchard, Jean-Pierre

    2006-01-20

    The instrument line shape (ILS) of a Fourier-transform spectrometer is expressed in a matrix form. For all line shape effects that scale with wavenumber, the ILS matrix is shown to be transposed in the spectral and interferogram domains. The novel representation of the ILS matrix in the interferogram domain yields an insightful physical interpretation of the underlying process producing self-apodization. Working in the interferogram domain circumvents the problem of taking into account the effects of finite optical path difference and permits a proper discretization of the equations. A fast algorithm in O(N log2 N), based on the fractional Fourier transform, is introduced that permits the application of a constant resolving power line shape to theoretical spectra or forward models. The ILS integration formalism is validated with experimental data.

  7. [Clinical applications of a dosing algorithm in the prediction of warfarin maintenance dose].

    PubMed

    Huang, Sheng-wen; Xiang, Dao-kang; An, Bang-quan; Li, Gui-fang; Huang, Ling; Wu, Hai-li

    2011-12-27

    To evaluate the feasibility of clinical application of a genetics-based dosing algorithm for the prediction of warfarin maintenance dose in a Chinese population. The clinical data were collected and blood samples harvested from a total of 126 patients undergoing heart valve replacement. The genotypes of VKORC1 and CYP2C9 were determined by melting curve analysis after PCR. The patients were divided randomly into study and control groups. In the study group, the first three doses of warfarin were prescribed according to the predicted warfarin maintenance dose, while warfarin was initiated at 2.5 mg/d in the control group. The warfarin doses were adjusted according to the measured international normalized ratio (INR) values, and all subjects were followed for 50 days after the initiation of warfarin therapy. At the end of the 50-day follow-up period, the proportions of patients on a stable dose were 82.4% (42/51) and 62.5% (30/48) for the study and control groups respectively. The mean durations of reaching a stable dose of warfarin were (27.5 ± 1.8) and (34.7 ± 1.8) days, and the median durations were (24.0 ± 1.7) and (33.0 ± 4.5) days in the study and control groups respectively. Significant differences existed in the durations of reaching a stable dose between the two groups (P = 0.012). Compared with the control group, the hazard ratio (HR) for the duration of reaching a stable dose was 1.786 in the study group (95%CI 1.088 - 2.875, P = 0.026). The dosing algorithm incorporating genetic and non-genetic factors may efficiently shorten the time needed to reach a stable dose of warfarin, and the present study validates the feasibility of its clinical application.

  8. A fast hidden line algorithm with contour option. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Thue, R. E.

    1984-01-01

    The JonesD algorithm was modified to allow the processing of N-sided elements and implemented in conjunction with a 3-D contour generation algorithm. The total hidden line and contour subsystem is implemented in the MOVIE.BYU Display package, and is compared to the subsystems already existing in the MOVIE.BYU package. The comparison reveals that the modified JonesD hidden line and contour subsystem yields substantial processing time savings, when processing moderate sized models comprised of 1000 elements or less. There are, however, some limitations to the modified JonesD subsystem.

  9. [Tumor cells transfer between the patient and laboratory animal as a basic methodological approach to the study of cancerogenesis and identification of biomarkers].

    PubMed

    Klos, D; Stašek, M; Loveček, M; Skalický, P; Vrba, R; Aujeský, R; Havlík, R; Neoral, Č; Varanashi, L; Hajdúch, M; Vrbková, J; Džubák, P

    The investigation of prognostic and predictive factors for early diagnosis of tumors, their surveillance and monitoring of the impact of therapeutic modalities using hybrid laboratory models in vitro/in vivo is an experimental approach with a significant potential. It is preconditioned by the preparation of in vivo tumor models, which may face a number of potential technical difficulties. The assessment of technical success of grafting and xenotransplantation based on the type of the tumor or cell line is important for the preparation of these models and their further use for proteomic and genomic analyses. Surgically harvested gastrointestinal tract tumor tissue was processed or stable cancer cell lines were cultivated; the viability was assessed, and subsequently the cells were inoculated subcutaneously into SCID mice with an individual duration of tumor growth, followed by its extraction. We analysed 140 specimens of tumor tissue including 17 specimens of esophageal cancer (viability 13/successful inoculations 0), 13 tumors of the cardia (11/0), 39 gastric tumors (24/4), 47 pancreatic tumors (34/1) and 24 specimens of colorectal cancer (22/9). Three specimens were excluded due to histological absence of the tumor (complete remission after neoadjuvant therapy in 2 cases of esophageal carcinoma, 1 case of chronic pancreatitis). We observed successful inoculation in 17 of 28 tumor cell lines. The probability of successful grafting to the mouse model in tumors of the esophagus, stomach and pancreas is significantly lower in comparison with colorectal carcinoma and cell-line-generated tumors. The success rate is enhanced upon preservation of viability of the harvested tumor tissue, which depends on the sequence of clinical and laboratory algorithms with a high level of cooperation. Key words: proteomic analysis - xenotransplantation - prognostic and predictive factors - gastrointestinal tract tumors.

  10. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon tests and line clipping, can be solved similarly.
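The flavor of the idea can be sketched with a non-orthogonal (angular) subdivision around an interior point: preprocessing assigns each polygon edge to the uniform angular sectors it spans, so a query maps to its sector in O(1) and is tested against only the edge(s) there. The paper's subdivision differs in detail; the class below, its sector count, and the centroid choice are illustrative assumptions.

```python
import math

# Hedged sketch: O(1)-style point-in-convex-polygon test via uniform
# angular sectors around the centroid, built in a preprocessing pass.

class ConvexPolygonLocator:
    def __init__(self, vertices, sectors=32):
        self.v = vertices                      # counter-clockwise vertices
        n = len(vertices)
        self.cx = sum(p[0] for p in vertices) / n
        self.cy = sum(p[1] for p in vertices) / n
        self.k = sectors
        # bucket[i] lists the edges whose angular span overlaps sector i
        self.bucket = [[] for _ in range(sectors)]
        for e in range(n):
            i = self._sector(*vertices[e])
            last = self._sector(*vertices[(e + 1) % n])
            self.bucket[i].append(e)
            while i != last:                   # walk CCW to the edge's end sector
                i = (i + 1) % sectors
                self.bucket[i].append(e)

    def _sector(self, x, y):
        ang = math.atan2(y - self.cy, x - self.cx) % (2 * math.pi)
        return min(int(ang / (2 * math.pi) * self.k), self.k - 1)

    def contains(self, x, y):
        for e in self.bucket[self._sector(x, y)]:
            x1, y1 = self.v[e]
            x2, y2 = self.v[(e + 1) % len(self.v)]
            # an inside point lies left of (or on) every CCW edge
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
                return False
        return True

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
loc = ConvexPolygonLocator(square)
```

With enough sectors each bucket holds only one or two edges, so the query cost is constant for reasonably regular polygons.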

  11. Finite-element time-domain algorithms for modeling linear Debye and Lorentz dielectric dispersions at low frequencies.

    PubMed

    Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen

    2003-09-01

    We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock.
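The auxiliary-differential-equation idea can be sketched for a single-pole Debye dispersion, where the polarization P obeys tau * dP/dt + P = eps0 * deps * E. A Crank-Nicolson discretization of this ODE keeps the recursion stable for any time step; this is a generic illustration of the ADE method, not the paper's FETD formulation, and the material parameters below are made up.

```python
# Hedged sketch: unconditionally stable ADE update for one Debye pole.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def debye_ade_step(P, E_new, E_old, tau, deps, dt):
    """Advance the Debye polarization P by one step of size dt."""
    a = (2 * tau - dt) / (2 * tau + dt)            # |a| < 1 for any dt > 0
    b = EPS0 * deps * dt / (2 * tau + dt)
    return a * P + b * (E_new + E_old)

# Step response: with a constant field, P relaxes to eps0 * deps * E.
tau, deps, dt, E = 1e-3, 50.0, 1e-4, 1.0           # hypothetical parameters
P = 0.0
for _ in range(200):
    P = debye_ade_step(P, E, E, tau, deps, dt)
```

The fixed point of the recursion is exactly eps0 * deps * E, and the decay factor `a` has magnitude below one for every dt, which is the discrete counterpart of the unconditional stability the abstract claims.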

  12. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach-spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  13. A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds

    PubMed Central

    Poreba, Martyna; Goulette, François

    2015-01-01

    With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589

  14. Automated extraction of chemical structure information from digital raster images

    PubMed Central

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader: a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. 
Based on its stable performance and high accuracy, ChemReader is suitable for annotating the chemical database with links to scientific research articles. PMID:19196483

  15. Deep learning improves prediction of CRISPR-Cpf1 guide RNA activity.

    PubMed

    Kim, Hui Kwon; Min, Seonwoo; Song, Myungjae; Jung, Soobin; Choi, Jae Woo; Kim, Younggwang; Lee, Sangeun; Yoon, Sungroh; Kim, Hyongbum Henry

    2018-03-01

    We present two algorithms to predict the activity of AsCpf1 guide RNAs. Indel frequencies for 15,000 target sequences were used in a deep-learning framework based on a convolutional neural network to train Seq-deepCpf1. We then incorporated chromatin accessibility information to create the better-performing DeepCpf1 algorithm for cell lines for which such information is available and show that both algorithms outperform previous machine learning algorithms on our own and published data sets.

  16. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images.

    PubMed

    Fan, Chong; Chen, Xushuai; Zhong, Lei; Zhou, Min; Shi, Yun; Duan, Yulin

    2017-03-18

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve a seamless image mosaic because of the uneven distribution of brightness and contrast among these sub-blocks. An improved weighted Wallis dodging algorithm is proposed, exploiting the fact that SR-reconstructed images are gray images of the same size with overlapping regions. This algorithm can achieve consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation at the seam line across the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove the uneven illumination for 900 SR-reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
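The classic Wallis transform that the improved algorithm builds on pulls each sub-block's gray values toward a target mean and standard deviation, evening out brightness and contrast across blocks. The sketch below is the textbook form with brightness and contrast weights `b` and `c`, not the paper's weighted variant, and the block values are invented.

```python
# Hedged sketch: Wallis transform of one sub-block toward a target
# mean/standard deviation; b and c weight how strongly the block's own
# statistics are replaced by the targets.

def wallis(block, target_mean, target_std, b=1.0, c=1.0):
    n = len(block)
    m = sum(block) / n
    s = (sum((v - m) ** 2 for v in block) / n) ** 0.5
    gain = c * target_std / (c * s + (1 - c) * target_std + 1e-12)
    return [(v - m) * gain + b * target_mean + (1 - b) * m for v in block]

dark = [10.0, 12.0, 14.0, 16.0]          # a low-brightness sub-block
out = wallis(dark, target_mean=128.0, target_std=40.0)
```

With b = c = 1 the output block has exactly the target mean and (up to the small regularizer) the target standard deviation, which is what makes adjacent sub-blocks mosaic seamlessly.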

  17. Uniformly stable backpropagation algorithm to train a feedforward neural network.

    PubMed

    Rubio, José de Jesús; Angelov, Plamen; Pacheco, Jaime

    2011-03-01

    Neural networks (NNs) have numerous applications to online processes, but the problem of stability is rarely discussed. This is an extremely important issue because, if the stability of a solution is not guaranteed, the equipment that is being used can be damaged, which can also cause serious accidents. It is true that in some research papers this problem has been considered, but this concerns continuous-time NNs only. At the same time, there are many systems that are better described in the discrete time domain, such as populations of animals, the annual expenses of an industry, the interest earned by a bank, or the prediction of the distribution of loads stored every hour in a warehouse. Therefore, it is of paramount importance to consider the stability of discrete-time NNs. This paper makes several important contributions. 1) A theorem is stated and proven which guarantees uniform stability of a general discrete-time system. 2) It is proven that the backpropagation (BP) algorithm with a new time-varying rate is uniformly stable for online identification and the identification error converges to a small zone bounded by the uncertainty. 3) It is proven that the weights' error is bounded by the initial weights' error, i.e., overfitting is eliminated in the proposed algorithm. 4) The BP algorithm is applied to predict the distribution of loads that a transelevator receives from a trailer and places in the deposits in a warehouse every hour, so that the deposits in the warehouse are reserved in advance using the prediction results. 5) The BP algorithm is compared with the recursive least square (RLS) algorithm and with the Takagi-Sugeno type fuzzy inference system in the problem of predicting the distribution of loads in a warehouse, showing that the first two are stable and the third is unstable. 6) The BP algorithm is compared with the RLS algorithm and with the Kalman filter algorithm in a synthetic example.
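The role of a time-varying learning rate can be illustrated with a gradient step whose rate shrinks with the input energy, a normalized rule of the kind used to keep on-line identification stable. This is an illustrative sketch on a single linear neuron, not the paper's exact update law or stability proof; the rate formula, `eta0`, and the data stream are assumptions.

```python
# Illustrative sketch: on-line identification with a time-varying rate
# eta_k = eta0 / (1 + ||x_k||^2), which keeps every step a contraction.

def train_step(w, x, y, eta0=0.5):
    """One normalized gradient step for a linear neuron; returns (w, error)."""
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    err = y - y_hat
    eta = eta0 / (1.0 + sum(xi * xi for xi in x))   # time-varying rate
    return [wi + eta * err * xi for wi, xi in zip(w, x)], err

# Identify y = 2*x1 - x2 on-line from a cyclic stream of samples.
true_w = [2.0, -1.0]
w = [0.0, 0.0]
inputs = [(0.5, 1.0), (1.0, 0.2), (0.3, 0.8), (0.9, 0.5)]
for _ in range(2000):
    for x in inputs:
        target = true_w[0] * x[0] + true_w[1] * x[1]
        w, err = train_step(w, x, target)
```

Because eta * ||x||^2 = eta0 * ||x||^2 / (1 + ||x||^2) < eta0 < 2 for every sample, the update never overshoots regardless of the input magnitude, which is the intuition behind the uniform stability result.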

  18. On-line determination of pork color and intramuscular fat by computer vision

    NASA Astrophysics Data System (ADS)

    Liao, Yi-Tao; Fan, Yu-Xia; Wu, Xue-Qian; Xie, Li-juan; Cheng, Fang

    2010-04-01

    In this study, the application potential of computer vision for on-line determination of CIE L*a*b* color and intramuscular fat (IMF) content of pork was evaluated. Images of pork chops from 211 pig carcasses were captured while samples were on a conveyor belt moving at 0.25 m·s⁻¹ to simulate the on-line environment. CIE L*a*b* and IMF content were measured with a colorimeter and by chemical extraction as references. The KSW algorithm combined with region selection was employed to eliminate the fat surrounding the longissimus dorsi muscle (MLD). RGB values of the pork were counted, and five methods were applied for transforming RGB values to CIE L*a*b* values. The region-growing algorithm with multiple seed points was applied to mask out the IMF pixels within the intensity-corrected images. The performance of the proposed algorithms was verified by comparing the measured reference values and the quality characteristics obtained by image processing. The MLD region of six samples could not be identified using the KSW algorithm. Intensity nonuniformity of the pork surface in the image can be eliminated efficiently, although the IMF region of three corrected images could not be extracted. Given the considerable variety of color and complexity of the pork surface, the CIE L*, a* and b* color of the MLD could be predicted with correlation coefficients of 0.84, 0.54 and 0.47 respectively, and IMF content could be determined with a correlation coefficient greater than 0.70. The study demonstrated that it is feasible to evaluate CIE L*a*b* values and IMF content on-line using computer vision.
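One standard RGB-to-CIELAB transform of the kind the study compares (the abstract evaluates five such methods without naming them) assumes sRGB primaries and the D65 white point. The sketch below is that textbook pipeline, not necessarily any of the study's five methods.

```python
# Hedged sketch: sRGB (0..255) -> CIE L*a*b* under the D65 white point.

def srgb_to_lab(r, g, b):
    def lin(u):                        # undo the sRGB gamma curve
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883
    def f(t):                          # CIELAB companding function
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x), f(y), f(z)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

L, a, b_star = srgb_to_lab(255, 255, 255)   # white should give L* ~ 100
```

Applying such a transform per pixel over the segmented MLD region yields the color features that were correlated with the colorimeter readings.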

  19. Stable 293 T and CHO cell lines expressing cleaved, stable HIV-1 envelope glycoprotein trimers for structural and vaccine studies.

    PubMed

    Chung, Nancy P Y; Matthews, Katie; Kim, Helen J; Ketas, Thomas J; Golabek, Michael; de Los Reyes, Kevin; Korzun, Jacob; Yasmeen, Anila; Sanders, Rogier W; Klasse, Per Johan; Wilson, Ian A; Ward, Andrew B; Marozsan, Andre J; Moore, John P; Cupo, Albert

    2014-04-25

    Recombinant soluble, cleaved HIV-1 envelope glycoprotein SOSIP.664 gp140 trimers based on the subtype A BG505 sequence are being studied structurally and tested as immunogens in animals. For these trimers to become a vaccine candidate for human trials, they would need to be made in appropriate amounts at an acceptable quality. Accomplishing such tasks by transient transfection is likely to be challenging. The traditional way to express recombinant proteins in large amounts is via a permanent cell line, usually of mammalian origin. Making cell lines that produce BG505 SOSIP.664 trimers requires the co-expression of the Furin protease to ensure that the cleavage site between the gp120 and gp41 subunits is fully utilized. We designed a vector capable of expressing Env and Furin, and used it to create stable 293T and CHO Flp-In™ cell lines through site-specific recombination. Both lines produce high quality, cleaved trimers at yields of up to 12-15 mg per 1 × 10⁹ cells. Trimer expression at such levels was maintained for up to 30 days (10 passages) after initial seeding and was consistently superior to what could be achieved by transient transfection. Electron microscopy studies confirm that the purified trimers have the same native-like appearance as those derived by transient transfection and used to generate high-resolution structures. They also have appropriate antigenic properties, including the presentation of the quaternary epitope for the broadly neutralizing antibody PGT145. The BG505 SOSIP.664 trimer-expressing cell lines yield proteins of an appropriate quality for structural studies and animal immunogenicity experiments. The methodology is suitable for making similar lines under Good Manufacturing Practice conditions, to produce trimers for human clinical trials. 
Moreover, any env gene can be incorporated into this vector system, allowing the manufacture of SOSIP trimers from multiple genotypes, either by transient transfection or from stable cell lines.

  20. Transfer and distortion of atmospheric information in the satellite temperature retrieval problem

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1981-01-01

    A systematic approach to investigating the transfer of basic ambient temperature information and its distortion by satellite systems and subsequent analysis algorithms is discussed. The retrieval analysis cycle is derived, the variance spectrum of information is examined as it takes different forms in that process, and the quality and quantity of information existing at each step is compared with the initial ambient temperature information. Temperature retrieval algorithms can smooth, add, or further distort information, depending on how stable the algorithm is and how heavily it is influenced by a priori data.

  1. A mutation in the mitochondrial protein UQCRB promotes angiogenesis through the generation of mitochondrial reactive oxygen species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Junghwa; Jung, Hye Jin; Jeong, Seung Hun

    2014-12-12

    Highlights: • We constructed mitochondrial protein UQCRB mutant stable cell lines on the basis of a human case report. • These mutant cell lines exhibit pro-angiogenic activity with enhanced VEGF expression. • Proliferation of mutant cell lines was regulated by UQCRB inhibitors. • UQCRB may have a functional role in angiogenesis. - Abstract: Ubiquinol-cytochrome c reductase binding protein (UQCRB) is one of the subunits of mitochondrial complex III and is a target protein of the natural anti-angiogenic small molecule terpestacin. Previously, the biological role of UQCRB was thought to be limited to the maintenance of complex III. However, the identification and validation of UQCRB as a target protein of terpestacin enabled the role of UQCRB in oxygen sensing and angiogenesis to be elucidated. To explore the biological role of this protein further, UQCRB mutant stable cell lines were generated on the basis of a human case report. We demonstrated that these cell lines exhibited glycolytic and pro-angiogenic activities via mitochondrial reactive oxygen species (mROS)-mediated HIF1 signal transduction. Furthermore, a morphological abnormality in mitochondria was detected in UQCRB mutant stable cell lines. In addition, the proliferative effect of the UQCRB mutants was significantly regulated by the UQCRB inhibitors terpestacin and A1938. Collectively, these results provide a molecular basis for UQCRB-related biological processes and reveal potential key roles of UQCRB in angiogenesis and mitochondria-mediated metabolic disorders.

  2. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.

    PubMed

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and this depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose nonlinear optimization algorithms using a random complex algorithm, applied to the optimization calculation for the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the anti-noise ability of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range for initial guess values.

  3. Finite element solution for energy conservation using a highly stable explicit integration algorithm

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1972-01-01

    Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.
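COMOC's highly stable explicit procedure with automatic error control is not spelled out in this record; as background, here is a minimal sketch of the textbook explicit (forward Euler) update for 1-D transient heat conduction and its stability bound. The grid, material constant, and boundary treatment are illustrative assumptions, not COMOC's formulation:

```python
import numpy as np

def explicit_heat_step(T, alpha, dx, dt):
    # One explicit (forward Euler) step of 1-D heat conduction with
    # fixed end temperatures; stable only if r = alpha*dt/dx**2 <= 0.5.
    r = alpha * dt / dx**2
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new
```

The explicit update is only conditionally stable, which is why integration schemes with enhanced stability limits and automatic error control, as in COMOC, matter in practice.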

  4. Application of sensitivity-analysis techniques to the calculation of topological quantities

    NASA Astrophysics Data System (ADS)

    Gilchrist, Stuart

    2017-08-01

    Magnetic reconnection in the corona occurs preferentially at sites where the magnetic connectivity is either discontinuous or has a large spatial gradient. Hence there is a general interest in computing quantities (like the squashing factor) that characterize the gradient in the field-line mapping function. Here we present an algorithm for calculating certain (quasi)topological quantities using mathematical techniques from the field of "sensitivity-analysis". The method is based on the calculation of a three dimensional field-line mapping Jacobian from which all of the topological quantities of interest can be derived. We will present the algorithm and the details of a publicly available set of libraries that implement the algorithm.
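As an illustration of the quantity involved, the squashing factor of a 2-D field-line mapping can be estimated from a finite-difference Jacobian. The `mapping` interface, step size `h`, and test maps below are illustrative assumptions, not the paper's libraries:

```python
import numpy as np

def squashing_factor(mapping, x, y, h=1e-6):
    # Central-difference Jacobian of the footpoint mapping (x, y) -> (X, Y),
    # then Q = (a^2 + b^2 + c^2 + d^2) / |det J|.
    X1, Y1 = mapping(x + h, y)
    X0, Y0 = mapping(x - h, y)
    Xa, Ya = mapping(x, y + h)
    Xb, Yb = mapping(x, y - h)
    a, c = (X1 - X0) / (2 * h), (Y1 - Y0) / (2 * h)
    b, d = (Xa - Xb) / (2 * h), (Ya - Yb) / (2 * h)
    det = a * d - b * c
    return (a * a + b * b + c * c + d * d) / abs(det)
```

For the identity mapping Q takes its minimum value of 2; a shear mapping (x + ky, y) gives Q = 2 + k^2, growing with the connectivity gradient.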

  5. A review on simple assembly line balancing type-e problem

    NASA Astrophysics Data System (ADS)

    Jusop, M.; Rashid, M. F. F. Ab

    2015-12-01

    Simple assembly line balancing (SALB) is an attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithms are necessary to solve large-scale problems, as SALB is a class of NP-hard problem. Only a few studies focus on the simple assembly line balancing Type-E problem (SALB-E), since it is a general and complex problem. The SALB-E problem is the variant of SALB which considers the number of workstations and the cycle time simultaneously for the purpose of maximising line efficiency. This paper reviews previous work that has been done to optimise the SALB-E problem. Besides that, this paper also reviews the Genetic Algorithm approaches that have been used to optimise SALB-E. From the review, it was found that none of the existing works considers resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.

  6. An Algorithm for Finding Candidate Synaptic Sites in Computer Generated Networks of Neurons with Realistic Morphologies

    PubMed Central

    van Pelt, Jaap; Carnell, Andrew; de Ridder, Sander; Mansvelder, Huibert D.; van Ooyen, Arjen

    2010-01-01

    Neurons make synaptic connections at locations where axons and dendrites are sufficiently close in space. Typically the required proximity is based on the dimensions of dendritic spines and axonal boutons. Based on this principle, one can search for such locations in networks formed by reconstructed neurons or computer generated neurons. Candidate synapses are then located where axons and dendrites are within a given criterion distance from each other. Both experimentally reconstructed and model generated neurons are usually represented morphologically by piecewise-linear structures (line pieces or cylinders). Proximity tests are then performed on all pairs of line pieces from both axonal and dendritic branches. Applying just a test on the distance between line pieces may result in local clusters of synaptic sites when more than one pair of nearby line pieces from axonal and dendritic branches is sufficiently close, and may introduce a dependency on the length scale of the individual line pieces. The present paper describes a new algorithm for defining locations of candidate synapses which is based on the crossing requirement of a line piece pair, while the orthogonal distance between the line pieces is subjected to the distance criterion for testing 3D proximity. PMID:21160548
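The proximity-plus-crossing test described above reduces to computing the closest points between two 3-D line pieces and checking that they lie in the interior of both pieces. A sketch using the standard clamped closest-point computation; the helper names and the interiority check are illustrative, not the paper's implementation:

```python
import numpy as np

def closest_points(p1, q1, p2, q2):
    # Closest points between segments p1->q1 and p2->q2, parameters in [0, 1].
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    c, b = d1 @ r, d1 @ d2
    denom = a * e - b * b
    s = float(np.clip((b * f - c * e) / denom, 0.0, 1.0)) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, float(np.clip(-c / a, 0.0, 1.0))
    elif t > 1.0:
        t, s = 1.0, float(np.clip((b - c) / a, 0.0, 1.0))
    dist = float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))
    return s, t, dist

def candidate_synapse(p1, q1, p2, q2, criterion):
    # Crossing requirement: closest points interior to both line pieces,
    # with their orthogonal distance under the criterion distance.
    s, t, dist = closest_points(p1, q1, p2, q2)
    return 0.0 < s < 1.0 and 0.0 < t < 1.0 and dist <= criterion
```

Requiring interior parameters is what removes the clustering artifact: a pair of nearly parallel pieces whose closest approach sits at an endpoint no longer qualifies.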

  7. Treatment for stable HIV patients in England: can we increase efficiency and improve patient care?

    PubMed

    Adams, Elisabeth; Ogden, David; Ehrlich, Alice; Hay, Phillip

    2014-07-01

    To estimate the costs and potential efficiency gains of changing the frequency of clinic appointments and drug dispensing arrangements for stable HIV patients compared to the costs of hospital pharmacy dispensing and home delivery. We estimated the annual costs per patient (HIV clinic visits and either first-line treatment or a common second-line regimen, with some patients switching to a second-line regimen during the year). The cost of three-, four- and six-monthly clinic appointments and drug supply was estimated assuming hospital dispensing (incurring value-added tax) and home delivery. Three-monthly appointments and hospital drug dispensing (baseline) were compared to other strategies. The baseline was the most costly option (£10,587 if first-line treatment and no switch to second-line regimen). Moving to six-monthly appointments and home delivery yielded savings of £1883 per patient annually. Assuming patients start on different regimens and may switch to second-line therapies, six-monthly appointments and three-monthly home delivery of drugs is the least expensive option and could result in nearly £2000 savings per patient. This translates to annual cost reduction of about £8 million for the estimated 4000 eligible patients not currently on home delivery in London, England. Different appointment schedules and drug supply options should be considered for stable HIV patients based on efficiency gains. However, this should be assessed for individual patients to meet their needs, especially around adherence and patient support. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  8. Attitude identification for SCOLE using two infrared cameras

    NASA Technical Reports Server (NTRS)

    Shenhar, Joram

    1991-01-01

    An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.

  9. Preparation of Proper Immunogen by Cloning and Stable Expression of cDNA coding for Human Hematopoietic Stem Cell Marker CD34 in NIH-3T3 Mouse Fibroblast Cell Line

    PubMed Central

    Shafaghat, Farzaneh; Abbasi-Kenarsari, Hajar; Majidi, Jafar; Movassaghpour, Ali Akbar; Shanehbandi, Dariush; Kazemi, Tohid

    2015-01-01

    Purpose: Transmembrane CD34 glycoprotein is the most important marker for identification, isolation and enumeration of hematopoietic stem cells (HSCs). We aimed in this study to clone the cDNA coding for human CD34 from the KG1a cell line and stably express it in the mouse fibroblast cell line NIH-3T3. Such an artificial cell line could be useful as a proper immunogen for production of mouse monoclonal antibodies. Methods: CD34 cDNA was cloned from the KG1a cell line after total RNA extraction and cDNA synthesis. The Pfu DNA polymerase-amplified specific band was ligated to the pGEMT-easy TA-cloning vector and sub-cloned in the pCMV6-Neo expression vector. After transfection of NIH-3T3 cells using 3 μg of recombinant construct and 6 μl of JetPEI transfection reagent, stable expression was obtained by selection of cells with G418 antibiotic and confirmed by surface flow cytometry. Results: The 1158 bp specific band aligned completely to the reference sequence in the NCBI database corresponding to the long isoform of human CD34. Transient and stable expression of human CD34 on transfected NIH-3T3 mouse fibroblast cells was achieved (25% and 95%, respectively) as shown by flow cytometry. Conclusion: Cloning and stable expression of human CD34 cDNA was successfully performed and validated by standard flow cytometric analysis. Due to the murine origin of the NIH-3T3 cell line, CD34-expressing NIH-3T3 cells could be useful as an immunogen in production of diagnostic monoclonal antibodies against human CD34. This approach could bypass the need for purification of recombinant proteins produced in eukaryotic expression systems. PMID:25789221

  10. Information-based management mode based on value network analysis for livestock enterprises

    NASA Astrophysics Data System (ADS)

    Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng

    2018-01-01

    With the development of computer and IT technologies, enterprise management has gradually become information-based management. Moreover, due to poor technical competence and non-uniform management, most breeding enterprises show a lack of organisation in data collection and management. In addition, low levels of efficiency result in increasing production costs. This paper adopts 'struts2' in order to construct an information-based management system for standardised and normalised management within the process of production in beef cattle breeding enterprises. We present a radio-frequency identification system by studying multiple-tag anti-collision via a dynamic grouping ALOHA algorithm. This algorithm builds on the existing ALOHA algorithm and uses an improved dynamic grouping scheme characterised by a high throughput rate. The new algorithm can reach a throughput 42% higher than that of the general ALOHA algorithm. As the number of tags changes, the system throughput remains relatively stable.
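The dynamic grouping variant itself is not specified in this record, but the framed slotted ALOHA baseline it improves on is easy to sketch: each tag picks a random slot per frame, and a slot is a successful read only when exactly one tag chose it. Frame size, tag count, and round count below are illustrative:

```python
import random

def framed_slotted_aloha(n_tags, n_slots, rounds=500, seed=1):
    # Monte Carlo estimate of per-slot throughput: fraction of slots
    # holding exactly one tag (collisions and empty slots are wasted).
    rng = random.Random(seed)
    singles = 0
    for _ in range(rounds):
        counts = [0] * n_slots
        for _ in range(n_tags):
            counts[rng.randrange(n_slots)] += 1
        singles += sum(1 for c in counts if c == 1)
    return singles / (rounds * n_slots)
```

Per-slot throughput peaks near 1/e (about 0.368) when the frame size matches the tag count, which is exactly what dynamic grouping schemes try to maintain as the tag population changes.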

  11. Graph theoretical stable allocation as a tool for reproduction of control by human operators

    NASA Astrophysics Data System (ADS)

    van Nooijen, Ronald; Ertsen, Maurits; Kolechkina, Alla

    2016-04-01

    During the design of central control algorithms for existing water resource systems under manual control it is important to consider the interaction with parts of the system that remain under manual control and to compare the proposed new system with the existing manual methods. In graph theory the "stable allocation" problem has good solution algorithms and allows for formulation of flow distribution problems in terms of priorities. As a test case for the use of this approach we used the algorithm to derive water allocation rules for the Gezira Scheme, an irrigation system located between the Blue and White Niles south of Khartoum. In 1925, Gezira started with 300,000 acres; currently it covers close to two million acres.
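The core machinery behind stable allocation is deferred acceptance. A minimal Gale-Shapley sketch for one-to-one matching; the proposer/reviewer naming is illustrative, and the irrigation application would add capacities and flow quantities on top of these priorities:

```python
def stable_allocation(proposer_prefs, reviewer_prefs):
    # Gale-Shapley deferred acceptance: proposers propose in preference
    # order; reviewers hold their best offer so far. Returns a stable
    # matching as {proposer: reviewer}.
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    engaged = {}  # reviewer -> proposer currently held
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])   # reviewer trades up; old proposer freed
            engaged[r] = p
        else:
            free.append(p)            # rejected; p will try its next choice
    return {p: r for r, p in engaged.items()}
```

The resulting matching is stable in the graph-theoretic sense: no proposer-reviewer pair would both prefer each other over their assigned partners, which is the property that makes the formulation attractive for priority-based water distribution.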

  12. Block LU factorization

    NASA Technical Reports Server (NTRS)

    Demmel, James W.; Higham, Nicholas J.; Schreiber, Robert S.

    1992-01-01

    Many of the currently popular 'block algorithms' are scalar algorithms in which the operations have been grouped and reordered into matrix operations. One genuine block algorithm in practical use is block LU factorization, and this has recently been shown by Demmel and Higham to be unstable in general. It is shown here that block LU factorization is stable if A is block diagonally dominant by columns. Moreover, for a general matrix the level of instability in block LU factorization can be bounded in terms of the condition number kappa(A) and the growth factor for Gaussian elimination without pivoting. A consequence is that block LU factorization is stable for a matrix A that is symmetric positive definite or point diagonally dominant by rows or columns as long as A is well-conditioned.
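The factorization in question can be sketched directly: block Gaussian elimination without pivoting, producing a unit block-lower-triangular L and a block-upper-triangular U. The block size and test matrix are illustrative; as the record notes, stability requires properties such as block diagonal dominance by columns:

```python
import numpy as np

def block_lu(A, b=2):
    # Block LU without pivoting: for each block column, form
    # L21 = A21 * inv(A11) and update the trailing Schur complement.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(0, n, b):
        k2 = min(k + b, n)
        L[k2:, k:k2] = U[k2:, k:k2] @ np.linalg.inv(U[k:k2, k:k2])
        U[k2:, k2:] -= L[k2:, k:k2] @ U[k:k2, k2:]
        U[k2:, k:k2] = 0.0
    return L, U
```

Unlike scalar LU, the diagonal blocks of U are full (not triangular); the instability the record discusses enters through the inverses of those diagonal blocks when A lacks dominance properties.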

  13. A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.

    DTIC Science & Technology

    1980-01-01

    de Silva [14], and Weisman and Wood [76]. A particular direct search algorithm, the simplex method, has been cited for having the potential for...spaced discrete points on a line which makes the direction suitable for an efficient integer search technique based on Fibonacci numbers. Two...defined by a subset of variables. The complex algorithm is particularly well suited for this subspace search for two reasons. First, the complex method

  14. Terrestrial Planet Finder cryogenic delay line development

    NASA Technical Reports Server (NTRS)

    Smythe, Robert F.; Swain, Mark R.; Alvarez-Salazar, Oscar; Moore, James D.

    2004-01-01

    Delay lines provide the path-length compensation that makes the measurement of interference fringes possible. When used for nulling interferometry, the delay line must control path-lengths so that the null is stable and controlled throughout the measurement. We report on a low noise, low disturbance, and high bandwidth optical delay line capable of meeting the TPF interferometer optical path length control requirements at cryogenic temperatures.

  15. Stable cellular models of nuclear receptor PXR for high-throughput evaluation of small molecules.

    PubMed

    Negi, Seema; Singh, Shashi Kala; Kumar, Sanjay; Kumar, Subodh; Tyagi, Rakesh K

    2018-06-19

    Pregnane & Xenobiotic Receptor (PXR) is one of the 48 members of the ligand-modulated transcription factors belonging to the nuclear receptor superfamily. Though PXR is now well-established as a 'xenosensor', regulating the central detoxification and drug metabolizing machinery, it has also emerged as a key player in several metabolic disorders. This makes PXR attractive to both researchers and the pharmaceutical industry, since the clinical success of small drug molecules can be pre-evaluated on the PXR platform. At the early stages of drug discovery, cell-based assays are used for high-throughput screening of small molecules. The future success or failure of a drug can be predicted by this approach, saving expensive resources and time. In view of this, we have developed a human liver cell line-based, dual-level screening and validation protocol on the PXR platform with application to assessing small molecules. We have generated two different stably transfected cell lines: (i) a stable promoter-reporter cell line (HepXREM) expressing PXR and a commonly used CYP3A4 promoter-reporter, i.e. XREM-luciferase; and (ii) two stable cell lines integrated with proximal PXR-promoter-reporters (Hepx-1096/+43 and Hepx-497/+43). Employing the HepXREM, Hepx-1096/+43 and Hepx-497/+43 stable cell lines, > 25 anti-cancer herbal drug ingredients were screened for examining their modulatory effects on a) PXR transcriptional activity and b) PXR-promoter activity. In conclusion, the present report provides a convenient and economical dual-level screening system to facilitate the identification of superior therapeutic small molecules. Copyright © 2018. Published by Elsevier Ltd.

  16. Online signature recognition using principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan

    2016-12-01

    In this paper, we propose an algorithm for on-line signature recognition using the fingertip point in the air from the depth image acquired by Kinect. We extract 10 statistical features from the X, Y and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, which retain 99.02% of the total variance. We implement the proposed algorithm and test it on actual on-line signatures. In experiments, we verify that the proposed method successfully classifies 15 different on-line signatures. Experimental results show a 98.47% recognition rate when using only 10 feature vectors.
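The dimensionality-reduction step above is standard PCA: center the feature matrix, take the top-k right singular vectors, and project. A minimal sketch (feature dimensions and the variance figure in the test are illustrative, not the paper's data):

```python
import numpy as np

def pca_reduce(X, k=10):
    # Center the data, compute principal directions via SVD, and
    # project onto the top-k components. Also returns the fraction
    # of total variance those components retain.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return Xc @ Vt[:k].T, explained
```

The reduced vectors (here 10-dimensional, as in the paper) are what feed the neural network classifier; the explained-variance ratio is the quantity the abstract reports as 99.02%.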

  17. Measurement of pattern roughness and local size variation using CD-SEM: current status

    NASA Astrophysics Data System (ADS)

    Fukuda, Hiroshi; Kawasaki, Takahiro; Kawada, Hiroki; Sakai, Kei; Kato, Takashi; Yamaguchi, Satoru; Ikota, Masami; Momonoi, Yoshinori

    2018-03-01

    Measurement of line edge roughness (LER) is discussed from four aspects: edge detection, PSD prediction, sampling strategy, and noise mitigation, and general guidelines and practical solutions for LER measurement today are introduced. Advanced edge detection algorithms such as the wave-matching method are shown to be effective for robustly detecting edges from low-SNR images, while a conventional algorithm with weak filtering is still effective in suppressing SEM noise and aliasing. An advanced PSD prediction method such as the multi-taper method is effective in suppressing sampling noise within a line edge, while a sufficient number of lines is still required to suppress line-to-line variation. Two types of SEM noise mitigation methods, the "apparent noise floor" subtraction method and LER-noise decomposition using regression analysis, are verified to successfully mitigate SEM noise from PSD curves. These results are extended to LCDU measurement to clarify the impact of SEM noise and sampling noise on LCDU.
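The PSD-based analysis above can be sketched in a few lines: compute the power spectral density of each detected edge's deviations, average over lines, and subtract an estimated noise floor. The normalization and the `noise_floor` parameter (standing in for the "apparent noise floor" subtraction) are illustrative assumptions:

```python
import numpy as np

def edge_psd(edges, noise_floor=0.0):
    # One-sided PSD of line-edge deviations, averaged over all lines,
    # with a constant noise floor subtracted (clipped at zero).
    edges = np.atleast_2d(edges)
    dev = edges - edges.mean(axis=1, keepdims=True)
    spec = np.abs(np.fft.rfft(dev, axis=1)) ** 2 / dev.shape[1]
    return np.maximum(spec.mean(axis=0) - noise_floor, 0.0)
```

Averaging over many lines suppresses line-to-line variation in the PSD estimate, which is why the record stresses that a sufficient number of lines is still required even with better per-line estimators.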

  18. Double regions growing algorithm for automated satellite image mosaicking

    NASA Astrophysics Data System (ADS)

    Tan, Yihua; Chen, Chen; Tian, Jinwen

    2011-12-01

    Feathering is the most widely used method in seamless satellite image mosaicking. A simple but effective algorithm, double regions growing (DRG), which utilizes the shape content of images' valid regions, is proposed for generating a robust feathering line before feathering. It works without any human intervention, and experiments on real satellite images show the advantages of the proposed method.

  19. LAWS simulation: Sampling strategies and wind computation algorithms

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.

    1989-01-01

    In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.

  20. Glycoprotein production for structure analysis with stable, glycosylation mutant CHO cell lines established by fluorescence-activated cell sorting.

    PubMed

    Wilke, Sonja; Krausze, Joern; Gossen, Manfred; Groebe, Lothar; Jäger, Volker; Gherardi, Ermanno; van den Heuvel, Joop; Büssow, Konrad

    2010-06-01

    Stable mammalian cell lines are excellent tools for the expression of secreted and membrane glycoproteins. However, structural analysis of these molecules is generally hampered by the complexity of N-linked carbohydrate side chains. Cell lines with mutations are available that result in shorter and more homogenous carbohydrate chains. Here, we use preparative fluorescence-activated cell sorting (FACS) and site-specific gene excision to establish high-yield glycoprotein expression for structural studies with stable clones derived from the well-established Lec3.2.8.1 glycosylation mutant of the Chinese hamster ovary (CHO) cell line. We exemplify the strategy by describing novel clones expressing single-chain hepatocyte growth factor/scatter factor (HGF/SF, a secreted glycoprotein) and a domain of lysosome-associated membrane protein 3 (LAMP3d). In both cases, stable GFP-expressing cell lines were established by transfection with a genetic construct including a GFP marker and two rounds of cell sorting after 1 and 2 weeks. The GFP marker was subsequently removed by heterologous expression of Flp recombinase. Production of HGF/SF and LAMP3d was stable over several months. 1.2 mg HGF/SF and 0.9 mg LAMP3d were purified per litre of culture, respectively. Homogenous glycoprotein preparations were amenable to enzymatic deglycosylation under native conditions. Purified and deglycosylated LAMP3d protein was readily crystallized. The combination of FACS and gene excision described here constitutes a robust and fast procedure for maximizing the yield of glycoproteins for structural analysis from glycosylation mutant cell lines.

  1. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This is verified by the discrepancy distribution diagrams, where the 183-WSL performs better during the warm months, while during wintertime the discrepancies with radar measurements tend toward maximum values. A stable behavior of the 183-WSL algorithm is demonstrated over the whole study period, with an overall overestimation for rain rate intensities lower than 1 millimeter per hour. This threshold is crucial especially in wintertime, where the low precipitation regime is difficult to classify.
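The POD, FAR and HK indices quoted above come from a 2x2 contingency table of detections versus radar truth. A minimal sketch (the counts in the example are illustrative, not NIMROD statistics):

```python
def categorical_scores(hits, misses, false_alarms, correct_neg):
    # Standard dichotomous verification scores from a 2x2 table:
    # POD = probability of detection, FAR = false alarm ratio,
    # HK  = Hanssen-Kuipers skill score (POD minus POFD).
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    pofd = false_alarms / (false_alarms + correct_neg)
    return pod, far, pod - pofd
```

HK rewards detection while penalizing false alarms against the no-rain population, which is why it is sensitive to the seasonal surface-emissivity issues the record describes.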

  2. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.
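The paper minimizes a TV-regularized objective with the fast iterative shrinkage-thresholding algorithm (FISTA) on a GPU. As a stand-in for that objective, here is FISTA applied to an l1-penalized least-squares problem, where the proximal step is simple soft-thresholding; this illustrates only the acceleration scheme, not the CBCT objective or the shading compensation:

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    # FISTA: gradient step on the smooth term, soft-thresholding prox
    # for the l1 term, and Nesterov-style momentum on the iterates.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)              # gradient of 0.5*||Az - b||^2
        w = z - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```

The momentum sequence gives an O(1/k^2) convergence rate versus O(1/k) for plain proximal gradient descent, which is what makes the iterative CBCT reconstruction tractable at clinical resolution.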

  3. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

    Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. In spite of numerous research groups that have investigated ML-based segmentation frameworks, there remain unanswered aspects of performance variability for the choice of two key components: the ML algorithm and intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach we have used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing eight machine learning algorithms on down-sampled segmentation MR data, a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation between these two algorithms also revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity normalization. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Discovery of a meta-stable Al-Sm phase with unknown stoichiometry using a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Feng; McBrearty, Ian; Ott, R. T.

    Unknown crystalline phases observed during the devitrification process of glassy metal alloys significantly limit our ability to understand and control phase selection in these systems driven far from equilibrium. Here, we report a new meta-stable Al5Sm phase identified by simultaneously searching Al-rich compositions of the Al-Sm system, using an efficient genetic algorithm. The excellent match between calculated and experimental X-ray diffraction patterns confirms that this new phase appeared in the crystallization of melt-spun Al90Sm10 alloys.
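Schematically, a genetic-algorithm search is a loop of selection, crossover, and mutation over candidate solutions. Here is a toy 1-D sketch of that loop; the energy function, population size, and mutation scale are all illustrative, and the actual structure search operates on crystal lattices with a physics-based energy model rather than a scalar variable:

```python
import random

def genetic_minimize(energy, bounds, pop_size=30, generations=60, seed=0):
    # Elitist GA: keep the best third of the population, then breed the
    # rest by midpoint crossover plus Gaussian mutation, clipped to bounds.
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    sigma = 0.02 * (hi - lo)            # mutation scale (illustrative)
    for _ in range(generations):
        pop.sort(key=energy)
        elite = pop[:pop_size // 3]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, sigma)
            children.append(min(max(child, lo), hi))
        pop = elite + children
    return min(pop, key=energy)
```

Elitism guarantees the best candidate never degrades between generations, which is what lets such searches explore many compositions while converging on low-energy structures.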

  5. FLiT: a field line trace code for magnetic confinement devices

    NASA Astrophysics Data System (ADS)

    Innocente, P.; Lorenzini, R.; Terranova, D.; Zanca, P.

    2017-04-01

    This paper presents a field line tracing code (FLiT) developed to study particle and energy transport as well as other phenomena related to magnetic topology in reversed-field pinch (RFP) and tokamak experiments. The code computes magnetic field lines in toroidal geometry using curvilinear coordinates (r, ϑ, ϕ) and calculates the intersections of these field lines with specified planes. The code also computes the magnetic and thermal diffusivity due to stochastic magnetic field in the collisionless limit. Compared to Hamiltonian codes, there are no constraints on the magnetic field functional formulation, which allows the integration of whichever magnetic field is required. The code uses the magnetic field computed by solving the zeroth-order axisymmetric equilibrium and the Newcomb equation for the first-order helical perturbation matching the edge magnetic field measurements in toroidal geometry. Two algorithms are developed to integrate the field lines: one is a dedicated implementation of a first-order semi-implicit volume-preserving integration method, and the other is based on the Adams-Moulton predictor-corrector method. As expected, the volume-preserving algorithm is accurate in conserving divergence, but slow because the low integration order requires small amplitude steps. The second algorithm proves to be quite fast and it is able to integrate the field lines in many partially and fully stochastic configurations accurately. The code has already been used to study the core and edge magnetic topology of the RFX-mod device in both the reversed-field pinch and tokamak magnetic configurations.
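FLiT's semi-implicit volume-preserving and Adams-Moulton integrators are beyond a short sketch, but the basic operation they perform, integrating a trajectory along the magnetic field direction, can be shown with fixed-step RK4. The test field and step size are illustrative assumptions:

```python
import numpy as np

def trace_field_line(bfield, x0, ds, n_steps):
    # Fixed-step RK4 integration of dx/ds = B(x)/|B(x)|, i.e. arc-length
    # parametrized motion along the field line.
    def f(x):
        b = bfield(x)
        return b / np.linalg.norm(b)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = f(x)
        k2 = f(x + 0.5 * ds * k1)
        k3 = f(x + 0.5 * ds * k2)
        k4 = f(x + ds * k3)
        x = x + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)
```

RK4 is accurate but not divergence-preserving; over very long traces in stochastic fields the volume-preserving scheme FLiT implements avoids the slow drift a generic integrator accumulates, at the cost of smaller steps.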

  6. Self-tuning regulators for multicyclic control of helicopter vibration

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1982-01-01

    A class of algorithms for the multicyclic control of helicopter vibration and loads is derived and discussed. This class is characterized by a linear, quasi-static, frequency-domain model of the helicopter response to control; identification of the helicopter model by least-squared-error or Kalman filter methods; and a minimum variance or quadratic performance function controller. Previous research on such controllers is reviewed. The derivations and discussions cover the helicopter model; the identification problem, including both off-line and on-line (recursive) algorithms; the control problem, including both open-loop and closed-loop feedback; and the various regulator configurations possible within this class. Conclusions from analysis and numerical simulations of the regulators provide guidance in the design and selection of algorithms for further development, including wind tunnel and flight tests.
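    The on-line (recursive) identification step described above can be illustrated with recursive least squares on a scalar quasi-static response model. This is a generic sketch, not the report's helicopter model; the response coefficients `T_true` and `z0_true` are invented for the example.

```python
import random

random.seed(0)

# Hypothetical quasi-static response: vibration z = T * control u + baseline z0
T_true, z0_true = -2.5, 4.0

# Recursive least squares estimating theta = [T, z0] from on-line data.
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]  # large initial covariance

for _ in range(200):
    u = random.uniform(-1.0, 1.0)
    z = T_true * u + z0_true + random.gauss(0.0, 0.05)
    phi = [u, 1.0]
    # gain k = P phi / (1 + phi' P phi)
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1], P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    k = [Pphi[0]/denom, Pphi[1]/denom]
    err = z - (theta[0]*phi[0] + theta[1]*phi[1])
    theta = [theta[0] + k[0]*err, theta[1] + k[1]*err]
    # covariance update: P <- P - k (phi' P)
    phiP = [phi[0]*P[0][0] + phi[1]*P[1][0], phi[0]*P[0][1] + phi[1]*P[1][1]]
    P = [[P[0][0] - k[0]*phiP[0], P[0][1] - k[0]*phiP[1]],
         [P[1][0] - k[1]*phiP[0], P[1][1] - k[1]*phiP[1]]]
```

    The estimate converges to the true coefficients as measurements accumulate, which is the property the closed-loop regulator relies on.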

  7. Control Software for a High-Performance Telerobot

    NASA Technical Reports Server (NTRS)

    Kline-Schoder, Robert J.; Finger, William

    2005-01-01

    A computer program for controlling a high-performance, force-reflecting telerobot has been developed. The goal in designing a telerobot-control system is to make the velocity of the slave match the master velocity, and the environmental force on the master match the force on the slave. Instability can arise from even small delays in propagation of signals between master and slave units. The present software, based on an impedance-shaping algorithm, ensures stability even in the presence of long delays. It implements a real-time algorithm that processes position and force measurements from the master and slave and represents the master/slave communication link as a transmission line. The algorithm also uses the history of the control force and the slave motion to estimate the impedance of the environment. The estimate of the impedance of the environment is used to shape the controlled slave impedance to match the transmission-line impedance. The estimate of the environmental impedance is used to match the master and transmission-line impedances and to estimate the slave/environment force in order to present that force immediately to the operator via the master unit.

  8. Next Day Building Load Predictions based on Limited Input Features Using an On-Line Laterally Primed Adaptive Resonance Theory Artificial Neural Network.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Christian Birk; Robinson, Matt; Yasaei, Yasser

    Optimal integration of thermal energy storage within commercial building applications requires accurate load predictions. Several methods exist that provide an estimate of a building's future needs, including component-based models and data-driven algorithms. This work implemented an algorithm previously untested for this application, called a Laterally Primed Adaptive Resonance Theory (LAPART) artificial neural network (ANN). The LAPART algorithm provided accurate results over a two-month period in which minimal historical data and a small number of input types were available. These results are significant because common practice has often overlooked the implementation of an ANN: ANNs have often been perceived as too complex and as requiring large amounts of data to provide accurate results. The LAPART neural network was implemented in an on-line learning manner, meaning the training data were continuously updated as new measurements arrived. For this experiment, training began with a single day of data and grew to two months. This approach provides a platform for immediate implementation that requires minimal time and effort. The results from the LAPART algorithm were compared with statistical regression and a component-based model. The comparison was based on the predictions' linear relationship with the measured data, mean squared error, mean bias error, and the cost savings achieved by the respective prediction techniques. The results show that the LAPART algorithm provided a reliable and cost-effective means to predict the building load for the next day.

  9. The method for homography estimation between two planes based on lines and points

    NASA Astrophysics Data System (ADS)

    Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry

    2018-04-01

    The paper considers the problem of estimating the transform connecting two images of a planar object. A RANSAC-based method is proposed for calculating the parameters of the projective transform that uses point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The results show that the algorithm's convergence rate is significantly higher when actual lines are used instead of the intersection points of lines. When both lines and feature points are used, the convergence rate is shown not to depend on the ratio of lines to feature points in the input dataset.
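    The RANSAC loop at the heart of such a method can be sketched for the simplest case: fitting a 2D line to points contaminated with outliers. This is an illustration of the general RANSAC pattern (sample a minimal set, fit, count inliers, keep the best), not the authors' homography estimator; the data are synthetic.

```python
import random

random.seed(1)

# Synthetic data: inliers on y = 2x + 1 plus gross outliers.
pts = [(x, 2*x + 1 + random.gauss(0, 0.02)) for x in [i/10 for i in range(40)]]
pts += [(random.uniform(0, 4), random.uniform(-5, 10)) for _ in range(15)]

def fit_two(p, q):
    """Minimal-set model: the line through two points (None if vertical)."""
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        return None
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

best_model, best_inliers = None, []
for _ in range(200):
    model = fit_two(*random.sample(pts, 2))
    if model is None:
        continue
    a, b = model
    # consensus set: points within a fixed residual threshold
    inliers = [(x, y) for x, y in pts if abs(y - (a*x + b)) < 0.1]
    if len(inliers) > len(best_inliers):
        best_model, best_inliers = model, inliers

a, b = best_model
```

    The winning hypothesis recovers the inlier line despite roughly a quarter of the data being outliers; in the paper the minimal sample is a set of point/line correspondences and the model is a homography.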

  10. A Random Forest-based ensemble method for activity recognition.

    PubMed

    Feng, Zengtao; Mo, Lingfei; Li, Meng

    2015-01-01

    This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forest. We designed an ensemble learning algorithm that integrates several independent Random Forest classifiers, each based on a different sensor feature set, to build a more stable, more accurate and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were used for training and testing. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while training is faster than that of comparable methods. The ensemble classifier system based on the RF (Random Forest) algorithm can achieve high recognition accuracy and fast calculation.
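    The ensemble idea, training several classifiers on bootstrap resamples and combining them by majority vote, can be sketched with decision stumps in place of full trees. This toy example (synthetic data, no per-tree feature subsampling) illustrates bagging and voting only; it is not the paper's classifier.

```python
import random

random.seed(2)

# Toy 2-feature dataset: class 1 if x0 + x1 > 1, with 10% label noise.
data = []
for _ in range(200):
    x0, x1 = random.random(), random.random()
    label = 1 if x0 + x1 > 1 else 0
    if random.random() < 0.1:
        label = 1 - label
    data.append(((x0, x1), label))

train, test = data[:150], data[150:]

def fit_stump(sample):
    """Best single-feature threshold split by training accuracy."""
    best = (0, 0.5, 0.0)  # (feature, threshold, accuracy)
    for f in (0, 1):
        for t in [i / 20 for i in range(1, 20)]:
            acc = sum((x[f] > t) == (y == 1) for x, y in sample) / len(sample)
            if acc > best[2]:
                best = (f, t, acc)
    return best[:2]

def predict(stump, x):
    f, t = stump
    return 1 if x[f] > t else 0

# Bagging: each stump sees a bootstrap resample of the training set.
forest = [fit_stump([random.choice(train) for _ in range(len(train))])
          for _ in range(25)]

def vote(x):
    return 1 if sum(predict(s, x) for s in forest) * 2 > len(forest) else 0

acc = sum(vote(x) == y for x, y in test) / len(test)
```

    Because the stumps are trained on different resamples, the majority vote is more stable than any single stump, which is the mechanism the paper exploits at the level of whole Random Forests per sensor.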

  11. Generation of stable human cell lines with Tetracycline-inducible (Tet-on) shRNA or cDNA expression.

    PubMed

    Gomez-Martinez, Marta; Schmitz, Debora; Hergovich, Alexander

    2013-03-05

    A major approach in the field of mammalian cell biology is the manipulation of the expression of genes of interest in selected cell lines, with the aim of revealing one or several of the gene's functions using transient/stable overexpression or knockdown of the gene of interest. Unfortunately, for various cell biological investigations this approach is unsuitable when manipulations of gene expression result in cell growth/proliferation defects or unwanted cell differentiation. Therefore, researchers have adapted the Tetracycline repressor protein (TetR), taken from the E. coli tetracycline resistance operon(1), to generate very efficient and tight regulatory systems to express cDNAs in mammalian cells(2,3). In short, TetR has been modified to either (1) block initiation of transcription by binding to the Tet-operator (TO) in the promoter region upon addition of tetracycline (termed the Tet-off system) or (2) bind to the TO in the absence of tetracycline (termed the Tet-on system) (Figure 1). Given the inconvenience that the Tet-off system requires the continuous presence of tetracycline (which has a half-life of about 24 hr in tissue culture medium), the Tet-on system has been more extensively optimized, resulting in the development of the very tight and efficient vector systems for cDNA expression used here. Shortly after the establishment of RNA interference (RNAi) for gene knockdown in mammalian cells(4), vectors expressing short-hairpin RNAs (shRNAs) were described that function very similarly to siRNAs(5-11). However, these shRNA-mediated knockdown approaches have the same limitation as conventional knockout strategies, since stable depletion is not feasible when gene targets are essential for cellular survival.
To overcome this limitation, van de Wetering et al.(12) modified the shRNA expression vector pSUPER(5) by inserting a TO in the promoter region, which enabled them to generate stable cell lines with tetracycline-inducible depletion of their target genes of interest. Here, we describe a method to efficiently generate stable human Tet-on cell lines that reliably drive either inducible overexpression or depletion of the gene of interest. Using this method, we have successfully generated Tet-on cell lines which significantly facilitated the analysis of the MST/hMOB/NDR cascade in centrosome(13,14) and apoptosis signaling(15,16). In this report, we describe our vectors of choice, in addition to describing the two consecutive manipulation steps that are necessary to efficiently generate human Tet-on cell lines (Figure 2). Moreover, besides outlining a protocol for the generation of human Tet-on cell lines, we will discuss critical aspects regarding the technical procedures and the characterization of Tet-on cells.

  12. Extracting potential bus lines of Customized City Bus Service based on public transport big data

    NASA Astrophysics Data System (ADS)

    Ren, Yibin; Chen, Ge; Han, Yong; Zheng, Huangcheng

    2016-11-01

    Customized City Bus Service (CCBS) can effectively reduce the traffic congestion and environmental pollution caused by the increase in private cars. This study aims to extract potential CCBS bus lines, and each line's passenger density, by mining public transport big data. The datasets used in this study are mainly Smart Card Data (SCD) and bus GPS data of Qingdao, China, from October 11th to November 7th, 2015. Firstly, we compute the temporal-origin-destination (TOD) of passengers by mining the SCD and bus GPS data. Compared with the traditional OD, a TOD not only has the spatial location but also contains the trip's boarding time. Secondly, building on the traditional DBSCAN algorithm, we put forward an algorithm named TOD-DBSCAN that incorporates the spatial-temporal features of TOD. TOD-DBSCAN is used to cluster the TOD trajectories in the peak hours of all working days. Then, we define two variables to describe a potential CCBS line: P is the probability of the line, and N represents its potential passenger density. Lastly, we visualize the potential CCBS lines extracted by our procedure on the map and analyse the relationship between the potential CCBS lines and the urban spatial structure.
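    The clustering step builds on DBSCAN. A minimal plain-DBSCAN sketch (spatial only; TOD-DBSCAN additionally incorporates boarding time, which is not reproduced here) might look like this, with a few hypothetical trip-origin points:

```python
import math

def dbscan(points, eps, min_pts):
    """Plain DBSCAN; labels are cluster ids >= 0, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:   # core point: expand the cluster
                queue.extend(nj)
    return labels

# Two dense trip-origin clusters plus one isolated point (invented data).
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1),
       (10.0, 10.0)]
labels = dbscan(pts, eps=0.3, min_pts=3)
```

    Adding the boarding time as an extra coordinate (with its own epsilon) is one natural way a spatial-temporal variant can be obtained.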

  13. Plasma stable, pH-sensitive fusogenic polymer-modified liposomes: A promising carrier for mitoxantrone.

    PubMed

    Ghanbarzadeh, Saeed; Arami, Sanam; Pourmoazzen, Zhaleh; Ghasemian-Yadegari, Javad; Khorrami, Arash

    2014-07-01

    pH-sensitive liposomes are designed to undergo acid-triggered destabilization. In the present study, we prepared polymer-modified, plasma stable, pH-sensitive fusogenic mitoxantrone liposomes to increase efficacy and selectivity toward cancer cell lines. Conventional liposomes were prepared using cholesterol and dipalmitoyl-sn-glycero-3-phosphatidylethanolamine. Dioleoylphosphatidylethanolamine and a cholesteryl derivative, poly(monomethylitaconate)-co-poly(N,N-dimethylaminoethyl methacrylate) (PMMI-co-PDMAEMA), were used for the preparation of pH-sensitive fusogenic liposomes. Using polyethylene glycol (PEG)-poly(monomethylitaconate)-CholC6 (PEG-PMMI-CholC6) copolymers instead of cholesterol introduced pH-sensitivity and plasma stability simultaneously in the prepared liposomes. All formulations were prepared by the thin-film hydration method, and subsequently their pH-sensitivity and stability in human serum were evaluated. The ability of the pH-sensitive fusogenic liposomes to enhance mitoxantrone cytotoxicity and selectivity in cancerous cell lines was assessed in vitro relative to a normal cell line, using the human breast cancer cell line (MCF-7), the human prostate cancer cell line (PC-3), and a human umbilical vein endothelial cell line. Results revealed that both the PMMI-co-PDMAEMA- and PEG-PMMI-CholC6-based formulations were pH-sensitive and rapidly released mitoxantrone under mildly acidic conditions. Nevertheless, only the PEG-PMMI-CholC6-based liposomes preserved pH-sensitivity after incubation in plasma. Mitoxantrone-loaded pH-sensitive fusogenic liposomes exhibited a higher cytotoxicity than the control conventional liposomes on the MCF-7 and PC-3 cell lines. In contrast, both pH-sensitive fusogenic liposomes showed a lower cytotoxic effect on the human umbilical vein endothelial cell line. 
Plasma stable, pH-sensitive fusogenic liposomes are promising carriers for enhancing the efficiency and selectivity of anticancer agents while reducing their side effects. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  14. Reference gene selection for quantitative real-time PCR in Solanum lycopersicum L. inoculated with the mycorrhizal fungus Rhizophagus irregularis.

    PubMed

    Fuentes, Alejandra; Ortiz, Javier; Saavedra, Nicolás; Salazar, Luis A; Meneses, Claudio; Arriagada, Cesar

    2016-04-01

    The gene expression stability of candidate reference genes in the roots and leaves of Solanum lycopersicum inoculated with arbuscular mycorrhizal fungi was investigated. Eight candidate reference genes, including elongation factor 1 α (EF1), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), phosphoglycerate kinase (PGK), protein phosphatase 2A (PP2Acs), ribosomal protein L2 (RPL2), β-tubulin (TUB), ubiquitin (UBI) and actin (ACT), were selected, and their expression stability was assessed to determine the most stable internal reference for quantitative PCR normalization in S. lycopersicum inoculated with the arbuscular mycorrhizal fungus Rhizophagus irregularis. The stability of each gene was analysed in leaves and roots both together and separately using the geNorm and NormFinder algorithms. Differences were detected between leaves and roots, with the best-ranked genes varying depending on the algorithm used and the tissue analysed. The PGK, TUB and EF1 genes showed higher stability in roots, while EF1 and UBI had higher stability in leaves. The statistical algorithms indicated that the GAPDH gene was the least stable under the experimental conditions assayed. We then analysed the expression levels of the LePT4 gene, a phosphate transporter whose expression is induced by fungal colonization in host plant roots. No differences were observed when the most stable genes were used as reference genes. However, when GAPDH was used as the reference gene, we observed an overestimation of LePT4 expression. In summary, our results revealed that candidate reference genes present variable stability in S. lycopersicum arbuscular mycorrhizal symbiosis depending on the algorithm and tissue analysed. Thus, reference gene selection is an important issue for obtaining reliable results in gene expression quantification. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
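    geNorm and NormFinder use dedicated pairwise-variation and model-based stability measures. As a much simpler illustration of the underlying idea, ranking candidate genes by a stability statistic, one can rank by coefficient of variation; the expression values below are invented for the example and are not data from the study.

```python
import statistics

# Hypothetical expression values per gene across five samples.
expression = {
    "EF1":   [10.1, 10.0, 10.2, 10.1, 9.9],
    "UBI":   [12.0, 12.2, 11.9, 12.1, 12.0],
    "GAPDH": [15.0, 18.5, 13.2, 17.0, 11.8],  # invented to vary widely
}

def cv(values):
    """Coefficient of variation: a crude stand-in for a stability score."""
    return statistics.stdev(values) / statistics.mean(values)

# Lower CV = more stable candidate reference gene.
ranked = sorted(expression, key=lambda g: cv(expression[g]))
most_stable, least_stable = ranked[0], ranked[-1]
```

    In this toy ranking the widely varying gene lands last, mirroring how an unstable reference such as GAPDH distorts normalization if chosen.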

  15. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model managed by a simplified constructive solid geometry (CSG) model. The collision detection problem is divided into two steps. In the first step, the complexity of the problem is reduced in linear time. In the second step, the remaining solids are tested for intersection. For this, the Simplex algorithm known from linear optimization is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if and only if such a point exists. For the simplified geometric model of Ropsus, this step also runs in linear time, so in conjunction with the first step the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
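    The second step asks whether two convex bodies share a common point, which the paper answers with the Simplex algorithm as a linear feasibility problem. As a compact stand-in, the 2D separating-axis test below answers the same question for convex polygons without an LP solver; it illustrates the intersection test only and is not the Ropsus implementation.

```python
def edges(poly):
    return [(poly[i], poly[(i + 1) % len(poly)]) for i in range(len(poly))]

def project(poly, axis):
    """Project all vertices onto an axis; return the covered interval."""
    dots = [p[0]*axis[0] + p[1]*axis[1] for p in poly]
    return min(dots), max(dots)

def convex_intersect(a, b):
    """Separating-axis test for two convex polygons (vertex lists)."""
    for p, q in edges(a) + edges(b):
        # a normal of edge p->q is a candidate separating axis
        axis = (q[1] - p[1], p[0] - q[0])
        amin, amax = project(a, axis)
        bmin, bmax = project(b, axis)
        if amax < bmin or bmax < amin:
            return False  # found a separating axis: no common point
    return True           # no edge normal separates them: they intersect

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
tri_hit = [(1, 1), (3, 1), (2, 3)]
tri_miss = [(5, 5), (6, 5), (5.5, 6)]
```

    For convex shapes, "no separating axis among the edge normals" is equivalent to "a common point exists", which is the same yes/no answer the LP feasibility formulation provides.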

  16. Investigation of frame-to-frame back projection and feature selection algorithms for non-line-of-sight laser gated viewing

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Velten, Andreas

    2014-10-01

    In the present paper, we discuss new approaches to analyzing laser gated viewing data for non-line-of-sight vision, with a novel frame-to-frame back projection as well as feature selection algorithms. While earlier back projection approaches use time transients for each pixel, our new method is able to calculate the projection of the imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on selectivity, i.e., multiple-target detection, as well as on localization precision.

  17. GENERAL: Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin

    2008-07-01

    The symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy, for both stable and chaotic motion. The results are compared with those of the fourth-order Runge-Kutta and symplectic algorithms, and show that SADA has higher accuracy than the others in long-term calculations of the CR3BP.
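    The advantage of symplectic integration for long-term calculations can be demonstrated on a much simpler Hamiltonian system than the CR3BP. The sketch below (a harmonic oscillator, not SADA itself) contrasts the bounded energy error of first-order symplectic Euler with the unbounded drift of explicit Euler.

```python
# Harmonic oscillator H = (p^2 + q^2)/2 as a minimal stand-in for the CR3BP:
# it shows why symplectic integrators keep long-term energy errors bounded.
def energy(q, p):
    return 0.5 * (q*q + p*p)

def explicit_euler(q, p, h, n):
    for _ in range(n):
        q, p = q + h*p, p - h*q      # non-symplectic update
    return q, p

def symplectic_euler(q, p, h, n):
    for _ in range(n):
        p = p - h*q                  # kick with the old position
        q = q + h*p                  # drift with the new momentum
    return q, p

h, n = 0.01, 100000                  # 1000 time units
e0 = energy(1.0, 0.0)
qe, pe = explicit_euler(1.0, 0.0, h, n)
qs, ps = symplectic_euler(1.0, 0.0, h, n)
drift_explicit = abs(energy(qe, pe) - e0)
drift_symplectic = abs(energy(qs, ps) - e0)
```

    Explicit Euler multiplies the energy by (1 + h²) every step and blows up, while the symplectic map exactly conserves a nearby "shadow" Hamiltonian, so its energy error merely oscillates at O(h). This is the structural property that higher-order symplectic schemes and SADA exploit in long CR3BP integrations.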

  18. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.

  19. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP), which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop, a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  20. Flat-walled multilayered anechoic linings: Optimization and application

    NASA Astrophysics Data System (ADS)

    Xu, Jingfeng; Buchholz, Jörg M.; Fricke, Fergus R.

    2005-11-01

    The concept of flat-walled multilayered absorbent linings for anechoic rooms was proposed three decades ago. Flat-walled linings have the advantage of being less complicated and, hence, less costly to manufacture and install than individual units such as wedges. However, there are difficulties in optimizing the design of such absorbent linings. In the present work, the design of a flat-walled multilayered anechoic lining targeting a 250 Hz cut-off frequency and a 300 mm maximum lining thickness was first optimized using an evolutionary algorithm. Sixteen of the most commonly used commercial fibrous building insulation materials available in Australia were investigated, and fourteen design options (i.e., material combinations) were found by the evolutionary algorithm. These options were then evaluated in accordance with their costs and measured acoustic absorption performances. Finally, the completed anechoic room, where the optimized design was applied, was qualified, and the results showed that a large percentage (75%-85%) of the distance between the sound source and the room boundaries, on the traverses made, was anechoic.
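    A minimal evolutionary algorithm for a lining-like design problem can be sketched as follows. The materials, the "absorption proxy", and the cost figures are all invented for the illustration; the actual optimization used measured material properties and the 250 Hz cut-off target.

```python
import random

random.seed(4)

# Hypothetical materials: (absorption proxy per layer, cost per layer).
materials = [(5, 1.0), (10, 1.5), (20, 2.5), (40, 4.0)]
N_LAYERS, TARGET = 3, 55   # reach combined proxy >= TARGET at lowest cost

def fitness(genome):
    proxy = sum(materials[g][0] for g in genome)
    cost = sum(materials[g][1] for g in genome)
    penalty = 100.0 if proxy < TARGET else 0.0   # punish infeasible designs
    return cost + penalty                        # lower is better

def evolve(pop_size=30, gens=60):
    pop = [[random.randrange(len(materials)) for _ in range(N_LAYERS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LAYERS)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # mutation
                child[random.randrange(N_LAYERS)] = random.randrange(len(materials))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

    The penalty term steers the population toward feasible layer combinations, after which selection pressure minimizes cost, the same feasibility-plus-cost trade-off the lining optimization faced.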

  1. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    The conjugate gradient (CG) method is a line search algorithm best known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under an inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods, and the results obtained indicate that our method is indeed superior.
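    A generic nonlinear CG loop with an inexact (Armijo backtracking) line search can be sketched as below. The Fletcher-Reeves coefficient is used purely for illustration; it is not the AMR*/CD-based coefficient proposed in the paper, and the quadratic objective is invented.

```python
def f(x):
    return (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2

def grad(x):
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]

def norm2(v):
    return v[0]*v[0] + v[1]*v[1]

def cg_minimize(x0, iters=60):
    x = list(x0)
    g = grad(x)
    d = [-g[0], -g[1]]
    for _ in range(iters):
        if norm2(g) < 1e-18:
            break
        slope = g[0]*d[0] + g[1]*d[1]
        if slope >= 0.0:                  # safeguard: restart along -grad
            d, slope = [-g[0], -g[1]], -norm2(g)
        # inexact line search: Armijo backtracking from a unit step
        t, fx = 1.0, f(x)
        while f([x[0] + t*d[0], x[1] + t*d[1]]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = [x[0] + t*d[0], x[1] + t*d[1]]
        g_new = grad(x)
        beta = norm2(g_new) / norm2(g)    # Fletcher-Reeves coefficient
        d = [-g_new[0] + beta*d[0], -g_new[1] + beta*d[1]]
        g = g_new
    return x

x_star = cg_minimize([5.0, 5.0])
```

    Different CG variants differ almost entirely in how `beta` is computed; the sufficient-descent and global-convergence properties claimed in the paper are statements about its new coefficient under exactly this kind of inexact line search.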

  2. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for prognostic problems affected by uncertainty. However, most studies have adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, an important kind of joint component in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
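    A basic bootstrap (SIR) particle filter, the baseline the paper improves on, can be sketched for a scalar crack-growth-like state. Here the transition density itself is the importance density, which is exactly the choice that causes the degeneracy the Gaussian weight-mixture proposal is designed to mitigate; the model constants are invented.

```python
import math
import random

random.seed(3)

N = 500
growth, q_std, r_std = 0.1, 0.02, 0.05   # hypothetical model constants

truth = 1.0
particles = [random.gauss(1.0, 0.1) for _ in range(N)]
estimates = []
for _ in range(50):
    truth += growth + random.gauss(0.0, q_std)       # true crack evolution
    z = truth + random.gauss(0.0, r_std)             # noisy measurement
    # 1) propagate particles through the state-transition model (bootstrap)
    particles = [p + growth + random.gauss(0.0, q_std) for p in particles]
    # 2) weight by the Gaussian measurement likelihood
    weights = [math.exp(-0.5 * ((z - p) / r_std) ** 2) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    # 3) stratified resampling to fight particle degeneracy
    cum, j, resampled = weights[0], 0, []
    for i in range(N):
        pos = (i + random.random()) / N
        while pos > cum and j < N - 1:
            j += 1
            cum += weights[j]
        resampled.append(particles[j])
    particles = resampled

final_error = abs(estimates[-1] - truth)
```

    A better proposal, such as the paper's mixture of measurement and transition densities, draws particles where the likelihood is high in step 1, so fewer weights collapse to zero in step 2.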

  3. [Research on magnetic coupling centrifugal blood pump control based on a self-tuning fuzzy PI algorithm].

    PubMed

    Yang, Lei; Yang, Ming; Xu, Zihao; Zhuang, Xiaoqi; Wang, Wei; Zhang, Haibo; Han, Lu; Xu, Liang

    2014-10-01

    The purpose of this paper is to report the research and design of the control system for the magnetic coupling centrifugal blood pump developed in our laboratory, and to briefly describe the structure of the pump and the principles of the body circulation model. The performance of a blood pump is not only related to its materials and structure but also depends on the control algorithm. We studied a double-loop motor-current control algorithm for the brushless DC motor. To make the algorithm adapt its parameters to different situations, we used a self-tuning fuzzy PI control algorithm and detailed the design of the fuzzy rules. We mainly used Matlab Simulink to simulate the motor control system to test the performance of the algorithm, and we briefly introduce how these algorithms are implemented in the hardware system. Finally, by building the platform and conducting experiments, we show that the self-tuning fuzzy PI control algorithm greatly improves both the dynamic and static performance of the blood pump and makes the motor speed and the pump flow stable and adjustable.
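    The self-tuning idea, adjusting the PI gains on-line as the operating condition changes, can be sketched with a coarse rule table in place of a full fuzzy inference engine. The first-order pump model and the gain schedule below are invented for the illustration and are not the paper's controller.

```python
# Hypothetical first-order motor/pump speed model: dw/dt = (-w + K*u) / tau
K, tau, dt = 2.0, 0.5, 0.01

def pi_gains(err):
    """Coarse rule table standing in for the fuzzy tuner (assumption):
    large error -> aggressive P action, small error -> stronger I action."""
    if abs(err) > 50.0:
        return 0.8, 2.0
    if abs(err) > 10.0:
        return 0.5, 4.0
    return 0.3, 6.0

setpoint, w, integral = 100.0, 0.0, 0.0
history = []
for _ in range(2000):                 # simulate 20 s
    err = setpoint - w
    kp, ki = pi_gains(err)            # self-tuning step
    integral += err * dt
    u = kp * err + ki * integral      # PI control law
    w += dt * (-w + K * u) / tau      # plant update (forward Euler)
    history.append(w)

steady_error = abs(setpoint - history[-1])
```

    A genuine fuzzy PI tuner replaces the hard thresholds with overlapping membership functions and rule inference, so the gains vary smoothly rather than switching abruptly.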

  4. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem in the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through exactly the same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we continually run new data sets through the algorithm, which requires significant effort to stage each data set for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required for each algorithm is automatically provided without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  5. The psychopharmacology algorithm project at the Harvard South Shore Program: an algorithm for acute mania.

    PubMed

    Mohammad, Othman; Osser, David N

    2014-01-01

    This new algorithm for the pharmacotherapy of acute mania was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. The authors conducted a literature search in PubMed and reviewed key studies, other algorithms and guidelines, and their references. Treatments were prioritized based on three main considerations: (1) effectiveness in treating the current episode, (2) preventing potential relapses to depression, and (3) minimizing side effects over the short and long term. The algorithm presupposes that clinicians have made an accurate diagnosis, decided how to manage contributing medical causes (including substance misuse), discontinued antidepressants, and considered the patient's childbearing potential. We propose different algorithms for mixed and nonmixed mania. Patients with mixed mania may be treated first with a second-generation antipsychotic, of which the first choice is quetiapine because of its greater efficacy for depressive symptoms and episodes in bipolar disorder. Valproate and then either lithium or carbamazepine may be added. For nonmixed mania, lithium is the first-line recommendation. A second-generation antipsychotic can be added. Again, quetiapine is favored, but if quetiapine is unacceptable, risperidone is the next choice. Olanzapine is not considered a first-line treatment due to its long-term side effects, but it could be second-line. If the patient, whether mixed or nonmixed, is still refractory to the above medications, then depending on what has already been tried, consider carbamazepine, haloperidol, olanzapine, risperidone, and valproate first tier; aripiprazole, asenapine, and ziprasidone second tier; and clozapine third tier (because of its weaker evidence base and greater side effects). Electroconvulsive therapy may be considered at any point in the algorithm if the patient has a history of positive response or is intolerant of medications.

  6. Authentication Based on Pole-zero Models of Signature Velocity

    PubMed Central

    Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad

    2013-01-01

    With the increase of communication and financial transactions through the internet, on-line signature verification is an accepted biometric technology for access control and plays a significant role in authentication and authorization in modern society. Therefore, fast and precise algorithms for signature verification are very attractive. The goal of this paper is to model the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed, and features are then extracted from strokes. Using linear, Parzen window and support vector machine classifiers, the signature verification technique was tested with a large number of authentic and forged signatures and has demonstrated good potential. The signatures were collected from three different databases: a proprietary database, and the SVC2004 and Sabanci University (SUSIG) benchmark signature databases. Experimental results based on the Persian, SVC2004 and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62% and 3.91% on skilled forgeries, respectively. PMID:24696797

  7. Performance-scalable volumetric data classification for online industrial inspection

    NASA Astrophysics Data System (ADS)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.

  8. Tunable diode laser absorption spectroscopy-based tomography system for on-line monitoring of two-dimensional distributions of temperature and H2O mole fraction.

    PubMed

    Xu, Lijun; Liu, Chang; Jing, Wenyang; Cao, Zhang; Xue, Xin; Lin, Yuzhen

    2016-01-01

    To monitor two-dimensional (2D) distributions of temperature and H2O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors' knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H2O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are at 7444.36 cm(-1) (1343.3 nm) and 7185.6 cm(-1) (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current controlling signals for the laser diodes but also can accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H2O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalent ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H2O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.
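
    The Landweber reconstruction step can be illustrated with a minimal sketch of the classical iteration x_{k+1} = x_k + lam * A^T (b - A x_k) with a non-negativity projection; the 60-ray sensitivity matrix and absorbance data below are synthetic stand-ins, and the paper's specific modification is not reproduced:

```python
import numpy as np

def landweber(A, b, lam, n_iters):
    # x_{k+1} = x_k + lam * A^T (b - A x_k), projected onto x >= 0
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + lam * (A.T @ (b - A @ x))
        x = np.clip(x, 0.0, None)   # physical quantities are non-negative
    return x

# synthetic stand-in for the 60-ray sensitivity matrix and absorbances
rng = np.random.default_rng(0)
A = rng.random((60, 16))                 # 60 rays, 16-pixel image
x_true = rng.random(16)
b = A @ x_true                           # noiseless integrated absorbances
lam = 1.0 / np.linalg.norm(A, 2) ** 2    # step below 2/||A||^2 ensures convergence
x_rec = landweber(A, b, lam, 20000)
```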

  9. Tunable diode laser absorption spectroscopy-based tomography system for on-line monitoring of two-dimensional distributions of temperature and H2O mole fraction

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Liu, Chang; Jing, Wenyang; Cao, Zhang; Xue, Xin; Lin, Yuzhen

    2016-01-01

    To monitor two-dimensional (2D) distributions of temperature and H2O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors' knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H2O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are at 7444.36 cm-1 (1343.3 nm) and 7185.6 cm-1 (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current controlling signals for the laser diodes but also can accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H2O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalent ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H2O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.

  10. Tunable diode laser absorption spectroscopy-based tomography system for on-line monitoring of two-dimensional distributions of temperature and H2O mole fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Lijun, E-mail: lijunxu@buaa.edu.cn; Liu, Chang; Jing, Wenyang

    2016-01-15

    To monitor two-dimensional (2D) distributions of temperature and H2O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors' knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H2O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are at 7444.36 cm-1 (1343.3 nm) and 7185.6 cm-1 (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current controlling signals for the laser diodes but also can accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H2O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalent ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H2O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.

  11. Comparison of three coding strategies for a low cost structure light scanner

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

    Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low-cost structured light scanner for under €100. To reach this goal, the projector and the video camera must be the cheapest available, which leads to some problems related to light coding: a very cheap projector cannot generate complex intensity patterns, and even if it could, a very cheap camera could not capture them. Based on Gray code, three different strategies, called phase-shift, line-shift, and bit-shift, are implemented and compared. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. The full algorithm details and some examples are presented in the paper.
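
    The common building block of the three strategies can be sketched with the classic binary/Gray conversions (the paper's bit-shift variant is not detailed in the abstract, so only standard Gray code is shown here):

```python
def binary_to_gray(n: int) -> int:
    # adjacent integers map to codewords differing in exactly one bit
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # invert by XOR-folding the shifted code back onto itself
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# stripe indices 0..7 as Gray codewords: a one-bit decoding error moves
# the decoded stripe index by at most one position
codes = [binary_to_gray(s) for s in range(8)]
```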

  12. Handwritten digits recognition based on immune network

    NASA Astrophysics Data System (ADS)

    Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe

    2011-11-01

    With the development of society, handwritten digit recognition has been widely applied in production and daily life, yet it remains a difficult problem in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune-network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm combines Jerne's immune network model for feature selection with the KNN method for classification; its characteristic is a novel network with parallel computing and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms (KNN, ANN, and SVM). The results show that the novel immune-network classification algorithm gives promising performance and stable behavior for handwritten digit recognition.
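
    The KNN classification stage that the paper pairs with its immune-network feature selection can be sketched as follows; the 2-D feature vectors are toy stand-ins, not MNIST features:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    d = np.linalg.norm(train_X - x, axis=1)       # Euclidean distances
    nearest = np.argsort(d)[:k]                   # k closest training samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# toy 2-D feature vectors standing in for extracted digit features
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
pred = knn_predict(train_X, train_y, np.array([0.95, 1.0]))
```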

  13. Combining VFH with Bezier for motion planning of an autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Ye, Feng; Yang, Jing; Ma, Chao; Rong, Haijun

    2017-08-01

    The Vector Field Histogram (VFH) is a method for mobile robot obstacle avoidance. However, due to the nonholonomic constraints of the vehicle, the algorithm is seldom applied to autonomous vehicles; in particular, when the vehicle is expected to reach the target location in a certain direction, the algorithm is often unsatisfactory. Fortunately, a Bezier curve is defined by the states of the starting point and the target point, and this feature can be used to guide the vehicle toward the expected direction. We therefore propose an algorithm that combines the Bezier curve with the VFH algorithm: the VFH search method finds collision-free states, and the optimal trajectory point is selected using the Bezier curve as a reference line. This means that we improve the cost function in the VFH algorithm by comparing the distance between candidate directions and the reference line, and finally select the direction closest to the reference line as the optimal motion direction.
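
    The Bezier reference line can be sketched as a cubic curve defined by the start and goal states; the inner control points below are illustrative stand-ins for heading-aligned control points, not the paper's construction:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    # standard Bernstein form of a cubic Bezier curve
    u = 1.0 - t
    x = u**3 * p0[0] + 3*u**2*t * p1[0] + 3*u*t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3*u**2*t * p1[1] + 3*u*t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# start/goal positions; the inner control points (illustrative values)
# would be placed along the start and goal headings
start, goal = (0.0, 0.0), (10.0, 5.0)
c1, c2 = (3.0, 0.0), (7.0, 5.0)
mid = cubic_bezier(start, c1, c2, goal, 0.5)
# candidate VFH directions can then be scored by their distance to
# sampled points of this reference curve
```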

  14. Influence of weather on the stable isotopic ratios of wines: Tools for weather/climate reconstruction?

    NASA Astrophysics Data System (ADS)

    Ingraham, Neil L.; Caldwell, Eric A.

    1999-01-01

    Precipitation, local ground water, soil water, atmospheric water vapor, grape leaf and grape berry water just prior to harvest, and grape must during the wine-making process, from the Napa Valley in northern California were collected for stable isotopic analysis. In addition, 27 red wines and 4 white wines produced in the Napa Valley, and 8 red wines produced in Livermore Valley located over 110 km to the southeast, were analyzed for both oxygen and hydrogen isotopic compositions. The isotopic compositions of the grape leaf water fall on a transpiration line with a slope of 2.1, while those of the grape berry water fall on a transpiration line with a slope of 2.8. The stable isotopic compositions of the 27 red wines from the Napa Valley range from -3 to +20‰ in δD and from +4.6 to +10.2‰ in δ18O and plot along a line described by δD = 3.4 δ18O - 17.2. The maximum difference in the stable oxygen composition between two wineries 110 km apart is only 1.4‰, while the differences between the vintage years within each winery are 4.8 and 5.8‰ in δ18O. The stable isotopic composition of the grape water is controlled by transpiration in the weeks prior to harvest, overshadowing all other effects. As a result of the timing of harvest, the red wines are some 4 to 5‰ more enriched in δ18O than the white wines.
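
    A line such as δD = 3.4 δ18O - 17.2 is an ordinary least-squares fit; a minimal sketch of recovering slope and intercept from paired measurements (the points below are synthetic and placed exactly on that line, not the paper's data):

```python
import numpy as np

# delta-18O and delta-D pairs placed exactly on the reported wine line,
# standing in for measured isotopic compositions
d18O = np.array([4.6, 6.0, 7.5, 9.0, 10.2])
dD = 3.4 * d18O - 17.2
slope, intercept = np.polyfit(d18O, dD, 1)   # ordinary least squares, degree 1
```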

  15. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part I: Model problem analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.

  16. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part I: Model problem analysis

    DOE PAGES

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; ...

    2017-01-20

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. Here, the numerical scheme is verified on a number of difficult benchmark problems.

  17. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total make-span and minimize the setup number simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy and representative of real-case data. An improved genetic algorithm called fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised in which fuzzy expert experience controller (FEEC) is integrated with automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidate, crossover rate, and mutation rate compared with using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of the five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing using a multiobjective fuzzy mixed production assembly line sequencing optimization problem. The simulation results highlight that the performance and efficacy of the proposed novel optimization algorithm are more efficient than the performance of the standard genetic algorithm in mixed assembly line sequencing model. PMID:24982962

  18. LentiPro26: novel stable cell lines for constitutive lentiviral vector production.

    PubMed

    Tomás, H A; Rodrigues, A F; Carrondo, M J T; Coroadinha, A S

    2018-03-27

    Lentiviral vectors (LVs) are excellent tools for promoting gene transfer and stable gene expression. Their potential has already been demonstrated in gene therapy clinical trials for the treatment of diverse disorders. For large-scale LV production, a stable producer system is desirable since it allows scalable and cost-effective viral production with increased reproducibility and safety. However, the development of stable systems has been challenging and time-consuming, the main limitations being the selection of cells with high expression levels of the Gag-Pro-Pol polyprotein and the cytotoxicity associated with some viral components. Described here is the establishment of a new LV producer cell line that uses a mutated, less active viral protease to overcome potential cytotoxicity limitations. Stable transfection of bicistronic expression cassettes with translation re-initiation enabled the generation of LentiPro26 packaging populations supporting high titers. Additionally, by skipping intermediate clone screening steps and performing only one final clone screening, it was possible to save time and generate the LentiPro26-A59 cell line, which constitutively produces titers above 10^6 TU mL^-1 day^-1, in less than six months. This work constitutes a step forward in the development of improved LV producer cell lines, aiming to efficiently supply the expanding clinical applications of gene therapy.

  19. Mobile robot motion estimation using Hough transform

    NASA Astrophysics Data System (ADS)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for estimating mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry at any arbitrary preceding moment, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling, and translation are solved separately, breaking the problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
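
    The Hough mapping the algorithm relies on can be sketched directly: each point (x, y) votes for the (theta, rho) parameters of every line through it, with rho = x cos(theta) + y sin(theta), so collinear points pile votes into one parameter-space cell. The grid sizes below are arbitrary choices, not the paper's:

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_max=20.0, n_rho=200):
    # accumulator over the (theta, rho) straight-line parameter space
    acc = np.zeros((n_theta, n_rho), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y in points:
        for idx, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)
            # bin rho from [-rho_max, rho_max] into n_rho cells
            b = int((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1) + 0.5)
            if 0 <= b < n_rho:
                acc[idx, b] += 1
    return acc, thetas

# points on the vertical line x = 5 all vote for theta = 0, rho = 5
pts = [(5.0, float(y)) for y in range(10)]
acc, thetas = hough_lines(pts)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
```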

  20. Detection of insect damage in almonds

    NASA Astrophysics Data System (ADS)

    Kim, Soowon; Schatzki, Thomas F.

    1999-01-01

    Pinhole insect damage in natural almonds is very difficult to detect on-line. Further, evidence exists relating insect damage to aflatoxin contamination. Hence, for quality and health reasons, methods to detect and remove such damaged nuts are of great importance. In this study, we explored the possibility of using x-ray imaging to detect pinhole damage in almonds caused by insects. X-ray film images of about 2000 almonds and x-ray line-scan images of 522 pinhole-damaged almonds were obtained. The pinhole-damaged region appeared slightly darker than the non-damaged region in x-ray negative images. A machine recognition algorithm was developed to detect these darker regions, using first-order and second-order information to identify the damaged region. To reduce the possibility of false positives due to the germ region in high-resolution images, germ detection and removal routines were also included. With film images, the algorithm achieved approximately an 81 percent correct recognition ratio with only 1 percent false positives, whereas with line-scan images it correctly recognized 65 percent of pinholes with about 9 percent false positives. The algorithm was very fast and efficient, requiring only minimal computation time. If implemented on-line, the theoretical throughput of this recognition system would be 66 nuts/second.
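
    The first-order/second-order idea can be sketched as a region check: flag a candidate region that is darker (lower mean in the x-ray negative) and more variable than the surrounding nut. The thresholds and image patches below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def is_pinhole(region, background, mean_drop=10.0, var_ratio=1.5):
    # darker than surroundings (first order) and more variable (second order)
    return (background.mean() - region.mean() > mean_drop and
            region.var() > var_ratio * background.var())

rng = np.random.default_rng(2)
background = 120.0 + rng.normal(0.0, 2.0, (20, 20))   # sound nut tissue
damaged = 100.0 + rng.normal(0.0, 5.0, (5, 5))        # darker, noisier patch
flag = is_pinhole(damaged, background)
```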

  1. Real-time road detection in infrared imagery

    NASA Astrophysics Data System (ADS)

    Andre, Haritini E.; McCoy, Keith

    1990-09-01

    Automatic road detection is an important part in many scene recognition applications. The extraction of roads provides a means of navigation and position update for remotely piloted vehicles or autonomous vehicles. Roads supply strong contextual information which can be used to improve the performance of automatic target recognition (ATR) systems by directing the search for targets and adjusting target classification confidences. This paper will describe algorithmic techniques for labeling roads in high-resolution infrared imagery. In addition, real-time implementation of this structural approach using a processor array based on the Martin Marietta Geometric Arithmetic Parallel Processor (GAPP) chip will be addressed. The algorithm described is based on the hypothesis that a road consists of pairs of line segments separated by a distance "d" with opposite gradient directions (antiparallel). The general nature of the algorithm, in addition to its parallel implementation in a single instruction, multiple data (SIMD) machine, are improvements to existing work. The algorithm seeks to identify line segments meeting the road hypothesis in a manner that performs well, even when the side of the road is fragmented due to occlusion or intersections. The use of geometrical relationships between line segments is a powerful yet flexible method of road classification which is independent of orientation. In addition, this approach can be used to nominate other types of objects with minor parametric changes.
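
    The antiparallel part of the road hypothesis can be sketched as an angle check on the gradient directions of two candidate edge segments (the tolerance is an assumed parameter, not from the paper):

```python
import math

def angle_diff(a, b):
    # absolute angular difference in [0, pi]
    return abs((a - b + math.pi) % (2.0 * math.pi) - math.pi)

def is_antiparallel(g1, g2, tol=math.radians(10.0)):
    # road edges: gradient directions roughly pi apart
    return abs(angle_diff(g1, g2) - math.pi) < tol

# gradients point in opposite directions across the two sides of a road
left_edge, right_edge = 0.3, 0.3 + math.pi
paired = is_antiparallel(left_edge, right_edge)
```

A full detector would combine this with the separation test, accepting only pairs whose perpendicular distance is near the expected road width "d".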

  2. A Novel Control Strategy for Autonomous Operation of Isolated Microgrid with Prioritized Loads

    NASA Astrophysics Data System (ADS)

    Kumar, R. Hari; Ushakumari, S.

    2018-05-01

    Maintenance of power balance between generation and demand is one of the most critical requirements for the stable operation of a power system network. To mitigate the power imbalance during the occurrence of any disturbance in the system, fast acting algorithms are inevitable. This paper proposes a novel algorithm for load shedding and network reconfiguration in an isolated microgrid with prioritized loads and multiple islands, which will help to quickly restore the system in the event of a fault. The performance of the proposed algorithm is enhanced using genetic algorithm and its effectiveness is illustrated with simulation results on modified Consortium for Electric Reliability Technology Solutions (CERTS) microgrid.

  3. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in the software package PyChemia (https://github.com/MaterialsDiscovery/PyChemia), an open source Python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers.
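
    A hedged sketch of the firefly heuristic itself (not the PyChemia implementation): fireflies move toward brighter (lower-objective) ones with distance-damped attraction plus a shrinking random walk. The objective here is a toy sphere function standing in for a structure energy, and all parameter values are illustrative:

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=300, beta0=1.0, gamma=0.01,
                     alpha=0.2, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    best = list(min(X, key=f))
    for it in range(iters):
        step = alpha * (0.98 ** it)              # shrinking random-walk scale
        for i in range(n):
            for j in range(n):
                if f(X[j]) < f(X[i]):            # j is brighter: i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # damped attraction
                    X[i] = [a + beta * (b - a) + step * rng.uniform(-0.5, 0.5)
                            for a, b in zip(X[i], X[j])]
        cand = min(X, key=f)                     # elitist record of the best
        if f(cand) < f(best):
            best = list(cand)
    return best

best = firefly_minimize(lambda x: sum(v * v for v in x))
```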

  4. The Design and Implementation of Indoor Localization System Using Magnetic Field Based on Smartphone

    NASA Astrophysics Data System (ADS)

    Liu, J.; Jiang, C.; Shi, Z.

    2017-09-01

    Most mainstream indoor localization approaches require a sufficient number of signal nodes. The magnetic field offers high precision, stability, and reliability, and its signals can be received simply and dependably by the geomagnetic sensor of a smartphone, without any external device. After a study of indoor positioning technologies, geomagnetic field data were chosen as fingerprints for an indoor localization system based on the smartphone. A localization algorithm with appropriate geomagnetic matching is designed, together with a filtering algorithm and a coordinate-conversion algorithm. By mapping geomagnetic fingerprints, indoor positioning of a smartphone can be achieved without depending on external devices. Finally, an indoor positioning system based on the Android platform was successfully implemented, and experiments proved the capability and effectiveness of the indoor localization algorithm.

  5. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.

  6. Geometry-based populated chessboard recognition

    NASA Astrophysics Data System (ADS)

    Xie, Youye; Tang, Gongguo; Hoff, William

    2018-04-01

    Chessboards are commonly used to calibrate cameras, and many robust methods have been developed to recognize unoccupied boards. However, when the chessboard is populated with chess pieces, such as during an actual game, the problem of recognizing the board is much harder. Challenges include occlusion caused by the chess pieces, the presence of outlier lines, and low viewing angles of the chessboard. In this paper, we present a novel approach to address the above challenges and recognize the chessboard. The Canny edge detector and Hough transform are used to capture all possible lines in the scene. K-means clustering and a k-nearest-neighbors-inspired algorithm are applied to cluster and reject the outlier lines based on their Euclidean distances to the nearest neighbors in a scaled Hough transform space. Finally, based on prior knowledge of the chessboard structure, a geometric constraint is used to find the correspondences between image lines and the lines on the chessboard through the homography transformation. The proposed algorithm works for a wide range of operating angles and achieves high accuracy in experiments.
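
    The line-clustering step can be sketched with a tiny k-means pass; clustering on line angle alone is a simplification of the paper's scaled Hough-space clustering, and the angles below are synthetic:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    # seed centers spread across the value range
    centers = np.linspace(values.min(), values.max(), k)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its members
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

# line angles (radians) from a hypothetical Hough stage: two families,
# one near 0 and one near pi/2, as on a chessboard grid
thetas = np.array([0.02, -0.01, 0.03, 1.55, 1.58, 1.60])
labels, centers = kmeans_1d(thetas)
```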

  7. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 70's and 80's, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problems are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of applications are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithms development area in recent years has been the advent of parallel computers with multiprocessing capabilities. So, this work is mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to an efficient implementation in concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit algorithms is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application in coarse grain as well as medium grain parallel computers.

  8. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
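
    The RBF-Direct baseline that RBF-RA is designed to bypass can be sketched in a few lines: build the Gaussian-kernel interpolation matrix and solve for the coefficients. As eps shrinks toward the flat regime this system becomes severely ill-conditioned; the eps and test function below are kept moderate so the direct solve is still stable:

```python
import numpy as np

def rbf_interpolant(x_nodes, f_vals, eps):
    # Gaussian-kernel interpolation matrix A_ij = exp(-(eps*(x_i - x_j))^2)
    A = np.exp(-(eps * (x_nodes[:, None] - x_nodes[None, :])) ** 2)
    coeffs = np.linalg.solve(A, f_vals)   # grows ill-conditioned as eps -> 0
    def s(x):
        phi = np.exp(-(eps * (x - x_nodes)) ** 2)
        return phi @ coeffs
    return s

x_nodes = np.linspace(-1.0, 1.0, 11)
f = lambda x: np.sin(np.pi * x)
s = rbf_interpolant(x_nodes, f(x_nodes), eps=3.0)
```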

  9. An acquisition system for CMOS imagers with a genuine 10 Gbit/s bandwidth

    NASA Astrophysics Data System (ADS)

    Guérin, C.; Mahroug, J.; Tromeur, W.; Houles, J.; Calabria, P.; Barbier, R.

    2012-12-01

    This paper presents a high data throughput acquisition system for pixel detector readout such as CMOS imagers. This CMOS acquisition board offers a genuine 10 Gbit/s bandwidth to the workstation and can provide an on-line and continuous high-frame-rate imaging capability. On-line processing can be implemented either on the data acquisition board or on the multi-core workstation, depending on the complexity of the algorithms. The different parts composing the acquisition board have been designed to be used first with a single-photon detector called LUSIPHER (800×800 pixels), developed in our laboratory for scientific applications ranging from nano-photonics to adaptive optics. The architecture of the acquisition board is presented, and the performance achieved by the produced boards is described. Future developments (hardware and software) concerning the on-line implementation of algorithms dedicated to single-photon imaging are also discussed.

  10. Reducing space overhead for independent checkpointing

    NASA Technical Reports Server (NTRS)

    Wang, Yi-Min; Chung, Pi-Yu; Lin, In-Jen; Fuchs, W. Kent

    1992-01-01

    The main disadvantages of independent checkpointing are the possible domino effect and the associated storage space overhead for maintaining multiple checkpoints. In most previous work, it has been assumed that only the checkpoints older than the current global recovery line can be discarded. Here, we generalize the notion of a recovery line to that of a potential recovery line: only the checkpoints belonging to at least one potential recovery line cannot be discarded. By using the model of maximum-sized antichains on a partially ordered set, an efficient algorithm is developed for finding all non-discardable checkpoints, and we show that the number of non-discardable checkpoints cannot exceed N(N+1)/2, where N is the number of processors. Communication trace-driven simulation of several hypercube programs is performed to show the benefit of the proposed algorithm for real applications.

  11. Heuristics for Multiobjective Optimization of Two-Sided Assembly Line Systems

    PubMed Central

    Jawahar, N.; Ponnambalam, S. G.; Sivakumar, K.; Thangadurai, V.

    2014-01-01

    Products such as cars, trucks, and heavy machinery are assembled on two-sided assembly lines. Assembly line balancing has a significant impact on the performance and productivity of flow-line manufacturing systems and has been an active research area for several decades. This paper addresses the line balancing problem of a two-sided assembly line in which each task is to be assigned to the left side (L), the right side (R), or either side (E). Two objectives, the minimum number of workstations and the minimum unbalance time among workstations, are considered for balancing the assembly line. There are two approaches to solving a multiobjective optimization problem: the first combines all the objectives into a single composite function or moves all but one objective to the constraint set; the second determines the Pareto optimal solution set. This paper proposes two heuristics to evolve the optimal Pareto front for the two-sided assembly line balancing problem (TALBP) under consideration: an Enumerative Heuristic Algorithm (EHA) to handle problems of small and medium size and a Simulated Annealing Algorithm (SAA) for large-sized problems. The proposed approaches are illustrated with example problems, and their performance is compared on a set of test problems. PMID:24790568
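    The second approach mentioned above, computing the Pareto optimal set, amounts to filtering out dominated solutions. A minimal sketch (candidate values are invented, not from the paper) for the two objectives used here, number of workstations and unbalance time, both to be minimized:

```python
# Hedged sketch: keep only non-dominated solutions for two minimized
# objectives, (workstations, unbalance_time). Candidate values invented.
def pareto_front(solutions):
    front = []
    for s in solutions:
        # s is dominated if some other solution is no worse in both objectives
        dominated = any(
            d != s and d[0] <= s[0] and d[1] <= s[1]
            for d in solutions
        )
        if not dominated:
            front.append(s)
    return sorted(front)

cands = [(4, 12.0), (5, 6.0), (4, 9.0), (6, 6.5), (5, 8.0)]
print(pareto_front(cands))  # [(4, 9.0), (5, 6.0)]
```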

  12. An atlas of Rapp's 180-th order geopotential.

    NASA Astrophysics Data System (ADS)

    Melvin, P. J.

    1986-08-01

    Deprit's 1979 approach to the summation of the spherical harmonic expansion of the geopotential has been modified to use spherical components and normalized Legendre polynomials. An algorithm has been developed which produces ten fields at the user's option: the undulations of the geoid, three anomalous components of the gravity vector, or six components of the Hessian of the geopotential (gravity gradient). The algorithm is stable to high orders in single precision and does not treat the polar regions as a special case. Eleven contour maps of components of the anomalous geopotential on the surface of the ellipsoid are presented to validate the algorithm.

  13. Algorithm For Hypersonic Flow In Chemical Equilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because it overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.

  14. Verification of fluid-structure-interaction algorithms through the method of manufactured solutions for actuator-line applications

    NASA Astrophysics Data System (ADS)

    Vijayakumar, Ganesh; Sprague, Michael

    2017-11-01

    Demonstrating expected convergence rates with spatial- and temporal-grid refinement is the ``gold standard'' of code and algorithm verification. However, the lack of analytical solutions and the difficulty of generating manufactured solutions present challenges for verifying codes for complex systems. The application of the method of manufactured solutions (MMS) to verification of coupled multi-physics phenomena like fluid-structure interaction (FSI) has only recently been investigated. While many FSI algorithms for aeroelastic phenomena have focused on boundary-resolved CFD simulations, the actuator-line representation of the structure is widely used for FSI simulations in wind-energy research. In this work, we demonstrate the verification of an FSI algorithm using MMS for actuator-line CFD simulations with a simplified structural model. We use a manufactured solution for the fluid velocity field and the displacement of the spring-mass-damper (SMD) structural model. We demonstrate the convergence of both the fluid and structural solvers to second-order accuracy with grid and time-step refinement. This work was funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
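    The convergence check at the heart of MMS verification reduces to computing the observed order of accuracy from errors on successively refined grids. A sketch with illustrative error values (not the study's data):

```python
import math

# Observed order of accuracy p from errors on two grids related by a
# refinement factor r: p = log(e_coarse/e_fine) / log(r).
def observed_order(e_coarse, e_fine, refinement=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement)

errors = [4.0e-3, 1.0e-3, 2.5e-4]   # illustrative: dx halved each time
orders = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
print(orders)   # both ~2.0, consistent with a second-order scheme
```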

  15. Artificial intelligence tools for pattern recognition

    NASA Astrophysics Data System (ADS)

    Acevedo, Elena; Acevedo, Antonio; Felipe, Federico; Avilés, Pedro

    2017-06-01

    In this work, we present a system for pattern recognition that combines the problem-solving power of genetic algorithms with the efficiency of morphological associative memories. We use a set of 48 tire prints divided into 8 brands of tires. The images have dimensions of 200 x 200 pixels. We applied the Hough transform to obtain lines as the main features, yielding 449 lines. The genetic algorithm reduces the number of features to ten suitable lines that thus achieve 100% recognition. Morphological associative memories were used as the evaluation function. The selection algorithms were tournament and roulette wheel. For reproduction, we applied one-point, two-point, and uniform crossover.
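    The two selection schemes named above, tournament and roulette wheel, can be sketched generically as follows (population and fitness values are invented for illustration, not the authors' implementation):

```python
import random

# Roulette-wheel selection: pick proportionally to fitness.
def roulette(pop, fitness, rng):
    total = sum(fitness)
    pick = rng.uniform(0, total)
    acc = 0.0
    for ind, f in zip(pop, fitness):
        acc += f
        if acc >= pick:
            return ind
    return pop[-1]

# Tournament selection: sample k individuals, keep the fittest.
def tournament(pop, fitness, rng, k=2):
    contenders = rng.sample(range(len(pop)), k)
    return pop[max(contenders, key=lambda i: fitness[i])]

rng = random.Random(0)
pop = ["a", "b", "c", "d"]
fit = [1.0, 5.0, 2.0, 0.5]
print(roulette(pop, fit, rng), tournament(pop, fit, rng))
```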

  16. Three-Dimensional Stable Nonorthogonal FDTD Algorithm with Adaptive Mesh Refinement for Solving Maxwell’s Equations

    DTIC Science & Technology

    2013-03-01

    Räisänen. An efficient FDTD algorithm for the analysis of microstrip patch antennas printed on a general anisotropic dielectric substrate. IEEE...applications [3, 21, 22], including antenna , microwave circuits, geophysics, optics, etc. The Ground Penetrating Radar (GPR) is a popular and...IEEE Trans. Antennas Propag., 41:994–999, 1993. 16 [6] S. G. Garcia, T. M. Hung-Bao, R. G. Martin, and B. G. Olmedo. On the application of finite

  17. Positive position control of robotic manipulators

    NASA Technical Reports Server (NTRS)

    Baz, A.; Gumusel, L.

    1989-01-01

    The present simple and accurate position-control algorithm, which is applicable to fast-moving and lightly damped robot arms, is based on the positive position feedback (PPF) strategy and relies solely on position sensors monitoring the joint angles of the robotic arm to furnish stable position control. Optimized tuned filters, implemented as a set of difference equations, shape the position signals to improve robotic system performance. Attention is given to comparisons between this PPF controller's experimentally ascertained performance characteristics and those of a conventional proportional controller.

  18. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part I: Model problem analysis

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi

    2017-08-01

    A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added-mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this first part of a two-part series, the properties of the AMP scheme are motivated and evaluated through the development and analysis of some model problems. The analysis shows when and why the traditional partitioned scheme becomes unstable due to either added-mass or added-damping effects. The analysis also identifies the proper form of the added-damping which depends on the discrete time-step and the grid-spacing normal to the rigid body. The results of the analysis are confirmed with numerical simulations that also demonstrate a second-order accurate implementation of the AMP scheme.

  19. SDSS-IV eBOSS emission-line galaxy pilot survey

    DOE PAGES

    Comparat, J.; Delubac, T.; Jouvel, S.; ...

    2016-08-09

    The Sloan Digital Sky Survey IV extended Baryonic Oscillation Spectroscopic Survey (SDSS-IV/eBOSS) will observe 195,000 emission-line galaxies (ELGs) to measure the Baryonic Acoustic Oscillation (BAO) standard ruler at redshift 0.9. To test different ELG selection algorithms, 9,000 spectra were observed with the SDSS spectrograph as a pilot survey based on data from several imaging surveys. First, using visual inspection and redshift quality flags, we show that the automated spectroscopic redshifts assigned by the pipeline meet the quality requirements for a reliable BAO measurement. We also show the correlations between sky emission, signal-to-noise ratio in the emission lines, and redshift error. Then we provide a detailed description of each target selection algorithm we tested and compare them with the requirements of the eBOSS experiment. As a result, we provide reliable redshift distributions for the different target selection schemes we tested. Lastly, we determine the target selection algorithm best suited for application to DECam photometry because it fulfills the eBOSS survey efficiency requirements.

  20. A high cell density transient transfection system for therapeutic protein expression based on a CHO GS-knockout cell line: process development and product quality assessment.

    PubMed

    Rajendra, Yashas; Hougland, Maria D; Alam, Riazul; Morehead, Teresa A; Barnard, Gavin C

    2015-05-01

    Transient gene expression (TGE) is a rapid method for the production of recombinant proteins in mammalian cells. While the volumetric productivity of TGE has improved significantly over the past decade, most methods involve extensive cell line engineering and plasmid vector optimization in addition to long fed-batch cultures lasting up to 21 days. Our colleagues have recently reported the development of a CHO K1SV GS-KO host cell line. By creating a bi-allelic glutamine synthetase knockout of the original CHOK1SV host cell line, they were able to improve the efficiency of generating high-producing stable CHO lines for drug product manufacturing. We developed a TGE method using the same CHO K1SV GS-KO host cell line without any further cell line engineering. We also refrained from performing plasmid vector engineering. Our objective was to set up a TGE process to mimic the protein quality attributes obtained from a stable CHO cell line. Polyethyleneimine (PEI)-mediated transfections were performed at high cell density (4 × 10^6 cells/mL) followed by immediate growth arrest at 32 °C for 7 days. Optimizing DNA and PEI concentrations proved to be important. Interestingly, we found the direct transfection method (where DNA and PEI are added sequentially) to be superior to the more common indirect method (where DNA and PEI are first pre-complexed). Moreover, the addition of a single feed solution and a polar solvent (N,N-dimethylacetamide) significantly increased product titers. The scalability of the process from 2 mL to 2 L was demonstrated using multiple proteins and multiple expression volumes. Using this simple, short, 7-day TGE process, we were able to successfully produce 54 unique proteins in a fraction of the time that would have been required to produce the respective stable CHO cell lines. The list of 54 unique proteins includes mAbs, bispecific antibodies, and Fc-fusion proteins. Antibody titers of up to 350 mg/L were achieved with the simple 7-day process. Titers were increased to 1 g/L by extending the culture to 16 days. We also present two case studies comparing the product quality of material generated by transient HEK293, transient CHO K1SV GS-KO, and a stable CHO K1SV KO pool. Protein from transient CHO was more representative of stable CHO protein than protein produced from HEK293. © 2014 Wiley Periodicals, Inc.

  1. An efficient parallel algorithm for the calculation of unrestricted canonical MP2 energies.

    PubMed

    Baker, Jon; Wolinski, Krzysztof

    2011-11-30

    We present details of our efficient implementation of full accuracy unrestricted open-shell second-order canonical Møller-Plesset (MP2) energies, both serial and parallel. The algorithm is based on our previous restricted closed-shell MP2 code using the Saebo-Almlöf direct integral transformation. Depending on system details, UMP2 energies take from less than 1.5 to about 3.0 times as long as a closed-shell RMP2 energy on a similar system using the same algorithm. Several examples are given including timings for some large stable radicals with 90+ atoms and over 3600 basis functions. Copyright © 2011 Wiley Periodicals, Inc.

  2. Optimization of self-interstitial clusters in 3C-SiC with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ko, Hyunseok; Kaczmarowski, Amy; Szlufarska, Izabela; Morgan, Dane

    2017-08-01

    Under irradiation, SiC develops damage commonly referred to as black spot defects, which are speculated to be self-interstitial atom clusters. To understand the evolution of these defect clusters and their impacts (e.g., through radiation-induced swelling) on the performance of SiC in nuclear applications, it is important to identify the cluster composition, structure, and shape. In this work the genetic algorithm code StructOpt was utilized to identify ground-state cluster structures in 3C-SiC. The genetic algorithm was used to explore clusters of up to ∼30 interstitials of C-only, Si-only, and Si-C mixtures embedded in the SiC lattice. We performed the structure search using Hamiltonians from both density functional theory and empirical potentials. The thermodynamic stability of clusters was investigated in terms of their composition (with a focus on Si-only, C-only, and stoichiometric) and shape (spherical vs. planar), as a function of the cluster size (n). Our results suggest that large Si-only clusters are likely unstable, and clusters are predominantly C-only for n ≤ 10 and stoichiometric for n > 10. The results imply that there is an evolution of the shape of the most stable clusters, where small clusters are stable in more spherical geometries while larger clusters are stable in more planar configurations. We also provide an estimated energy vs. size relationship, E(n), for use in future analysis.

  3. A Novel Dynamic Physical Layer Impairment-Aware Routing and Wavelength Assignment (PLI-RWA) Algorithm for Mixed Line Rate (MLR) Wavelength Division Multiplexed (WDM) Optical Networks

    NASA Astrophysics Data System (ADS)

    Iyer, Sridhar

    2016-12-01

    The ever-increasing global Internet traffic will inevitably lead to a serious upgrade of the current optical networks' capacity. The legacy infrastructure can be enhanced not only by increasing the capacity but also by adopting advanced modulation formats with increased spectral efficiency at higher data rates. In a transparent mixed-line-rate (MLR) optical network, different line rates, on different wavelengths, can coexist on the same fiber. Migration to data rates higher than 10 Gbps requires the implementation of phase modulation schemes. However, the co-existing on-off keying (OOK) channels cause critical physical layer impairments (PLIs) to the phase-modulated channels, mainly due to cross-phase modulation (XPM), which in turn limits the network's performance. To mitigate this effect, a more sophisticated PLI-aware routing and wavelength assignment (PLI-RWA) scheme needs to be adopted. In this paper, we investigate the critical impairment for each data rate and the way it affects the quality of transmission (QoT). In view of the above, we present a novel dynamic PLI-RWA algorithm for MLR optical networks. The proposed algorithm is compared through simulations with the shortest-path and minimum-hop routing schemes. The simulation results show that the performance of the proposed algorithm is better than that of the existing schemes.
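    As background, the baseline wavelength-assignment step that any PLI-aware scheme refines can be sketched as first-fit along a fixed route (a generic illustration with invented link names; the paper's algorithm additionally checks impairment-derived QoT before accepting an assignment):

```python
# Generic first-fit wavelength assignment along a fixed route.
# in_use maps each link to the set of wavelengths already occupied on it.
def first_fit(route_links, in_use, num_wavelengths):
    for w in range(num_wavelengths):
        # wavelength-continuity constraint: w must be free on every link
        if all(w not in in_use[link] for link in route_links):
            for link in route_links:
                in_use[link].add(w)
            return w
    return None   # connection request is blocked

in_use = {"A-B": {0}, "B-C": {0, 1}}
print(first_fit(["A-B", "B-C"], in_use, 4))  # 2
```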

  4. Statistics-based optimization of the polarimetric radar hydrometeor classification algorithm and its application for a squall line in South China

    NASA Astrophysics Data System (ADS)

    Wu, Chong; Liu, Liping; Wei, Ming; Xi, Baozhu; Yu, Minghui

    2018-03-01

    A modified hydrometeor classification algorithm (HCA) is developed in this study for Chinese polarimetric radars, based on the U.S. operational HCA. A methodology of statistics-based optimization is proposed, comprising calibration checking, dataset selection, membership function modification, computation threshold modification, and effect verification. These procedures are applied to the Zhuhai radar, the first operational polarimetric radar in South China. The systematic calibration bias is corrected; the reliability of radar measurements is found to deteriorate when the signal-to-noise ratio is low, and the correlation coefficient within the melting layer is usually lower than that of the U.S. WSR-88D radar. Through modification based on statistical analysis of polarimetric variables, an HCA localized for Zhuhai is obtained, and it performs well over a one-month test through comparison with sounding and surface observations. The algorithm is then utilized for analysis of a squall line process on 11 May 2014 and is found to provide reasonable details with respect to horizontal and vertical structures, and the HCA results, especially in the mixed rain-hail region, can reflect the life cycle of the squall line. In addition, the kinematic and microphysical processes of cloud evolution and the differences between radar-detected hail and surface observations are also analyzed. The results of this study provide evidence for the improvement of this HCA developed specifically for China.

  5. Fireworks algorithm for mean-VaR/CVaR models

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that it is feasible and promising to apply it in this field.
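    The risk measures underlying the mean-VaR/CVaR models can be computed from a return sample by historical simulation (a generic sketch with invented returns, not the paper's experimental setup):

```python
# Historical VaR and CVaR (expected shortfall) at confidence level alpha.
def var_cvar(returns, alpha=0.95):
    losses = sorted(-r for r in returns)   # losses, ascending
    k = int(alpha * len(losses))
    k = min(k, len(losses) - 1)            # guard the index
    var = losses[k]                        # loss at the alpha quantile
    tail = losses[k:]                      # losses at or beyond VaR
    cvar = sum(tail) / len(tail)           # mean of the tail losses
    return var, cvar

rets = [0.02, -0.01, 0.015, -0.03, 0.005, -0.02, 0.01, -0.005, 0.025, -0.015]
v, c = var_cvar(rets, alpha=0.9)
print(v, c)
```

By construction CVaR is never smaller than VaR, which is why the mean-CVaR model penalizes heavy loss tails more strongly.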

  6. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes of gray scale or texture are not obvious in close-range stereo images. Their main shortcoming is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use both geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper that considers the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements. First, shape factors, fuzzy mathematics, and gray-scale projection are introduced into the design of a composite matching measure. Second, the topological connectivity of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point. Last, the theory of constrained parameter adjustment is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.

  7. Semiannual Report, April 1, 1989 through September 30, 1989 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-02-01

    noise. Tobias B. Orloff Work began on developing a high quality rendering algorithm based on the radiosity method. The algorithm is similar to...previous progressive radiosity algorithms except for the following improvements: 1. At each iteration vertex radiosities are computed using a modified scan...line approach, thus eliminating the quadratic cost associated with a ray tracing computation of vertex radiosities. 2. At each iteration the scene is

  8. Isotopic determination of uranium in soil by laser induced breakdown spectroscopy

    DOE PAGES

    Chan, George C. -Y.; Choi, Inhee; Mao, Xianglei; ...

    2016-03-26

    Laser-induced breakdown spectroscopy (LIBS) operated under ambient pressure has been evaluated for isotopic analysis of uranium in real-world samples such as soil, with U concentrations in the single-digit percentage levels. The study addresses the requirements for spectral decomposition of 235U and 238U atomic emission peaks that are only partially resolved. Although non-linear least-squares fitting algorithms are typically able to locate the optimal combination of fitting parameters that best describes the experimental spectrum even when all fitting parameters are treated as free independent variables, the analytical results of such an unconstrained free-parameter approach are ambiguous. In this work, five spectral decomposition algorithms were examined, with different known physical properties (e.g., isotopic splitting, hyperfine structure) of the spectral lines sequentially incorporated into the candidate algorithms as constraints. It was found that incorporation of such spectral-line constraints into the decomposition algorithm is essential for the best isotopic analysis. The isotopic abundance of 235U was determined from a simple two-component Lorentzian fit of the U II 424.437 nm spectral profile. For six replicate measurements, each with only fifteen laser shots, on a soil sample with U concentration at 1.1% w/w, the determined 235U isotopic abundance was (64.6 ± 4.8)%, which agreed well with the certified value of 64.4%. Another U line studied, U I 682.691 nm, possesses hyperfine structure that is comparatively broad, amounting to a significant fraction of the isotopic shift. Thus, 235U isotopic analysis with this U I line was performed with a spectral decomposition involving individual hyperfine components. For the soil sample with 1.1% w/w U, the determined 235U isotopic abundance was (60.9 ± 2.0)%, which exhibited a relative bias of about 6% from the certified value. The bias was attributed to the spectral resolution of the measurement system: the measured line width for this U I line was larger than its isotopic splitting. In conclusion, although not the best emission line for isotopic analysis, this U I emission line is sensitive for elemental analysis, with a detection limit of 500 ppm U in the soil matrix; the detection limit for the U II 424.437 nm line was 2000 ppm.
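    With line centers and widths held fixed, a two-component Lorentzian decomposition is linear in the component amplitudes, so the isotopic abundance follows from a least-squares solve. The sketch below uses invented centers and widths (not the paper's calibration) and recovers the abundance of a simulated noiseless spectrum:

```python
import numpy as np

# Unit-height Lorentzian profile centered at x0 with FWHM w.
def lorentz(x, x0, w):
    return (w / 2) ** 2 / ((x - x0) ** 2 + (w / 2) ** 2)

x = np.linspace(424.3, 424.6, 400)
c235, c238, width = 424.412, 424.437, 0.02   # hypothetical values (nm)
basis = np.column_stack([lorentz(x, c235, width), lorentz(x, c238, width)])
spectrum = basis @ np.array([0.644, 0.356])  # simulate a 64.4% 235U sample
amps, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
print(amps[0] / amps.sum())                  # recovered 235U abundance
```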

  9. GlobiPack v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe

    2010-03-31

    GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure for Newton and quasi-Newton optimization and nonlinear equation solver methods. They are standard published 1-D line search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line-search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever of any particular application; you cannot find more general mathematical software.
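    A representative 1-D line search of the kind described in Nocedal and Wright is backtracking with the Armijo sufficient-decrease condition. The sketch below follows the textbook method, not GlobiPack's actual API:

```python
# Backtracking Armijo line search for a 1-D problem (illustrative).
# Shrinks the step until f(x + alpha*d) <= f(x) + c*alpha*grad(x)*d.
def backtracking(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    fx, gd = f(x), grad(x) * d          # gd < 0 for a descent direction
    while f(x + alpha * d) > fx + c * alpha * gd:
        alpha *= rho                    # sufficient decrease not met: shrink
    return alpha

f = lambda x: (x - 3.0) ** 2
g = lambda x: 2.0 * (x - 3.0)
x0 = 0.0
step = backtracking(f, g, x0, -g(x0))   # search along the negative gradient
print(step, f(x0 + step * -g(x0)))
```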

  10. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

    The energy storage system is the core of maintaining stable operation of a smart micro-grid. To address problems of existing energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause fluctuations in the micro-grid, a new intelligent battery management system based on a field-programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Finally, to address the problem that, when the battery state of charge is estimated by a neural network, inaccurate initialization of weights and thresholds leads to large errors in the prediction results, a genetic algorithm is proposed to optimize the neural network, and an experimental simulation is carried out. The experimental results show that the algorithm has high precision and provides a guarantee for the stable operation of the micro-grid.
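    The GA-optimized neural network idea can be sketched in miniature: a genetic algorithm searches the weight vector of a tiny network against a toy regression target (network size and data are ours, not the paper's state-of-charge estimator):

```python
import math
import random

# Tiny 1-2-1 network with a tanh hidden layer; w holds all 7 parameters.
def predict(w, x):
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

def mse(w, data):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

rng = random.Random(1)
data = [(x / 10, math.sin(x / 10)) for x in range(-30, 31)]  # toy target
pop = [[rng.uniform(-1, 1) for _ in range(7)] for _ in range(30)]
init_err = min(mse(w, data) for w in pop)
for _ in range(40):                      # evolve: elitism + Gaussian mutation
    pop.sort(key=lambda w: mse(w, data))
    elite = pop[:10]
    pop = elite + [[g + rng.gauss(0, 0.1) for g in rng.choice(elite)]
                   for _ in range(20)]
best = min(pop, key=lambda w: mse(w, data))
print(init_err, mse(best, data))         # error never worse than the start
```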

  11. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space, and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for line following. The line-following algorithm images two windows and locates their centroids; with the knowledge that the points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
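    Once the two window centroids have been mapped to ground coordinates, the line's heading and its offset from the robot centroid follow from plane geometry. A minimal sketch, assuming the image-to-ground mapping has already been applied (the real system derives that mapping from the camera geometry):

```python
import math

# Two points on the line, in ground coordinates (meters) with the robot
# centroid at the origin and +y the direction of travel.
def line_params(p_near, p_far):
    dx, dy = p_far[0] - p_near[0], p_far[1] - p_near[1]
    angle = math.atan2(dx, dy)           # heading error relative to travel
    # perpendicular distance of the origin (robot centroid) from the line
    offset = abs(p_near[0] * dy - p_near[1] * dx) / math.hypot(dx, dy)
    return angle, offset

# Example: line 0.2 m to the right, parallel to the direction of travel.
a, o = line_params((0.2, 0.5), (0.2, 1.5))
print(a, o)   # 0.0 heading error, 0.2 m offset
```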

  12. Red to far-red multispectral fluorescence image fusion for detection of fecal contamination on apples

    USDA-ARS?s Scientific Manuscript database

    This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...

  13. Pattern Recognition by Retina-Like Devices.

    ERIC Educational Resources Information Center

    Weiman, Carl F. R.; Rothstein, Jerome

    This study has investigated some pattern recognition capabilities of devices consisting of arrays of cooperating elements acting in parallel. The problem of recognizing straight lines in general position on the quadratic lattice has been completely solved by applying parallel acting algorithms to a special code for lines on the lattice. The…

  14. Antineoplastic agents extravasation from peripheral intravenous line in children: a simple strategy for a safer nursing care.

    PubMed

    Chanes, Daniella Cristina; da Luz Gonçalves Pedreira, Mavilde; de Gutiérrez, Maria Gaby Rivero

    2012-02-01

    The infusion of antineoplastic agents through peripheral lines may lead to several adverse events, among which extravasation is one of the most severe acute reactions of this sort of treatment. Extravasation prevention and management must be part of safe, evidence-based nursing care. For this reason, two algorithms were developed to guide nursing care for children who undergo chemotherapy through a peripheral line. The objectives of this study were to determine the content validity of both algorithms with pediatric oncology nurses in Brazil and the United States of America, and to verify the agreement between the evaluations of the two groups. A descriptive validation study was carried out using the Delphi technique, which has the following steps: development of the data collection instrument, application to the specialists, data analysis, algorithm review, re-evaluation by the specialists, final data analysis, and content validity determination. The data analysis was descriptive and based on a specialist agreement consensus of 80% or higher in every step of the algorithms. The process showed that agreement with both instruments ranged from 92.8% to 99.0%. The algorithms are valid for application in nursing care with the main purpose of preventing and managing antineoplastic agent extravasation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach

    PubMed Central

    Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J

    2012-01-01

    A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed; this facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI between 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated. PMID:23459617
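A similarity index between two segmentations is commonly the Dice overlap; assuming that definition (the paper may use a variant), a minimal sketch:

```python
def similarity_index(mask_a, mask_b):
    """Dice-style similarity index between two binary segmentation masks,
    given as sets of voxel coordinates: SI = 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two toy "tumor" masks sharing 3 voxels out of 4 each:
seg1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
seg2 = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(round(similarity_index(seg1, seg2), 3))  # 0.75
```

An SI of 1.0 means perfect overlap; the reported inter-reader SI of 79.53% indicates how much two human delineations of the same tumor typically differ.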

  16. Prediction of anti-cancer drug response by kernelized multi-task learning.

    PubMed

    Tan, Mehmet

    2016-10-01

    Chemotherapy or targeted therapy are two of the main treatment options for many types of cancer. Due to the heterogeneous nature of cancer, the success of the therapeutic agents differs among patients. In this sense, determination of chemotherapeutic response of the malign cells is essential for establishing a personalized treatment protocol and designing new drugs. With the recent technological advances in producing large amounts of pharmacogenomic data, in silico methods have become important tools to achieve this aim. Data produced by using cancer cell lines provide a test bed for machine learning algorithms that try to predict the response of cancer cells to different agents. The potential use of these algorithms in drug discovery/repositioning and personalized treatments motivated us in this study to work on predicting drug response by exploiting the recent pharmacogenomic databases. We aim to improve the prediction of drug response of cancer cell lines. We propose to use a method that employs multi-task learning to improve learning by transfer, and kernels to extract non-linear relationships to predict drug response. The method outperforms three state-of-the-art algorithms on three anti-cancer drug screen datasets. We achieved a mean squared error of 3.305 and 0.501 on two different large scale screen data sets. On a recent challenge dataset, we obtained an error of 0.556. We report the methodological comparison results as well as the performance of the proposed algorithm on each single drug. The results show that the proposed method is a strong candidate to predict drug response of cancer cell lines in silico for pre-clinical studies. The source code of the algorithm and data used can be obtained from http://mtan.etu.edu.tr/Supplementary/kMTrace/. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. [Establishment and characterization of a new carcinoma cell line from uterine cervix of Uyghur women].

    PubMed

    Zhang, Lu; Aerziguli, Tursun; Guzalnur, Abliz

    2012-04-01

    To establish a uterine cervical carcinoma cell line of Uyghur ethnical background and to evaluate the related biological characteristics for future biomedical investigations of diseases in the Uyghur population. Poorly-differentiated squamous cell carcinoma specimens of Uyghur patients were obtained and cultured in vitro by enzymatic digestion method, followed by continuous passaging to reach a stable growth determined by cell viability and growth curve. Morphological study, cell cycling and chromosomal analysis were performed. Tumorigenesis study was conducted by inoculation of nude mice. Biomarker (CK17, CD44, Ki-67, CK14 and vimentin) expression was detected by immunofluorescence and immunocytochemical techniques. A cervical carcinoma cell line was successfully established and maintained for 12 months through 70 passages. The cell line had a stable growth with a population doubling time of 51.9 h. Flask method and double agar-agar assay showed that the cell line had colony-forming rates of 32.5% and 15.6%, respectively. Ultrastructural evaluation demonstrated numerous cell surface protrusions or microvilli, a large number of rod-shape structures in cytoplasm, typical desmosomes and nuclear atypia. Chromosomal analysis revealed human karyotype with the number of chromosomes per cell varying from 32 - 97 with a majority of 54 - 86 (60.3%). Xenogeneic tumors formed in nude mice showed histological structures identical to those of the primary tumor. The cells had high expression of CK17, CD44, Ki-67 and vimentin but no CK14 expression. A cervical carcinoma cell line from a female Uyghur patient is successfully established. The cell line has the characteristics of human cervical squamous cell carcinoma, and it is stable with maintaining the characteristic biological and morphological features in vitro for more than 12 months, therefore, qualified as a stable cell line for further biomedical research.

  18. Evolving biomarkers improve prediction of long-term mortality in patients with stable coronary artery disease: the BIO-VILCAD score.

    PubMed

    Kleber, M E; Goliasch, G; Grammer, T B; Pilz, S; Tomaschitz, A; Silbernagel, G; Maurer, G; März, W; Niessner, A

    2014-08-01

    Algorithms to predict the future long-term risk of patients with stable coronary artery disease (CAD) are rare. The VIenna and Ludwigshafen CAD (VILCAD) risk score was one of the first scores specifically tailored for this clinically important patient population. The aim of this study was to refine risk prediction in stable CAD creating a new prediction model encompassing various pathophysiological pathways. Therefore, we assessed the predictive power of 135 novel biomarkers for long-term mortality in patients with stable CAD. We included 1275 patients with stable CAD from the LUdwigshafen RIsk and Cardiovascular health study with a median follow-up of 9.8 years to investigate whether the predictive power of the VILCAD score could be improved by the addition of novel biomarkers. Additional biomarkers were selected in a bootstrapping procedure based on Cox regression to determine the most informative predictors of mortality. The final multivariable model encompassed nine clinical and biochemical markers: age, sex, left ventricular ejection fraction (LVEF), heart rate, N-terminal pro-brain natriuretic peptide, cystatin C, renin, 25OH-vitamin D3 and haemoglobin A1c. The extended VILCAD biomarker score achieved a significantly improved C-statistic (0.78 vs. 0.73; P = 0.035) and net reclassification index (14.9%; P < 0.001) compared to the original VILCAD score. Omitting LVEF, which might not be readily measurable in clinical practice, slightly reduced the accuracy of the new BIO-VILCAD score but still significantly improved risk classification (net reclassification improvement 12.5%; P < 0.001). The VILCAD biomarker score based on routine parameters complemented by novel biomarkers outperforms previous risk algorithms and allows more accurate classification of patients with stable CAD, enabling physicians to choose more personalized treatment regimens for their patients.
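The C-statistic reported here measures concordance between risk score and outcome. In its simplified binary-outcome form (the study uses the time-to-event version, which additionally accounts for censoring) it can be sketched as:

```python
from itertools import product

def c_statistic(scores_events, scores_nonevents):
    """Concordance (C-statistic) of a risk score for a binary outcome:
    the fraction of (event, non-event) pairs in which the event case got
    the higher score, with ties counting one half. A simplified sketch of
    the survival C-index used in the paper."""
    pairs = list(product(scores_events, scores_nonevents))
    concordant = sum(1.0 if e > c else 0.5 if e == c else 0.0
                     for e, c in pairs)
    return concordant / len(pairs)

# Toy risk scores: deceased patients (left) mostly scored higher
# than survivors (right); values are illustrative only.
print(round(c_statistic([0.9, 0.8, 0.4], [0.3, 0.2, 0.8]), 3))  # 0.833
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect ranking, which puts the reported improvement from 0.73 to 0.78 in context.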

  19. Eye-Tracking Reveals that the Strength of the Vertical-Horizontal Illusion Increases as the Retinal Image Becomes More Stable with Fixation

    PubMed Central

    Chouinard, Philippe A.; Peel, Hayden J.; Landry, Oriane

    2017-01-01

    The closer a line extends toward a surrounding frame, the longer it appears. This is known as a framing effect. Over 70 years ago, Teodor Künnapas demonstrated that the shape of the visual field itself can act as a frame to influence the perceived length of lines in the vertical-horizontal illusion. This illusion is typically created by having a vertical line rise from the center of a horizontal line of the same length creating an inverted T figure. We aimed to determine if the degree to which one fixates on a spatial location where the two lines bisect could influence the strength of the illusion, assuming that the framing effect would be stronger when the retinal image is more stable. We performed two experiments: the visual-field and vertical-horizontal illusion experiments. The visual-field experiment demonstrated that the participants could discriminate a target more easily when it was presented along the horizontal vs. vertical meridian, confirming a framing influence on visual perception. The vertical-horizontal illusion experiment determined the effects of orientation, size and eye gaze on the strength of the illusion. As predicted, the illusion was strongest when the stimulus was presented in either its standard inverted T orientation or when it was rotated 180° compared to other orientations, and in conditions in which the retinal image was more stable, as indexed by eye tracking. Taken together, we conclude that the results provide support for Teodor Künnapas’ explanation of the vertical-horizontal illusion. PMID:28392764

  20. Personalized recommendation based on unbiased consistence

    NASA Astrophysics Data System (ADS)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite network provide an efficient solution by automatically pushing possible relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms just focus on unidirectional mass diffusion from objects having been collected to those which should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases, a user's interests are stable, and thus bidirectional mass diffusion abilities, no matter originated from objects having been collected or from those which should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming the state-of-the-art recommendation algorithms in disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
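As background, the classical unidirectional mass diffusion that this letter generalizes can be sketched as follows; the toy network and names are illustrative, and the paper's bidirectional variant adds a symmetric backward pass:

```python
def mass_diffusion_scores(user_items, target):
    """One round of classical (unidirectional) mass diffusion on a
    user-item bipartite network: resource flows items -> users -> items.
    user_items: dict mapping each user to the set of items collected."""
    items = {i for s in user_items.values() for i in s}
    item_users = {i: {u for u, s in user_items.items() if i in s} for i in items}
    # Step 1: each item collected by the target holds one unit of resource,
    # split equally among the users who collected it.
    user_res = {u: 0.0 for u in user_items}
    for i in user_items[target]:
        for u in item_users[i]:
            user_res[u] += 1.0 / len(item_users[i])
    # Step 2: each user redistributes equally over the items they collected.
    item_res = {i: 0.0 for i in items}
    for u, res in user_res.items():
        for i in user_items[u]:
            item_res[i] += res / len(user_items[u])
    # Rank only items the target has not collected yet.
    return {i: s for i, s in item_res.items() if i not in user_items[target]}

net = {'u1': {'a', 'b'}, 'u2': {'b', 'c'}, 'u3': {'c'}}
print(mass_diffusion_scores(net, 'u1'))  # {'c': 0.25}
```

Items with higher final resource are recommended first; the letter's argument is that running this diffusion in one direction only biases the resulting similarity estimate.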

  1. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner, and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the center of mass (CoM). We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  2. An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling

    NASA Astrophysics Data System (ADS)

    Qiu, X. N.; Lau, H. Y. K.

    The problem of job shop scheduling in a dynamic environment where random perturbation exists in the system is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP) where unexpected events occurred randomly. This algorithm is designed based on dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate this algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the rescheduling times under the defined objective and to keep the scheduling process stable and efficient.

  3. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    EPA Science Inventory

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  4. Tracking Objects with Networked Scattered Directional Sensors

    NASA Astrophysics Data System (ADS)

    Plarre, Kurt; Kumar, P. R.

    2007-12-01

    We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinated transformation. The estimation is done in an "ad-hoc" coordinate system, which we call "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.

  5. Tele-autonomous control involving contacts: The applications of a high precision laser line range sensor

    NASA Technical Reports Server (NTRS)

    Volz, R. A.; Shao, L.; Walker, M. W.; Conway, L. A.

    1989-01-01

    An object localization algorithm based on line-segment matching is presented. The method is very simple and computationally fast; in most cases, closed-form formulas are used to derive the solution. The method is also quite flexible, because only a few surfaces (one or two) need to be accessed (sensed) to gather the necessary range data. For example, if the line segments are extracted from the boundaries of a planar surface, only the parameters of one surface and two of its boundaries need to be extracted, as compared with traditional point-surface matching or line-surface matching algorithms, which need to access at least three surfaces in order to locate a planar object. Therefore, this method is especially suitable for applications where an object is surrounded by many other work pieces and most of the object is very difficult, if not impossible, to measure, or where not all parts of the object can be reached. The theoretical groundwork for using a line range sensor to locate an object has been laid; much work remains to be done before the method is fully practical.

  6. Improving the Numerical Stability of Fast Matrix Multiplication

    DOE PAGES

    Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...

    2016-10-04

    Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
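Strassen's algorithm, the reference point for the fast methods analyzed here, trades the 8 recursive block products of classical multiplication for 7. A minimal pure-Python sketch for power-of-two sizes (real implementations switch to the classical algorithm below a cutoff size):

```python
def strassen(A, B):
    """Strassen's fast multiplication of two n x n matrices (lists of
    rows), n a power of two: 7 recursive products instead of 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def split(M):
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    # Reassemble the four half-size blocks into the full result.
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The extra additions and subtractions are exactly where the slightly weaker error bounds discussed in the abstract come from.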

  7. Generation of stable PDX derived cell lines using conditional reprogramming.

    PubMed

    Borodovsky, Alexandra; McQuiston, Travis J; Stetson, Daniel; Ahmed, Ambar; Whitston, David; Zhang, Jingwen; Grondine, Michael; Lawson, Deborah; Challberg, Sharon S; Zinda, Michael; Pollok, Brian A; Dougherty, Brian A; D'Cruz, Celina M

    2017-12-06

    Efforts to develop effective cancer therapeutics have been hindered by a lack of clinically predictive preclinical models which recapitulate this complex disease. Patient derived xenograft (PDX) models have emerged as valuable tools for translational research but have several practical limitations including lack of sustained growth in vitro. In this study, we utilized Conditional Reprogramming (CR) cell technology, a novel cell culture system facilitating the generation of stable cultures from patient biopsies, to establish PDX-derived cell lines which maintain the characteristics of the parental PDX tumor. Human lung and ovarian PDX tumors were successfully propagated using CR technology to create stable explant cell lines (CR-PDX). These CR-PDX cell lines maintained parental driver mutations and allele frequency without clonal drift. Purified CR-PDX cell lines were amenable to high throughput chemosensitivity screening and in vitro genetic knockdown studies. Additionally, re-implanted CR-PDX cells proliferated to form tumors that retained the growth kinetics, histology, and drug responses of the parental PDX tumor. CR technology can be used to generate and expand stable cell lines from PDX tumors without compromising fundamental biological properties of the model. It offers the ability to expand PDX cells in vitro for subsequent 2D screening assays as well as for use in vivo to reduce variability, animal usage and study costs. The methods and data detailed here provide a platform to generate physiologically relevant and predictive preclinical models to enhance drug discovery efforts.

  8. On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.

    1992-01-01

    We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies which can be useful measures for an online health monitoring algorithm. This paper extends previous work which has focused on off-line parameter estimation by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.

  9. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.

  10. Cone-beam reconstruction for the two-circles-plus-one-line trajectory

    NASA Astrophysics Data System (ADS)

    Lu, Yanbin; Yang, Jiansheng; Emerson, John W.; Mao, Heng; Zhou, Tie; Si, Yuanzheng; Jiang, Ming

    2012-05-01

    The Kodak Image Station In-Vivo FX has an x-ray module with cone-beam configuration for radiographic imaging but lacks the functionality of tomography. To introduce x-ray tomography into the system, we choose the two-circles-plus-one-line trajectory by mounting one translation motor and one rotation motor. We establish a reconstruction algorithm by applying the M-line reconstruction method. Numerical studies and preliminary physical phantom experiment demonstrate the feasibility of the proposed design and reconstruction algorithm.

  11. Ship detection in panchromatic images: a new method and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Wang, Mengfei; Meng, Gang

    2016-03-01

    In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. Firstly, AdaBoost (Adaptive Boosting) classifiers trained on Haar features are utilized for coarse detection of ship targets. Then the LSD (Line Segment Detector) is adopted to extract line features in target slices for fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2 m show that the proposed algorithm can achieve a high detection rate and a low false alarm rate. Meanwhile, the algorithm can meet the needs of practical applications on a DSP (Digital Signal Processor).

  12. Signature Verification Using N-tuple Learning Machine.

    PubMed

    Maneechot, Thanin; Kitjaidure, Yuttana

    2005-01-01

    This research presents a new algorithm for signature verification using an N-tuple learning machine. The features are taken from handwritten signatures captured on a digital tablet (on-line). The recognition algorithm uses four extracted features, namely horizontal and vertical pen tip position (x-y position), pen tip pressure, and pen altitude angles. Verification uses the N-tuple technique with Gaussian thresholding.
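An N-tuple learning machine (in the WISARD tradition) memorizes the sub-patterns seen at randomly chosen feature positions. A generic sketch on binarized feature vectors, with all names, sizes, and data illustrative rather than taken from the paper:

```python
import random

def train_ntuple(samples, n_bits, n_tuples=8, tuple_size=3, seed=0):
    """Train a WISARD-style N-tuple classifier on binary feature vectors.
    samples: dict mapping a class label to a list of bit-lists."""
    rng = random.Random(seed)
    # Each tuple is a fixed random choice of feature positions.
    tuples = [tuple(rng.sample(range(n_bits), tuple_size))
              for _ in range(n_tuples)]
    # Per class, per tuple: the set of sub-patterns seen during training.
    memory = {label: [set() for _ in tuples] for label in samples}
    for label, vectors in samples.items():
        for v in vectors:
            for t, addresses in zip(tuples, memory[label]):
                addresses.add(tuple(v[i] for i in t))
    return tuples, memory

def classify(x, tuples, memory):
    """Return the class whose memorized sub-patterns best match x."""
    def score(label):
        return sum(tuple(x[i] for i in t) in s
                   for t, s in zip(tuples, memory[label]))
    return max(memory, key=score)

# Toy data: "genuine" signatures binarize to ones, "forgery" to zeros.
samples = {'genuine': [[1] * 8, [1] * 8], 'forgery': [[0] * 8]}
tuples, memory = train_ntuple(samples, n_bits=8)
print(classify([1] * 8, tuples, memory))  # genuine
```

In a real verification pipeline the bits would come from quantized pen position, pressure, and altitude-angle trajectories, and acceptance would use a score threshold rather than an argmax over classes.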

  13. Validation of a Quantitative Single-Subject Based Evaluation for Rehabilitation-Induced Improvement Assessment.

    PubMed

    Gandolla, Marta; Molteni, Franco; Ward, Nick S; Guanziroli, Eleonora; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2015-11-01

    The foreseen outcome of a rehabilitation treatment is a stable improvement in functional outcomes, which can be assessed longitudinally through multiple measures to help clinicians in functional evaluation. In this study, we propose an automatic, comprehensive method of combining multiple measures in order to assess a functional improvement. As a test bed, a functional electrical stimulation based treatment for foot drop correction performed with chronic post-stroke participants is presented. Patients were assessed on five relevant outcome measures before and after the intervention, and at a follow-up time point. A novel algorithm based on the variables' minimum detectable change is proposed and implemented in custom-made software, combining the outcome measures to obtain a single parameter: the capacity score. The difference between capacity scores at different time points is thresholded to obtain the improvement evaluation. Ten clinicians evaluated patients on the Improvement Clinical Global Impression scale. Eleven patients underwent the treatment, and five were found to achieve a stable functional improvement, as assessed by the proposed algorithm. A statistically significant agreement between intra-clinician and algorithm-clinician evaluations was demonstrated. The proposed method evaluates functional improvement on a single-subject, yes/no basis by merging different measures (e.g., kinematic, muscular), and it is validated against clinical evaluation.
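A minimal sketch of the minimum-detectable-change idea: a measure counts as improved only if its change exceeds that measure's MDC. The combination rule, threshold, and all measure names below are assumptions for illustration, not the paper's actual formula:

```python
def capacity_score(pre, post, mdc):
    """Fraction of outcome measures whose pre-to-post change exceeds the
    measure's minimum detectable change (MDC). Hypothetical sketch of a
    capacity-score combination; the paper's exact weighting may differ."""
    improved = sum(1 for m in mdc if post[m] - pre[m] > mdc[m])
    return improved / len(mdc)

def stable_improvement(pre, post, mdc, threshold=0.5):
    """Yes/no single-subject decision: capacity score thresholded
    (the 0.5 threshold is assumed)."""
    return capacity_score(pre, post, mdc) >= threshold

# Illustrative measures: two changes exceed their MDC, one does not.
pre = {'speed': 0.60, 'force': 12.0, 'rom': 30.0}
post = {'speed': 0.75, 'force': 15.0, 'rom': 31.0}
mdc = {'speed': 0.10, 'force': 2.0, 'rom': 5.0}
print(round(capacity_score(pre, post, mdc), 3))  # 0.667
print(stable_improvement(pre, post, mdc))        # True
```

Using each measure's MDC, rather than a fixed percentage change, keeps measurement noise from being mistaken for treatment effect.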

  14. Topology-Scaling Identification of Layered Solids and Stable Exfoliated 2D Materials.

    PubMed

    Ashton, Michael; Paul, Joshua; Sinnott, Susan B; Hennig, Richard G

    2017-03-10

    The Materials Project crystal structure database has been searched for materials possessing layered motifs in their crystal structures using a topology-scaling algorithm. The algorithm identifies and measures the sizes of bonded atomic clusters in a structure's unit cell, and determines their scaling with cell size. The search yielded 826 stable layered materials that are considered as candidates for the formation of two-dimensional monolayers via exfoliation. Density-functional theory was used to calculate the exfoliation energy of each material and 680 monolayers emerge with exfoliation energies below those of already-existent two-dimensional materials. The crystal structures of these two-dimensional materials provide templates for future theoretical searches of stable two-dimensional materials. The optimized structures and other calculated data for all 826 monolayers are provided at our database (https://materialsweb.org).
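The component-finding core of such a topology-scaling search can be sketched as a breadth-first traversal of the bond graph; the parts that actually distinguish layers, namely periodic images and the test of how cluster size scales with the supercell, are omitted here:

```python
from collections import deque

def cluster_sizes(n_atoms, bonds):
    """Sizes of bonded atomic clusters, found by breadth-first search
    over an undirected bond graph on atoms 0..n_atoms-1."""
    adj = {i: set() for i in range(n_atoms)}
    for a, b in bonds:
        adj[a].add(b)
        adj[b].add(a)
    seen, sizes = set(), []
    for start in range(n_atoms):
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:            # grow one cluster to completion
            atom = queue.popleft()
            size += 1
            for nb in adj[atom]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        sizes.append(size)
    return sorted(sizes)

# Two disconnected bonded groups of 3 and 2 atoms:
print(cluster_sizes(5, [(0, 1), (1, 2), (3, 4)]))  # [2, 3]
```

In the full algorithm, a cluster whose atom count grows with the cell area rather than the cell volume signals a two-dimensional, layer-like motif.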

  15. On-Line Analysis of Southern FIA Data

    Treesearch

    Michael P. Spinney; Paul C. Van Deusen; Francis A. Roesch

    2006-01-01

    The Southern On-Line Estimator (SOLE) is a web-based FIA database analysis tool designed with an emphasis on modularity. The Java-based user interface is simple and intuitive to use and the R-based analysis engine is fast and stable. Each component of the program (data retrieval, statistical analysis and output) can be individually modified to accommodate major...

  16. Optimal line drop compensation parameters under multi-operating conditions

    NASA Astrophysics Data System (ADS)

    Wan, Yuan; Li, Hang; Wang, Kai; He, Zhe

    2017-01-01

    Line Drop Compensation (LDC) is a main function of Reactive Current Compensation (RCC), which is developed to improve voltage stability. While LDC benefits voltage, it may deteriorate the small-disturbance rotor angle stability of the power system. In the present paper, an intelligent algorithm combining a Genetic Algorithm (GA) and a Backpropagation Neural Network (BPNN) is proposed to optimize the parameters of LDC. The proposed objective function takes into consideration the voltage deviation and the minimal damping ratio of power system oscillation under multi-operating conditions. A simulation based on the middle-area power system of Jiangxi province is used to demonstrate the intelligent algorithm. The optimization result shows that the coordinately optimized parameters can meet the multi-operating-condition requirements and improve voltage stability as much as possible while guaranteeing a sufficient damping ratio.

  17. Robust automatic line scratch detection in films.

    PubMed

    Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick

    2014-03-01

    Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters is ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.
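As a toy baseline for what frame-by-frame detection must accomplish, columns that deviate strongly from their neighbors can be flagged with robust statistics. This sketch is emphatically not the paper's a-contrario method, which also uses local estimation and temporal motion coherence to reject false detections:

```python
import statistics

def detect_vertical_scratches(frame, k=3.0):
    """Naive vertical-scratch detector on a grayscale frame (list of
    rows): flag columns whose mean intensity deviates from the median
    column mean by more than k robust standard deviations."""
    col_means = [sum(col) / len(col) for col in zip(*frame)]
    med = statistics.median(col_means)
    mad = statistics.median(abs(c - med) for c in col_means) or 1e-9
    sigma = 1.4826 * mad  # scale MAD to a std for Gaussian-like data
    return [x for x, c in enumerate(col_means) if abs(c - med) > k * sigma]

# Synthetic 15x20 frame with a bright scratch injected at column 7:
frame = [[10.0] * 20 for _ in range(15)]
for row in frame:
    row[7] = 200.0
print(detect_vertical_scratches(frame))  # [7]
```

A detector this simple would fire on any thin vertical structure in the scene, which is exactly the over-detection problem the paper's statistical framework is designed to control.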

  18. Grounding Lines Detecting Using LANDSAT8 Oli and CRYOSAT-2 Data Fusion

    NASA Astrophysics Data System (ADS)

    Li, F.; Guo, Y.; Zhang, Y.; Zhang, S.

    2018-04-01

    The grounding zone is the region where ice transitions from grounded ice sheet to freely floating ice shelf; grounding lines are in practice a zone, typically several kilometers wide. The mass loss from Antarctica is strongly linked to changes in the ice shelves and their grounding lines, since variation in the grounding line can produce very rapid changes in glacier and ice-shelf behavior. Based on remote sensing observations, five Antarctic-wide grounding line products have been released internationally, including MOA, ASAID, ICESat, MEaSUREs, and Synthesized grounding lines. However, these five products do not provide annual grounding line coverage for the whole of Antarctica, and some have stopped updating, which limits time-series analysis of the Antarctic mass balance to a certain extent. Besides, the accuracy of grounding line products based on a single remote-sensing data source is far from satisfactory. Therefore, we use algorithms to extract grounding lines from SAR and CryoSat-2 data respectively, and combine the two kinds of grounding lines to obtain new products; the resulting extraction workflow will allow the Antarctic grounding line to be derived for each year in the future. Comparison between the fusion results and the MOA product indicates a maximum deviation of 188.67 meters between the two.

  19. Numerical investigation of field enhancement by metal nano-particles using a hybrid FDTD-PSTD algorithm.

    PubMed

    Pernice, W H; Payne, F P; Gallagher, D F

    2007-09-03

    We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry-conforming mesh, the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
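
    The FDTD half of such a hybrid can be illustrated by the textbook leapfrog Yee update. This is a generic, normalized 1-D free-space sketch at the Courant limit (S = 1) showing the bounded, spike-free time stepping the abstract refers to; it is not the authors' dispersive FDTD-PSTD subgridding solver.

```python
import math

def fdtd_1d(n_cells=200, n_steps=150):
    """Leapfrog Yee updates in 1-D, normalized units, Courant number S = 1."""
    ez = [math.exp(-((i - 50) / 10.0) ** 2) for i in range(n_cells)]  # Gaussian pulse
    hy = [0.0] * n_cells
    for _ in range(n_steps):
        for i in range(n_cells - 1):          # H update (staggered half step)
            hy[i] += ez[i + 1] - ez[i]
        for i in range(1, n_cells):           # E update
            ez[i] += hy[i] - hy[i - 1]
        ez[0] = 0.0                           # crude PEC boundary at the left edge
    return ez

ez = fdtd_1d()
```

    The initial pulse splits into two half-amplitude counter-propagating pulses; at S = 1 the field stays bounded for arbitrarily long runs, which is the stability property the hybrid scheme preserves on its conforming mesh.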

  20. Stable Kalman filters for processing clock measurement data

    NASA Technical Reports Server (NTRS)

    Clements, P. A.; Gibbs, B. P.; Vandergraft, J. S.

    1989-01-01

    Kalman filters have been used for some time to process clock measurement data. Due to instabilities in the standard Kalman filter algorithms, the results have been unreliable and difficult to obtain. During the past several years, stable forms of the Kalman filter have been developed, implemented, and used in many diverse applications. These algorithms, while algebraically equivalent to the standard Kalman filter, exhibit excellent numerical properties. Two of these stable algorithms, the Upper triangular-Diagonal (UD) filter and the Square Root Information Filter (SRIF), have been implemented to replace the standard Kalman filter used to process data from the Deep Space Network (DSN) hydrogen maser clocks. The data are time offsets between the clocks in the DSN, the timescale at the National Institute of Standards and Technology (NIST), and two geographically intermediate clocks. The measurements are made by using the GPS navigation satellites in mutual view between clocks. The filter programs allow the user to easily modify the clock models, the GPS satellite dependent biases, and the random noise levels in order to compare different modeling assumptions. The results of this study show the usefulness of such software for processing clock data. The UD filter is indeed a stable, efficient, and flexible method for obtaining optimal estimates of clock offsets, offset rates, and drift rates. A brief overview of the UD filter is also given.
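
    The numerical idea behind the UD filter can be illustrated by the factorization it maintains: the covariance is carried as P = U·diag(d)·Uᵀ with U unit upper-triangular, so the filter never forms P directly and cannot lose its positive-definiteness to round-off. A minimal sketch of the factorization step, in the spirit of Bierman's algorithm (not the DSN mission code):

```python
def ud_factorize(P):
    """Factor a symmetric positive-definite matrix as P = U diag(d) U^T,
    with U unit upper-triangular -- the representation the UD filter propagates."""
    n = len(P)
    P = [row[:] for row in P]               # work on a copy
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        d[j] = P[j][j]
        for k in range(j):
            U[k][j] = P[k][j] / d[j]
        for k in range(j):                  # downdate the leading submatrix
            for i in range(k + 1):
                P[i][k] -= U[i][j] * d[j] * U[k][j]
    return U, d
```

    Reconstructing U·diag(d)·Uᵀ recovers P exactly; the filter's measurement and time updates then operate on U and d directly, which is what gives the UD form its excellent numerical properties relative to the standard covariance recursion.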

  1. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC (modulation transfer function compensation) algorithm for a space remote-sensing camera, implemented on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module computes the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC filtering module performs the image filtering and effectively suppresses noise. System Generator was used to implement the image processing algorithms, simplifying the system design structure and the redesign process. The gray-level gradient, dot sharpness, edge contrast, and mid-to-high spatial frequencies of the image were enhanced, while the SNR of the restored image decreased by less than 1 dB relative to the original image. The image restoration system can be widely used in various fields.

  2. On-line failure detection and damping measurement of aerospace structures by random decrement signatures

    NASA Technical Reports Server (NTRS)

    Cole, H. A., Jr.

    1973-01-01

    Random decrement signatures of structures vibrating in a random environment are studied through use of computer-generated and experimental data. Statistical properties obtained indicate that these signatures are stable in form and scale and hence should have wide application in on-line failure detection and damping measurement. On-line procedures are described and equations for estimating record-length requirements to obtain signatures of a prescribed precision are given.

  3. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    PubMed

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined if the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the BMO location variation over time was assessed longitudinally on the aligned images in 3D space point by point in x, y, and z directions. Mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time for all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation and 98.1% and 1.9%, respectively, for the automated algorithm. Bruch's membrane opening location was stable in normal and progressing glaucoma eyes with follow-up between 3 and 4 years, indicating that it can be used as a reference point in monitoring glaucoma progression. The BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  4. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination.

    PubMed

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data-analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithms development. The novel diagnostic scheme developed implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification as well as for organ specific probabilistic diagnostics using different diagnostic algorithms. Free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide the diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms are further applied prospectively on 10 gastric patients at gastroscopy, achieving the predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. 
The receiver operating characteristics curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.

  5. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination

    NASA Astrophysics Data System (ADS)

    Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei

    2012-08-01

    Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data-analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithms development. The novel diagnostic scheme developed implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification as well as for organ specific probabilistic diagnostics using different diagnostic algorithms. Free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide the diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms are further applied prospectively on 10 gastric patients at gastroscopy, achieving the predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. 
The receiver operating characteristics curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.

  6. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end, in supervised mode, to map raw input images to steering directions. The images in the data sets were collected in a wide variety of weather and lighting conditions, and the data sets were augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment tracks a desired path composed of straight and curved lines; the goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and accurately avoid obstacles in the room. The result confirms the effectiveness of the algorithm and our improvements in the network structure and training parameters.

  7. Follow-up segmentation of lung tumors in PET and CT data

    NASA Astrophysics Data System (ADS)

    Opfer, Roland; Kabus, Sven; Schneider, Torben; Carlsen, Ingwer C.; Renisch, Steffen; Sabczynski, Jörg

    2009-02-01

    Early response assessment of cancer therapy is a crucial component towards a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. We have developed algorithms which allow the user to track both tumor volume and standardized uptake value (SUV) measurements during the therapy from series of CT and PET images, respectively. To prepare for tumor volume estimation we have developed a new technique for a fast, flexible, and intuitive 3D definition of meshes. This initial surface is then automatically adapted by means of a model-based segmentation algorithm and propagated to each follow-up scan. If necessary, manual corrections can be added by the user. To determine SUV measurements, a prioritized region growing algorithm is employed. For an improved workflow all algorithms are embedded in a PET/CT therapy monitoring software suite giving the clinician unified and immediate access to all data sets. Whenever the user clicks on a tumor in a baseline scan, the courses of segmented tumor volumes and SUV measurements are automatically identified and displayed to the user as a graph plot. According to each course, the therapy progress can be classified as complete or partial response or as progressive or stable disease. We have tested our methods with series of PET/CT data from 9 lung cancer patients acquired at Princess Margaret Hospital in Toronto. Each patient underwent three PET/CT scans during radiation therapy. Our results indicate that combining mean metabolic activity in the tumor with the PET-based tumor volume can lead to earlier response detection than purely volume-based (CT diameter) or purely functional (e.g., SUVmax or SUVmean) response measures. The new software seems applicable for easy, fast, and reproducible quantification for routine monitoring of tumor therapy.
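
    The course classification mentioned above can be sketched with simple percentage-change rules. The thresholds below are illustrative assumptions (loosely inspired by RECIST-style criteria), not the rules used in the described software, and not clinical guidance.

```python
def classify_response(baseline_volume, followup_volume,
                      pr_drop=0.30, pd_rise=0.20, cr_floor=0.05):
    """Classify therapy progress from a tumor-volume course.
    Thresholds are illustrative assumptions, not clinical guidance."""
    if followup_volume <= cr_floor * baseline_volume:
        return "complete response"          # tumor essentially gone
    change = (followup_volume - baseline_volume) / baseline_volume
    if change <= -pr_drop:
        return "partial response"           # shrank markedly
    if change >= pd_rise:
        return "progressive disease"        # grew markedly
    return "stable disease"                 # within the stable band
```

    The same rule can be applied to any tracked course (PET-based volume, SUVmean, or a combination), which is how the plotted courses translate into a response category.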

  8. Voltage scheduling for low power/energy

    NASA Astrophysics Data System (ADS)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power is quadratically related to voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited sized buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for the processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited sized buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput.
Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial time resource and latency constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
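
    The quadratic voltage-power relation that motivates the whole approach is easy to make concrete. In the usual first-order CMOS model, dynamic energy per cycle scales as V², and frequency scales roughly linearly with V, so stretching work to fill available slack saves energy quadratically. The sketch below uses that idealized model (normalized effective capacitance, linear V-f assumption); it is an illustration of the principle, not the dissertation's Lagrangian scheduler.

```python
def energy(cycles, voltage):
    """Idealized CMOS dynamic energy: E = C_eff * cycles * V^2 (C_eff normalized to 1)."""
    return cycles * voltage ** 2

def stretched_energy(cycles, deadline, f_max, v_max):
    """Run just fast enough to meet the deadline, assuming f scales linearly with V."""
    f_needed = cycles / deadline                   # minimum frequency meeting the deadline
    v_needed = v_max * min(1.0, f_needed / f_max)  # linear V-f assumption
    return energy(cycles, v_needed)
```

    A task of 10⁶ cycles with a 2 s deadline on a 1 MHz, 1 V processor can run at half frequency and half voltage, consuming one quarter of the energy of running at full speed and then idling.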

  9. Research and implementation of finger-vein recognition algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin

    2017-06-01

    In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray projection method. Inspired by the fact that features in vein areas have an appearance similar to valleys, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients, which is computationally simple, fast, and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image; this algorithm effectively avoids errors in extracting texture at the edges. Finally, the system achieves high robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture extraction efficiency, matching accuracy, and algorithm efficiency.

  10. Decomposing the electromagnetic response of magnetic dipoles to determine the geometric parameters of a dipole conductor

    NASA Astrophysics Data System (ADS)

    Desmarais, Jacques K.; Smith, Richard S.

    2016-03-01

    A novel automatic data interpretation algorithm is presented for modelling airborne electromagnetic (AEM) data acquired over resistive environments, using a single-component (vertical) transmitter, where the position and orientation of a dipole conductor are allowed to vary in three dimensions. The algorithm assumes that the magnetic fields produced by compact vortex currents can be expressed as a linear combination of the fields arising from subsurface dipoles oriented parallel to the [1, 0, 0], [0, 1, 0], and [0, 0, 1] unit vectors. In this manner, AEM responses can be represented as 12 terms. The relative size of each term in the decomposition can be used to determine geometrical information about the orientation of the subsurface conductivity structure. The geometrical parameters of the dipole (location, depth, dip, strike) are estimated using a combination of a look-up table and a matrix inverted in a least-squares sense. Tests on 703 synthetic models show that the algorithm is capable of extracting most of the correct geometrical parameters of a dipole conductor when three-component receiver data are included in the interpretation procedure. The algorithm is unstable when the target is perfectly horizontal, as the strike is then undefined. Ambiguities may occur in predicting the orientation of the dipole conductor if y-component data are excluded from the analysis. Application of our approach to an anomaly on line 15 of the Reid Mahaffy test site yields geometrical parameters in reasonable agreement with previous authors. However, our algorithm provides additional information on the strike and offset of the conductor from the traverse line. Disparities in the values of predicted dip and depth are within the range of numerical precision. The index of fit was better when strike and offset were included in the interpretation procedure. Tests on data from line 15701 of the Chibougamau MEGATEM survey show that the algorithm is applicable to situations where three-component AEM data are available.
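
    The linearity underlying the decomposition can be shown directly: the field of an arbitrarily oriented dipole m equals the linear combination of the fields of unit dipoles along x, y, and z, weighted by the components of m. The sketch uses the standard free-space point-dipole formula with the constant factor dropped; it illustrates the decomposition principle only, not the paper's inversion code.

```python
import math

def dipole_field(m, r):
    """Field of a point dipole m observed at offset r:
    B ∝ (3 r̂ (m·r̂) - m) / |r|^3   (the μ0/4π constant is dropped)."""
    rn = math.sqrt(sum(c * c for c in r))
    rhat = [c / rn for c in r]
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    return [(3.0 * mdotr * rh - mi) / rn ** 3 for rh, mi in zip(rhat, m)]

def decomposed_field(m, r):
    """Same field assembled as m_x·B(e_x) + m_y·B(e_y) + m_z·B(e_z)."""
    basis = ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0])
    fields = [dipole_field(e, r) for e in basis]
    return [sum(m[k] * fields[k][i] for k in range(3)) for i in range(3)]
```

    Because the response is linear in the dipole moment, fitting the weights of the unit-dipole fields (at each of several positions, hence the 12 terms) recovers the orientation information described in the abstract.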

  11. Optimal design of the rotor geometry of line-start permanent magnet synchronous motor using the bat algorithm

    NASA Astrophysics Data System (ADS)

    Knypiński, Łukasz

    2017-12-01

    In this paper an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. As the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver, and a module including the mathematical model of a synchronous motor with self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor was developed in the Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculation are presented and compared with results for the particle swarm optimization algorithm.
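
    A minimal version of the bat algorithm procedure can be sketched as follows. The sphere test function, parameter values, and simplifications (fixed loudness and pulse rate, small local walk) are assumptions for illustration; the paper's solver evaluates a finite-element motor model instead of an analytic objective.

```python
import random

def bat_algorithm(obj, dim=2, n_bats=20, n_iter=200,
                  lb=-5.0, ub=5.0, f_min=0.0, f_max=2.0,
                  loudness=0.5, pulse_rate=0.5, seed=1):
    """Minimal bat algorithm: frequency-tuned velocity updates toward the best bat,
    plus an occasional local random walk accepted with probability `loudness`."""
    rng = random.Random(seed)
    x = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    fit = [obj(p) for p in x]
    best = min(range(n_bats), key=lambda i: fit[i])
    best_x, best_f = x[best][:], fit[best]
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] = [vi + (xi - bi) * freq for vi, xi, bi in zip(v[i], x[i], best_x)]
            cand = [min(ub, max(lb, xi + vi)) for xi, vi in zip(x[i], v[i])]
            if rng.random() > pulse_rate:   # local walk around the current best
                cand = [min(ub, max(lb, bi + 0.01 * rng.gauss(0, 1))) for bi in best_x]
            f_cand = obj(cand)
            if f_cand < fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, f_cand
            if f_cand < best_f:             # global best is monotone non-increasing
                best_x, best_f = cand[:], f_cand
    return best_x, best_f
```

    Replacing the sphere function with a call into the motor model (one Ansys Maxwell evaluation per candidate) gives the structure described in the abstract: a population-based solver wrapped around an expensive black-box simulation.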

  12. An ultrashort-pulse reconstruction software: GROG, applied to the FLAME laser system

    NASA Astrophysics Data System (ADS)

    Galletti, Mario

    2016-03-01

    The GRENOUILLE traces of FLAME Probe line pulses (60 mJ, 10 mJ after compression, 70 fs, 1 cm FWHM, 10 Hz) were acquired in the FLAME Front End Area (FFEA) at the Laboratori Nazionali di Frascati (LNF), Istituto Nazionale di Fisica Nucleare (INFN). The complete characterization of the laser pulse parameters was made using a new algorithm, GRenouille/FrOG (GROG); a characterization with a commercial algorithm, QuickFrog, was also made. The temporal and spectral parameters were in close agreement between the two algorithms. In this experimental campaign the Probe line of FLAME was completely characterized, and GROG, the newly developed algorithm, was shown to perform as well as the QuickFrog algorithm on this class of pulses.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. Then the structure learning algorithm is extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  14. Online automatic surface quality inspection system for hot rolled steel strips

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Xie, Zhi-jiang; Wang, Xue; Sun, Nan-Nan

    2005-12-01

    Defects on the surface of hot-rolled steel strips are a main factor in evaluating strip quality. An improved image recognition algorithm is used to extract the features of surface defects, and a defect recognition method based on machine vision and artificial neural networks is established to classify them. On this basis, a surface inspection system with advanced image processing algorithms for hot-rolled strips is developed. Two different lighting arrangements are prepared, and line-scan CCD cameras acquire images of the strip surface on-line. An on-line diagnosis capability grades the strip surface, analyzing and evaluating defects such as iron oxide scale, scratches, and stamp marks on the top and bottom surfaces. The false detection and misclassification rates do not exceed 5%, and the system has been applied at several large domestic steel enterprises. Experiments prove that this approach is feasible and effective.

  15. Research of grasping algorithm based on scara industrial robot

    NASA Astrophysics Data System (ADS)

    Peng, Tao; Zuo, Ping; Yang, Hai

    2018-04-01

    As the tobacco industry grows and faces competition from international tobacco giants, efficient logistics service is one of the key factors, and economical, efficient tobacco sorting is the goal of sorting optimization research. Existing cigarette distribution systems use a single line to sort a single brand; this article uses a single line to sort cigarettes of different brands. A dedicated algorithm for a SCARA robot performs the sorting and packaging, and the optimized scheme significantly improves the performance indicators of the cigarette sorting system, saving labor and clearly improving production efficiency.

  16. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
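
    At its core, LIC convolves a noise texture along local streamlines of the field: each output pixel is an average of noise values gathered by integrating the vector field forward and backward from that pixel. The following is a heavily simplified sketch of that idea (a 2-D slice, unit Euler steps, nearest-neighbor sampling, box kernel, and an illustrative circular field), not the paper's volume-rendering or dye-advection pipeline.

```python
import math, random

def lic(vx, vy, noise, length=5):
    """Line Integral Convolution: average the noise texture along the streamline
    through each pixel (both directions), unit steps, nearest-neighbor lookup."""
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):            # integrate forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    total += noise[i][j]
                    count += 1
                    u, v = vx[i][j], vy[i][j]
                    norm = math.hypot(u, v) or 1.0
                    px += sign * u / norm       # one-pixel Euler step along the flow
                    py += sign * v / norm
            out[y][x] = total / count if count else 0.0
    return out

rng = random.Random(0)
n = 32
noise = [[rng.random() for _ in range(n)] for _ in range(n)]
vx = [[-(y - n / 2) for _ in range(n)] for y in range(n)]   # circular field
vy = [[x - n / 2 for x in range(n)] for _ in range(n)]
image = lic(vx, vy, noise)
```

    Correlating the noise along streamlines is what produces the characteristic streaked texture; the dye technique in the paper then overlays colored values advected by the same field to highlight local features.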

  17. Production of lentiviral vectors

    PubMed Central

    Merten, Otto-Wilhelm; Hebben, Matthias; Bovolenta, Chiara

    2016-01-01

    Lentiviral vectors (LV) have seen a considerable increase in use as gene therapy vectors for the treatment of acquired and inherited diseases. This review presents the state of the art of the production of these vectors, with particular emphasis on their large-scale production for clinical purposes. In contrast to oncoretroviral vectors, which are produced using stable producer cell lines, clinical-grade LV are in most cases produced by transient transfection of 293 or 293T cells grown in cell factories. However, more recent developments also tend to use hollow-fiber reactors, suspension culture processes, and the implementation of stable producer cell lines. As is customary for the biotech industry, rather sophisticated downstream processing protocols have been established to remove any undesirable process-derived contaminant, such as plasmid or host cell DNA or host cell proteins. This review compares published large-scale production and purification processes of LV and presents their process performances. Furthermore, developments in the domain of stable cell lines and their path toward use as production vehicles for clinical material will be presented. PMID:27110581

  18. Computational Aerothermodynamics in Aeroassist Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.
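
    A line-implicit strategy treats one grid line implicitly at a time, so each sweep reduces to a tridiagonal linear solve along that line. The classic Thomas algorithm that makes such sweeps O(n) can be sketched as follows; this is a generic sketch of the building block, not the flow solver's actual relaxation code.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused), b = diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side. O(n) per grid line."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]                      # forward elimination
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]                           # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    Point-implicit updates trade this coupling away for robustness during initialization; once the solution is roughly established, sweeping line solves like this one converge faster, which is the point-implicit-then-line-implicit strategy the abstract describes.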

  19. Segmentation and classification of cell cycle phases in fluorescence imaging.

    PubMed

    Ersoy, Ilker; Bunyak, Filiz; Chagin, Vadim; Cardoso, M Christina; Palaniappan, Kannappan

    2009-01-01

    Current chemical biology methods for studying spatiotemporal correlation between biochemical networks and cell cycle phase progression in live cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including the progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes, and an abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high-content microscopy imaging. We utilize surface shape properties of the GFP-PCNA intensity field to obtain descriptors of foci patterns and perform automated cell cycle phase classification, and give quantitative performance by comparing our results to manually labeled data.

  20. TIGER: A graphically interactive grid system for turbomachinery applications

    NASA Technical Reports Server (NTRS)

    Shih, Ming-Hsin; Soni, Bharat K.

    1992-01-01

    A numerical grid generation algorithm for flow fields about turbomachinery geometries is presented. A graphical user interface is developed with the FORMS Library to create an interactive, user-friendly working environment. This customized algorithm reduces the man-hours required to generate a grid associated with turbomachinery geometry, as compared to the use of general-purpose grid generation software. Bezier curves are utilized both interactively and automatically to accomplish grid line smoothness and orthogonality. Graphical user interactions are provided in the algorithm, allowing the user to design and manipulate the grid lines with a mouse.
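
    The Bezier machinery used for grid-line smoothness can be illustrated with de Casteljau's scheme, the standard way to evaluate a Bezier curve defined by control points via repeated linear interpolation. This is a generic sketch with hypothetical control points, not TIGER's own FORTRAN routines.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # each pass interpolates between consecutive points at parameter t
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

    Dragging a control point with the mouse reshapes the sampled grid line smoothly while the endpoints stay pinned, which is exactly the interactive behavior the abstract describes.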

  1. Algorithm Summary and Evaluation: Automatic Implementation of Ringdown Analysis for Electromechanical Mode Identification from Phasor Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.

    2010-02-28

    Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and promptly to the ringdown data, so that mode estimation can be performed reliably and in a timely manner. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
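    The linear-prediction step at the heart of Prony analysis is compact enough to sketch. The following is a minimal single-mode version, not the paper's recursive multi-mode implementation: fit y[k] = a1*y[k-1] + a2*y[k-2] by least squares, then read the mode frequency and damping off the roots of the characteristic polynomial. All names and numbers are illustrative.

```python
import cmath
import math

def prony_single_mode(y, dt):
    """Estimate frequency (Hz) and damping (1/s) of one dominant
    electromechanical mode from ringdown samples y, via 2nd-order
    linear prediction (the core step of Prony analysis)."""
    # Normal equations for y[k] = a1*y[k-1] + a2*y[k-2]
    s11 = s12 = s22 = b1 = b2 = 0.0
    for k in range(2, len(y)):
        s11 += y[k-1]*y[k-1]; s12 += y[k-1]*y[k-2]; s22 += y[k-2]*y[k-2]
        b1 += y[k]*y[k-1];    b2 += y[k]*y[k-2]
    det = s11*s22 - s12*s12
    a1 = (b1*s22 - b2*s12) / det
    a2 = (s11*b2 - s12*b1) / det
    # Roots of z^2 - a1*z - a2 = 0 are the discrete-time mode poles
    disc = cmath.sqrt(a1*a1 + 4.0*a2)
    z = (a1 + disc) / 2.0
    lam = cmath.log(z) / dt                  # continuous-time pole
    return abs(lam.imag) / (2*math.pi), lam.real  # (freq in Hz, damping in 1/s)
```

    On a noiseless ringdown the second-order recurrence holds exactly, so the estimates are exact up to floating-point error; real PMU data would need the detection and windowing logic the paper describes.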

  2. Design of rapid prototype of UAV line-of-sight stabilized control system

    NASA Astrophysics Data System (ADS)

    Huang, Gang; Zhao, Liting; Li, Yinlong; Yu, Fei; Lin, Zhe

    2018-01-01

    The line-of-sight (LOS) stabilized platform is a key technology of UAVs (unmanned aerial vehicles), reducing the degradation of imaging quality caused by vibration and maneuvering of the aircraft. According to the requirements of the LOS stabilization system (a combined inertial and optical-mechanical method) and the UAV's structure, a rapid prototype is designed based on an industrial computer, using Peripheral Component Interconnect (PCI) and Windows RTX to exchange information. The paper presents the control structure and the circuit system, including the inertial stabilization control circuit with gyro and voice-coil-motor driver circuit, the optical-mechanical stabilization control circuit with fast-steering-mirror (FSM) driver circuit and image-deviation acquisition system, the outer-frame rotary follower, and the information-exchange system on the PC. Test results show that the stabilization accuracy reaches 5 μrad, proving the effectiveness of the combined line-of-sight stabilization control system, and the real-time rapid prototype runs stably.

  3. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
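    A common way to realise the consensus step that CC-style methods build on is a co-association matrix: cluster the data several times, count how often each pair of items lands in the same cluster, and link pairs that co-occur in a majority of partitions. A minimal sketch of that idea (not the MOCC algorithm itself, which adds an optimised agreement-separation criterion and a multi-optimisation framework):

```python
def consensus_clusters(partitions, threshold=0.5):
    """Combine several clusterings (each a list of labels) via a
    co-association matrix: items co-clustered in more than `threshold`
    of the partitions are linked, and connected components of the
    resulting link graph are the consensus clusters."""
    n = len(partitions[0])
    link = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            # fraction of partitions that place i and j together
            votes = sum(p[i] == p[j] for p in partitions)
            if votes / len(partitions) > threshold:
                link[i].add(j); link[j].add(i)
    # connected components by depth-first search
    seen, clusters = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop(); comp.append(u)
            for v in link[u]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        clusters.append(sorted(comp))
    return clusters
```

    Note the comparison `p[i] == p[j]` is invariant to label permutations between partitions, which is what makes the co-association count meaningful.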

  4. Longitudinal Control for Mengshi Autonomous Vehicle via Cloud Model

    NASA Astrophysics Data System (ADS)

    Gao, H. B.; Zhang, X. Y.; Li, D. Y.; Liu, Y. C.

    2018-03-01

    Dynamic robustness and stability control is a requirement for self-driving autonomous vehicles. Longitudinal control of autonomous vehicles is a key technique that has drawn the attention of industry and academia. In this paper, we present a longitudinal control algorithm based on a cloud model for the Mengshi autonomous vehicle to ensure its dynamic stability and tracking performance. An experiment is conducted to test the implementation of the longitudinal control algorithm. Empirical results show that when the longitudinal control algorithm based on the Gauss cloud model is applied to calculate the acceleration and the vehicle drives at different speeds, a stable longitudinal control effect is achieved.

  5. A provisional effective evaluation when errors are present in independent variables

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.

    1983-01-01

    Algorithms are examined for evaluating the parameters of a regression model when there are errors in the independent variables. The algorithms are fast and the estimates they yield are stable with respect to the correlation of errors and measurements of both the dependent variable and the independent variables.

  6. Research of digital controlled DC/DC converter based on STC12C5410AD

    NASA Astrophysics Data System (ADS)

    Chen, Dan-Jiang; Jin, Xin; Xiao, Zhi-Hong

    2010-02-01

    In order to study the application of digital control technology to DC/DC converters, the principle of the increment-mode PID control algorithm is analyzed in this paper. Then, a single-chip microcomputer, the STC12C5410AD, is introduced along with its internal resources and characteristics; the PID control algorithm can be implemented easily on it. The PID output was used to change the value of a variable equal to 255 times the duty cycle, which reduced the calculation error. The validity of the presented algorithm was verified by an experiment on a buck DC/DC converter. The experimental results indicate that the output voltage of the buck converter is stable with low ripple.
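    The increment-mode PID law referred to above computes a change in output each step, du[k] = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2e[k-1]+e[k-2]), which suits a small MCU because only the two most recent errors must be stored and there is no separate integrator to wind up. A minimal sketch; the gains and the toy converter model in the test are illustrative, not from the paper:

```python
class IncrementalPID:
    """Increment-mode PID: each update adds
    du[k] = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2])
    to an accumulated, clamped output."""
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=255.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0              # e[k-1], e[k-2]
        self.out = 0.0                       # 0..255 scale, i.e. 255 * duty cycle
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measured):
        e = setpoint - measured
        du = (self.kp*(e - self.e1) + self.ki*e
              + self.kd*(e - 2.0*self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e        # shift the error history
        self.out = min(self.out_max, max(self.out_min, self.out + du))
        return self.out
```

    Keeping the output on a 0 to 255 scale mirrors the paper's trick of controlling a variable equal to 255 times the duty cycle, which reduces quantisation error on an 8-bit PWM.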

  7. Jupiter’s tropospheric composition and cloud structure from high-resolution ground-based spectroscopy

    NASA Astrophysics Data System (ADS)

    Giles, Rohini Sara; Fletcher, Leigh N.; Irwin, Patrick G. J.

    2015-11-01

    The CRIRES instrument on the Very Large Telescope was used to make high-resolution (R=100,000) observations of Jupiter in the 4.5-5.2 μm spectral range. At these wavelengths, Jupiter’s atmosphere is optically thin and the spectra are sensitive to the 4-8 bar region. This enabled us to spectrally resolve the line shapes of four minor species in Jupiter’s troposphere: CH3D, GeH4, AsH3 and PH3. The slit was aligned north-south along Jupiter’s central meridian, allowing us to search for latitudinal variability in these line shapes. The spectra were analysed using the NEMESIS radiative transfer code and retrieval algorithm. The CH3D line shape is narrower in the cool zones than in the warm belts. CH3D is chemically stable and does not condense in Jupiter’s atmosphere, so this difference cannot be due to variations in the CH3D abundance. Instead, it can be modelled as variations in the opacity of a deep cloud located at around 4 bar. This deep cloud is opaque in the zones and transparent in the belts. We also observe variability in the GeH4 line shape, with stronger absorption features in the belts than in the zones. As a disequilibrium species, GeH4 is expected to vary with latitude, but we found that the variations in the line shape could be entirely explained by the variations in the cloud structure. In contrast, there is clear evidence for spatial variability in the remaining two molecular species, AsH3 and PH3. Their absorption features are weak near the equator and significantly stronger at high latitudes. A full latitudinal retrieval leads to a broadly symmetric profile for both species, with a minimum at the equator and an enhancement towards the poles.

  8. Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry

    NASA Astrophysics Data System (ADS)

    Serafini, S.; Paone, N.; Castellini, P.

    2013-12-01

    A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited to testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour to optimize the measurement. The system is conceived as a Quality Control Agent (QCA) and is part of a Multi Agent System that supervises the entire production line. The QCA behaviour is defined so as to minimize measurement uncertainty during the on-line tests and to compensate for target mis-positioning under the guidance of a vision system. Best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality) and consequently minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement achieved by the down-hill simplex (Nelder-Mead) algorithm and its effect on signal quality improvement is discussed. Tests on a washing machine in controlled operating conditions allow evaluation of the efficacy of the method; a significant reduction of noise on vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.
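    The down-hill simplex search used by the station can be sketched in a few lines. Below is a minimal Nelder-Mead implementation (standard reflection, expansion, contraction and shrink steps, not the authors' code); maximizing a signal-quality function amounts to minimizing its negative. The quality function `q` is a hypothetical stand-in for the measured Doppler beat amplitude:

```python
def nelder_mead(f, x0, step=0.1, tol=1e-8, max_iter=500):
    """Minimal down-hill simplex (Nelder-Mead) minimizer."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # build an initial simplex around x0
        p = list(x0); p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2*centroid[i] - worst[i] for i in range(n)]      # reflect worst
        if f(refl) < f(best):
            expd = [3*centroid[i] - 2*worst[i] for i in range(n)]  # expand
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5*(centroid[i] + worst[i]) for i in range(n)]  # contract
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                            # shrink toward the best point
                simplex = [[0.5*(best[i] + p[i]) for i in range(n)]
                           for p in simplex]
    simplex.sort(key=f)
    return simplex[0]

# Hypothetical signal-quality surface over two mirror angles; the agent
# would evaluate the real Doppler beat amplitude here instead.
q = lambda p: 1.0 / (1.0 + (p[0] - 0.3)**2 + (p[1] + 0.2)**2)
aim = nelder_mead(lambda p: -q(p), [0.0, 0.0])
```

    Derivative-free search is the natural choice here because the signal quality is a measured quantity with no analytic gradient.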

  9. RBoost: Label Noise-Robust Boosting Algorithm Based on a Nonconvex Loss Function and the Numerically Stable Base Learners.

    PubMed

    Miao, Qiguang; Cao, Ying; Xia, Ge; Gong, Maoguo; Liu, Jiachen; Song, Jianfeng

    2016-11-01

    AdaBoost has attracted much attention in the machine learning community because of its excellent performance in combining weak classifiers into strong classifiers. However, AdaBoost tends to overfit to the noisy data in many applications. Accordingly, improving the antinoise ability of AdaBoost plays an important role in many applications. The sensitiveness to the noisy data of AdaBoost stems from the exponential loss function, which puts unrestricted penalties to the misclassified samples with very large margins. In this paper, we propose two boosting algorithms, referred to as RBoost1 and RBoost2, which are more robust to the noisy data compared with AdaBoost. RBoost1 and RBoost2 optimize a nonconvex loss function of the classification margin. Because the penalties to the misclassified samples are restricted to an amount less than one, RBoost1 and RBoost2 do not overfocus on the samples that are always misclassified by the previous base learners. Besides the loss function, at each boosting iteration, RBoost1 and RBoost2 use numerically stable ways to compute the base learners. These two improvements contribute to the robustness of the proposed algorithms to the noisy training and testing samples. Experimental results on the synthetic Gaussian data set, the UCI data sets, and a real malware behavior data set illustrate that the proposed RBoost1 and RBoost2 algorithms perform better when the training data sets contain noisy data.
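    For contrast with the robust variants, the classical AdaBoost loop the paper starts from can be sketched with threshold stumps on 1-D data; the exponential reweighting line is exactly where unrestricted penalties on badly misclassified (e.g. noisy) samples arise. Data and round count are illustrative:

```python
import math

def train_adaboost(X, y, rounds=5):
    """Tiny AdaBoost with threshold stumps on 1-D data, labels in {-1,+1}."""
    n = len(X)
    w = [1.0/n]*n
    ensemble = []                            # (alpha, threshold, polarity)
    for _ in range(rounds):
        # pick the stump h(x) = polarity*sign(x - t) with least weighted error
        best = None
        for t in sorted(set(X)):
            for pol in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if pol*(1 if xi > t else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5*math.log((1 - err)/err)
        ensemble.append((alpha, t, pol))
        # exponential reweighting: misclassified samples gain weight without bound
        w = [wi*math.exp(-alpha*yi*pol*(1 if xi > t else -1))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w); w = [wi/s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a*pol*(1 if x > t else -1) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

    RBoost's nonconvex loss caps the per-sample penalty below one, so the weight of a persistently misclassified (likely noisy) sample cannot dominate the distribution the way the `exp` term above allows.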

  10. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

    A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection were included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that in hand-generated pictures, the algorithm worked well on following straight lines, but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.
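    Hough's algorithm, mentioned above for line following, votes in a (theta, rho) parameter space: each image point increments every accumulator cell satisfying rho = x*cos(theta) + y*sin(theta), and peaks in the accumulator correspond to straight lines through many points. A minimal sketch:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Minimal Hough transform for straight-line detection; returns the
    (theta, rho) of the strongest accumulator peak and its vote count."""
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = round((x*math.cos(theta) + y*math.sin(theta)) / rho_res)
            acc[(i, rho)] = acc.get((i, rho), 0) + 1
    (i, rho), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi*i/n_theta, rho*rho_res, votes
```

    The corner-turning difficulty the survey notes is visible in this formulation: a corner is two distinct peaks in parameter space, and nothing in the voting links them.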

  11. On-line Robot Adaptation to Environmental Change

    DTIC Science & Technology

    2005-08-01

    Report front matter only (OCR fragments): research supported by the Department of the Interior under contract no. NBCH1040007, the US Army under contract no. DABT639910013, and the US Air Force Research Laboratory. The recovered table-of-contents entries list the Probable Series Predictor algorithm and the accuracy of PSC in various test classification tasks.

  12. Overlapping communities detection based on spectral analysis of line graphs

    NASA Astrophysics Data System (ADS)

    Gui, Chun; Zhang, Ruisheng; Hu, Rongjing; Huang, Guoming; Wei, Jiaxuan

    2018-05-01

    Communities in networks often overlap, with one vertex belonging to several clusters. Meanwhile, many networks show hierarchical structure, such that communities are recursively grouped into a hierarchical organization. In order to obtain overlapping communities from a global hierarchy of vertices, a new algorithm (named SAoLG) is proposed to build the hierarchical organization while detecting the overlap of community structure. SAoLG applies spectral analysis to line graphs to unify the overlap and hierarchical structure of the communities. In order to avoid the limitations of absolute distances such as the Euclidean distance, SAoLG employs angular distance to compute the similarity between vertices. Furthermore, we make a small improvement to partition density to evaluate the quality of community structure and use it to obtain more reasonable community numbers. The proposed SAoLG algorithm achieves a balance between overlap and hierarchy by applying spectral analysis to edge community detection. The experimental results on one standard network and six real-world networks show that the SAoLG algorithm achieves higher modularity and more reasonable community numbers than Ahn's algorithm and the classical CPM and GN algorithms.
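    The line-graph construction that SAoLG rests on is simple: every edge of the original graph becomes a vertex of the line graph, and two such vertices are adjacent when the edges share an endpoint. Partitioning the line graph therefore clusters edges, which is what lets an original vertex sit in several communities at once. A minimal sketch (the spectral partitioning itself is omitted):

```python
def line_graph(edges):
    """Build the line graph as an adjacency map: each undirected edge
    (stored as a sorted tuple) is a vertex, adjacent to every edge
    that shares an endpoint with it."""
    edges = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in edges}
    for i, e in enumerate(edges):
        for f in edges[i+1:]:
            if set(e) & set(f):              # shared endpoint
                adj[e].add(f); adj[f].add(e)
    return adj
```
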

  13. Fast algorithm for spectral processing with application to on-line welding quality assurance

    NASA Astrophysics Data System (ADS)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
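    Parabolic three-point interpolation is the standard baseline for the sub-pixel peak location discussed above (the paper's LPO operator is a different estimator): fit a parabola through the peak sample and its two neighbours and take the vertex as the line centre.

```python
def subpixel_peak(spectrum):
    """Sub-pixel position of the strongest peak by parabolic
    three-point interpolation around the maximum sample."""
    i = max(range(1, len(spectrum) - 1), key=lambda k: spectrum[k])
    ym, y0, yp = spectrum[i-1], spectrum[i], spectrum[i+1]
    # vertex of the parabola through (i-1, ym), (i, y0), (i+1, yp)
    delta = 0.5*(ym - yp) / (ym - 2.0*y0 + yp)
    return i + delta
```

    For plasma temperature estimation, locating line centres to a fraction of a pixel is what allows overlapping emission lines from multiple atomic species to be separated on a low-cost CCD spectrometer.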

  14. Some design guidelines for discrete-time adaptive controllers

    NASA Technical Reports Server (NTRS)

    Rohrs, C. E.; Athans, M.; Valavani, L.; Stein, G.

    1985-01-01

    Many algorithms have been proposed for adaptive control which provide globally asymptotically stable controllers if some stringent conditions on the plant are met. These conditions cannot be met in practice, as all plants contain high-frequency unmodeled dynamics; therefore, blind implementation of the published algorithms can lead to disastrous results. This paper uses a linearization analysis of a non-linear adaptive controller to analytically derive design guidelines which alleviate some of the problems associated with adaptive control in the presence of unmodeled dynamics.

  15. Stable Algorithm For Estimating Airdata From Flush Surface Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen, A. (Inventor); Cobleigh, Brent R. (Inventor); Haering, Edward A., Jr. (Inventor)

    2001-01-01

    An airdata estimation and evaluation system and method, including a stable algorithm for estimating airdata from nonintrusive surface pressure measurements. The airdata estimation and evaluation system is preferably implemented in a flush airdata sensing (FADS) system. The system and method of the present invention take a flow model equation and transform it into a triples formulation equation. The triples formulation equation eliminates the pressure related states from the flow model equation by strategically taking the differences of three surface pressures, known as triples. This triples formulation equation is then used to accurately estimate and compute vital airdata from nonintrusive surface pressure measurements.

  16. AdaBoost-based on-line signature verifier

    NASA Astrophysics Data System (ADS)

    Hongo, Yasunori; Muramatsu, Daigo; Matsumoto, Takashi

    2005-03-01

    Authentication of individuals is rapidly becoming an important issue. The authors previously proposed a pen-input on-line signature verification algorithm. The algorithm considers a writer's signature as a trajectory of pen position, pen pressure, pen azimuth, and pen altitude that evolves over time, so that it is dynamic and biometric. Many algorithms have been proposed and reported to achieve accuracy for on-line signature verification, but setting the threshold value for these algorithms is a problem. In this paper, we introduce a user-generic model generated by AdaBoost, which resolves this problem. When user-specific models (one model for each user) are used for signature verification, we need to generate the models using only genuine signatures; forged signatures are not available, because impostors do not provide forged signatures for training in advance. By introducing a user-generic model, however, we can make use of others' forged signatures in addition to the genuine signatures for learning. Moreover, AdaBoost is a well-known classification algorithm that makes its final decision depending on the sign of the output value, so it is not necessary to set a threshold value. A preliminary experiment is performed on a database consisting of data from 50 individuals. This set consists of western-alphabet-based signatures provided by a European research group. In this experiment, our algorithm gives an FRR of 1.88% and an FAR of 1.60%. Since no fine-tuning was done, this preliminary result looks very promising.

  17. Distributed Database Control and Allocation. Volume 1. Frameworks for Understanding Concurrency Control and Recovery Algorithms.

    DTIC Science & Technology

    1983-10-01

    an Abort(Ti), it forwards the operation directly to the recovery system. When the recovery system acknowledges that the operation has been processed, [...] Abort(Ti): write Ti into the abort list, then undo all of Ti's writes by reading their before-images from the audit trail and writing them back into the stable database. [Ack] Then, delete Ti from the active list. Restart: process Abort(Ti) for each Ti on the active list. [Ack] In this algorithm

  18. a New Graduation Algorithm for Color Balance of Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.

    2018-05-01

    In order to expand the field of view and obtain more data and information in remote sensing research, images often need to be mosaicked together. However, the mosaicked image often shows large colour differences and produces a gap line. Based on a graduation algorithm using trigonometric functions, this paper proposes a new algorithm of Two Quarter-round Curves (TQC). A Gaussian filter is used to handle image colour noise and the gap line. The experiments used Greenland data compiled in 1963 from the Declassified Intelligence Photography Project (DISP), acquired by the ARGON KH-5 satellite, and Landsat imagery of the North Gulf, China. The experimental results show that the proposed method improves the results in two respects: on the one hand, large colour differences in the mosaicked remote sensing image become more balanced; on the other hand, the image achieves a smoother transition.
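    The idea of a trigonometric graduation can be illustrated with a quarter-cosine blend across the seam: blending weights ramp smoothly from one image to the other, so the colour changes gradually instead of jumping at the gap line. This is a generic sketch of the principle under that assumption, not the TQC algorithm itself:

```python
import math

def graduated_blend(left_val, right_val, width):
    """Blend two constant colour values across a seam of `width` pixels
    using a cosine graduation: weight w rises smoothly from 0 to 1."""
    out = []
    for i in range(width):
        t = i / (width - 1)
        w = 0.5 - 0.5*math.cos(math.pi*t)    # smooth 0 -> 1 ramp
        out.append((1 - w)*left_val + w*right_val)
    return out
```

    Applied per channel along each row crossing the seam, a ramp like this removes the visible gap line; TQC's two quarter-round curves shape the transition differently in each half, per the paper.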

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skala, Vaclav

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to speed up computation. In the case of a convex polygon in E^2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm is of O(log N) computational complexity. In the E^3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in the preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
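    The O(log N) test cited above as the optimal baseline (which the O(1) preprocessing method improves on) is a binary search over the fan of wedges around one polygon vertex, followed by a single orientation test against the far edge. A sketch, assuming the vertices are given in counter-clockwise order:

```python
def point_in_convex_polygon(poly, p):
    """O(log N) point-in-convex-polygon test for a CCW vertex list:
    binary-search the wedge around poly[0] containing p, then test p
    against the far edge of that wedge. Boundary counts as inside."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    n = len(poly)
    # p must lie inside the angle at poly[0]
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n-1], p) > 0:
        return False
    lo, hi = 1, n - 1
    while hi - lo > 1:                       # find the wedge containing p
        mid = (lo + hi)//2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo+1], p) >= 0
```
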

  20. A Cross Structured Light Sensor and Stripe Segmentation Method for Visual Tracking of a Wall Climbing Robot

    PubMed Central

    Zhang, Liguo; Sun, Jianguo; Yin, Guisheng; Zhao, Jing; Han, Qilong

    2015-01-01

    In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where the structured light sources are always disturbed by various noises, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noises in structured light images. An adaptive monochromatic space is applied to preprocess the image with ambient noises. In the monochromatic image, the laser stripe obtained is recovered as a multichannel signal by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In experiments, the CSL sensor and the proposed algorithm are applied to guide a wall climbing robot inspecting the weld line of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and the proposed algorithm contributes to the weld line inspection and the robot navigation. PMID:26110403

  1. On-Line Algorithms and Reverse Mathematics

    NASA Astrophysics Data System (ADS)

    Harris, Seth

    In this thesis, we classify the reverse-mathematical strength of sequential problems. If we are given a problem P of the form ∀X(α(X) → ∃Z β(X,Z)), then the corresponding sequential problem, SeqP, asserts the existence of infinitely many solutions to P: ∀X(∀n α(Xn) → ∃Z ∀n β(Xn, Zn)). P is typically provable in RCA0 if all objects involved are finite. SeqP, however, is only guaranteed to be provable in ACA0. In this thesis we exactly characterize which sequential problems are equivalent to RCA0, WKL0, or ACA0.

    We say that a problem P is solvable by an on-line algorithm if P can be solved according to a two-player game, played by Alice and Bob, in which Bob has a winning strategy. Bob wins the game if Alice's sequence of plays 〈a0, ..., ak〉 and Bob's sequence of responses 〈b0, ..., bk〉 constitute a solution to P. Formally, an on-line algorithm A is a function that inputs an admissible sequence of plays 〈a0, b0, ..., aj〉 and outputs a new play bj for Bob. (This differs from the typical definition of "algorithm", though quite often a concrete set of instructions can easily be deduced from A.)

    We show that SeqP is provable in RCA0 precisely when P is solvable by an on-line algorithm. Schmerl proved this result specifically for the graph coloring problem; we generalize Schmerl's result to any problem that is on-line solvable. To prove our separation, we introduce a principle called Predict_k(r) that is equivalent to -WKL0 for standard k, r.

    We show that WKL0 is sufficient to prove SeqP precisely when P has a solvable closed kernel. This means that a solution exists, and each initial segment of this solution is a solution to the corresponding initial segment of the problem. (Certain bounding conditions are necessary as well.) If no such solution exists, then SeqP is equivalent to ACA0 over RCA0 + IΣ02; RCA0 alone suffices if only sequences of standard length are considered.

    We use different techniques from Schmerl to prove this separation, and in the process we improve some of Schmerl's results on Grundy colorings. In Chapter 4 we analyze a variety of applications, classifying their sequential forms by reverse-mathematical strength. This builds upon similar work by Dorais and Hirst and Mummert. We consider combinatorial applications such as matching problems and Dilworth's theorems, and we also consider classic algorithms such as the task scheduling and paging problems. Tables summarizing our findings can be found at the end of Chapter 4.
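    Greedy First-Fit graph coloring is the canonical example of an on-line algorithm in the game-theoretic sense used above: Bob must commit to a colour for each vertex the moment Alice reveals it, seeing only the edges back to vertices already played, and his choices are irrevocable. A sketch:

```python
def first_fit_coloring(n, edges):
    """First-Fit graph coloring in the on-line presentation: vertices
    0..n-1 arrive in order, each with its edges to earlier vertices,
    and each is immediately given the least colour unused by its
    already-coloured neighbours."""
    colour = {}
    back = {v: [] for v in range(n)}
    for u, v in edges:
        back[max(u, v)].append(min(u, v))    # edges to earlier vertices only
    for v in range(n):                       # Alice reveals v; Bob must answer now
        used = {colour[u] for u in back[v]}
        c = 0
        while c in used:
            c += 1
        colour[v] = c                        # irrevocable on-line choice
    return colour
```

    The coloring produced is always proper but may use more colours than an off-line optimum, which is exactly the gap the thesis's reverse-mathematical classification measures.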

  2. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
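    The gradient-plus-projection step underlying iterative hard thresholding is easy to state: take a gradient step on ||Ax-b||^2, then project onto the set of s-sparse vectors by keeping the s largest-magnitude entries. The sketch below implements only this basic iteration; the paper's EPIHT adds extrapolation and line search on top of it, and the problem data here are illustrative:

```python
def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero the rest
    (projection onto the set of s-sparse vectors)."""
    keep = set(sorted(range(len(x)), key=lambda i: -abs(x[i]))[:s])
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]

def iht(A, b, s, mu=0.1, iters=200):
    """Plain iterative hard thresholding for
    min ||A x - b||^2 subject to ||x||_0 <= s."""
    m, n = len(A), len(A[0])
    x = [0.0]*n
    for _ in range(iters):
        r = [sum(A[i][j]*x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j]*r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = hard_threshold([x[j] - mu*g[j] for j in range(n)], s)
    return x
```

    The step size `mu` must be small enough relative to the spectral norm of A for the iteration to contract; the line-search variant in the paper chooses it adaptively.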

  3. Study on feed forward neural network convex optimization for LiFePO4 battery parameters

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    The parameter identification of the LiFePO4 battery used in modern automatic walking equipment for facility agriculture is analyzed. An improved method for the process model of the lithium battery is proposed, and an on-line estimation algorithm is presented. The parameters of the battery are identified using a feed forward neural network convex optimization algorithm.

  4. Estimation of Bridge Height over Water from Polarimetric SAR Image Data Using Mapping and Projection Algorithm and De-Orientation Theory

    NASA Astrophysics Data System (ADS)

    Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu; Ouchi, Kazuo

    An inversion method for bridge height over water from polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanism of a bridge over a water surface is identified by polarimetric image analysis. Using the mapping and projecting algorithm, a polarimetric SAR image of a bridge model is first simulated and shows that scattering from a bridge over water can be identified as three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on the de-orientation theory is applied to the analysis of the three types of scattering, and the thinning-clustering algorithm and Hough transform are then employed to locate the image positions of these strip lines, which are used to invert the bridge height. Fully polarimetric image data from the airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, the approach is also applicable to spaceborne ALOS PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results show good feasibility for bridge height inversion.

  5. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
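    The successive-linearisation idea can be shown on a scalar toy plant (the boiler-turbine unit itself is multivariable and constrained, so this is only a stand-in): at each sampling instant, linearise the dynamics at the current operating point and minimise the resulting quadratic cost in closed form, playing the role of the QP in the full algorithm. Everything below, including the plant x_next = x + dt*(-x^3 + u), is illustrative and not from the paper:

```python
def slmpc_step(x, x_ref, dt=0.1, lam=0.001):
    """One step of successive-linearisation MPC on the toy plant
    x_next = x + dt*(-x**3 + u): linearise at the current state and
    minimise (x_next - x_ref)**2 + lam*u**2 in closed form."""
    a = x + dt*(-x**3)           # free response predicted by the local model
    b = dt                       # input gain of the linearised model
    # stationarity: d/du [(a + b*u - x_ref)**2 + lam*u**2] = 0
    return b*(x_ref - a) / (b*b + lam)

# closed-loop simulation: the model is re-linearised at every sample,
# which is what lets a linear controller track a nonlinear plant
x = 0.0
for _ in range(200):
    u = slmpc_step(x, 1.0)
    x += 0.1*(-x**3 + u)
```

    The regularisation weight `lam` trades tracking accuracy against control effort; as in the paper, the key point is that re-linearising each step avoids a full nonlinear optimisation while staying close to the nonlinear MPC trajectory.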

  6. Rapid development of stable transgene CHO cell lines by CRISPR/Cas9-mediated site-specific integration into C12orf35.

    PubMed

    Zhao, Menglin; Wang, Jiaxian; Luo, Manyu; Luo, Han; Zhao, Meiqi; Han, Lei; Zhang, Mengxiao; Yang, Hui; Xie, Yueqing; Jiang, Hua; Feng, Lei; Lu, Huili; Zhu, Jianwei

    2018-07-01

    Chinese hamster ovary (CHO) cells are the most widely used mammalian hosts for recombinant protein production. However, with the conventional random integration strategy, development of a high-expressing and stable recombinant CHO cell line has always been a difficult task, due to heterogenic insertion and the consequent requirement of multiple rounds of selection. Site-specific integration of transgenes into CHO hot spots is an ideal strategy to overcome these challenges, since it can generate isogenic cell lines with consistent productivity and stability. In this study, we investigated three sites with potentially high transcriptional activity: C12orf35, HPRT, and GRIK1, to determine possible transcriptional hot spots in CHO cells, and to construct a reliable site-specific integration strategy for developing recombinant cell lines efficiently. Genes encoding the representative proteins mCherry and an anti-PD1 monoclonal antibody were targeted into these three loci respectively through CRISPR/Cas9 technology. Stable cell lines were generated successfully after a single round of selection. In comparison with a random integration control, all the targeted integration cell lines showed higher productivity, among which the C12orf35 locus was the most advantageous in both productivity and cell line stability. Binding affinity and N-glycan analysis of the antibody revealed that all batches of product were of similar quality, independent of the integration site. Deep sequencing demonstrated a low level of off-target mutations caused by CRISPR/Cas9, but none of them affected the development process of the transgene cell lines. Our results demonstrate the feasibility of C12orf35 as a target site for exogenous gene integration, and strongly suggest that C12orf35-targeted integration mediated by CRISPR/Cas9 is a reliable strategy for the rapid development of recombinant CHO cell lines.

  7. Movie denoising by average of warped lines.

    PubMed

    Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro

    2007-09-01

    Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it, and estimate the denoised pixel from these samples. Similar samples are found by warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs which has per-line complexity O(N), where N is the number of columns in the image. When applied to an image sequence, our algorithm is thus computationally efficient, with complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is adapted to denoising image sequences with additive white noise while respecting the visual details of the movie frames. We have also experimented with other types of noise, with satisfactory results.
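    The per-pixel averaging described above can be sketched as follows. This is a toy stand-in, assuming grayscale frames in a NumPy array: each pixel is matched against nearby columns of the same row in the two adjacent frames (a crude surrogate for the paper's warped-line search), and the best matches are averaged. The function name and window parameters are illustrative, not the paper's.

```python
import numpy as np

def denoise_pixel_rowmatch(frames, t, r, c, radius=3, patch=2):
    """Average the pixel at (r, c) of frame t with its best row-wise
    matches in the two adjacent frames -- a crude, illustrative surrogate
    for the paper's warped-line search.  Assumes c - patch >= 0."""
    ref = frames[t, r, c - patch:c + patch + 1]
    samples = [frames[t, r, c]]
    for dt in (-1, 1):
        s = t + dt
        if s < 0 or s >= frames.shape[0]:
            continue
        best, best_err = None, np.inf
        for dc in range(-radius, radius + 1):
            lo, hi = c + dc - patch, c + dc + patch + 1
            if lo < 0 or hi > frames.shape[2]:
                continue  # candidate window falls off the frame
            err = float(np.sum((frames[s, r, lo:hi] - ref) ** 2))
            if err < best_err:
                best_err, best = err, frames[s, r, c + dc]
        if best is not None:
            samples.append(best)
    return float(np.mean(samples))
```

    On a noisy sequence this would be applied to every pixel; averaging the matched samples reduces the noise variance roughly in proportion to the number of samples.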

  8. On-line monitoring of extraction process of Flos Lonicerae Japonicae using near infrared spectroscopy combined with synergy interval PLS and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Wang, Lei; Wu, Yongjiang; Liu, Xuesong; Bi, Yuan; Xiao, Wei; Chen, Yong

    2017-07-01

    There is a growing need for effective on-line process monitoring during the manufacture of traditional Chinese medicine to ensure quality consistency. In this study, the potential of the near infrared (NIR) spectroscopy technique to monitor the extraction process of Flos Lonicerae Japonicae was investigated. A new algorithm of synergy interval PLS with genetic algorithm (Si-GA-PLS) was proposed for modeling. Four different PLS models, namely Full-PLS, Si-PLS, GA-PLS, and Si-GA-PLS, were established, and their performances in predicting two quality parameters (viz. total acid and soluble solid contents) were compared. In conclusion, the Si-GA-PLS model achieved the best results, owing to the combined advantages of Si-PLS and GA. For Si-GA-PLS, the determination coefficient (Rp2) and root-mean-square error for the prediction set (RMSEP) were 0.9561 and 147.6544 μg/ml for total acid, and 0.9062 and 0.1078% for soluble solid contents, respectively. The overall results demonstrated that the NIR spectroscopy technique combined with Si-GA-PLS calibration is a reliable and non-destructive alternative method for on-line monitoring of the extraction process of TCM on the production scale.
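    For reference, the two figures of merit quoted above (the prediction-set determination coefficient Rp² and RMSEP) can be computed as below. This is a minimal sketch, not tied to any particular PLS implementation:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """Determination coefficient (Rp^2) and root-mean-square error of
    prediction (RMSEP) for a prediction set."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    resid = y_true - y_pred
    rmsep = float(np.sqrt(np.mean(resid ** 2)))
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, rmsep
```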

  9. Neural Activity Associated with Visual Search for Line Drawings on AAC Displays: An Exploration of the Use of fMRI.

    PubMed

    Wilkinson, Krista M; Dennis, Nancy A; Webb, Christina E; Therrien, Mari; Stradtman, Megan; Farmer, Jacquelyn; Leach, Raevynn; Warrenfeltz, Megan; Zeuner, Courtney

    2015-01-01

    Visual aided augmentative and alternative communication (AAC) consists of books or technologies that contain visual symbols to supplement spoken language. A common observation concerning some forms of aided AAC is that message preparation can be frustratingly slow. We explored the use of fMRI to examine the neural correlates of visual search for line drawings on AAC displays in 18 college students under two experimental conditions. Under one condition, the location of the icons remained stable and participants were able to learn the spatial layout of the display. Under the other condition, constant shuffling of the locations of the icons prevented participants from learning the layout, impeding rapid search. Brain activation was contrasted under these conditions. Rapid search in the stable display was associated with greater activation of cortical and subcortical regions associated with memory, motor learning, and dorsal visual pathways compared to the search in the unpredictable display. Rapid search for line drawings on stable AAC displays involves not just the conceptual knowledge of the symbol meaning but also the integration of motor, memory, and visual-spatial knowledge about the display layout. Further research must study individuals who use AAC, as well as the functional effect of interventions that promote knowledge about array layout.

  10. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation addresses a fundamental problem, the isolation of the real roots of nonlinear systems of equations by Monte Carlo, as published by Bush Jones. The algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small set of variables, making it infeasible for large systems of equations. A computational technique was also needed to investigate a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected, and a parallel algorithm is presented. The parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message-passing model was used for this parallel processing, and it is presented and implemented on the Intel i860 MIMD architecture. The parallel processing proposed in this research has been applied in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
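    The root-isolation idea, sample starting points widely and keep the distinct converged points, can be sketched as below. This is a generic Monte Carlo/Newton hybrid for illustration, not Jones's published algorithm; like that algorithm it needs only function values (the Jacobian here is formed by finite differences). All names and tolerances are illustrative.

```python
import numpy as np

def isolate_roots(f, lo, hi, n_samples=200, n_newton=50, tol=1e-8, seed=0):
    """Monte Carlo root isolation sketch: sample random starting points in
    the box [lo, hi], polish each with Newton iterations (forward-difference
    Jacobian), and cluster the converged points into distinct roots."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    roots = []
    for _ in range(n_samples):
        x = lo + rng.random(dim) * (hi - lo)
        for _ in range(n_newton):
            fx = np.asarray(f(x), float)
            J = np.empty((dim, dim))
            h = 1e-6
            for j in range(dim):
                e = np.zeros(dim)
                e[j] = h
                J[:, j] = (np.asarray(f(x + e), float) - fx) / h
            try:
                x = x - np.linalg.solve(J, fx)
            except np.linalg.LinAlgError:
                break  # singular Jacobian: abandon this sample
        if np.linalg.norm(np.asarray(f(x), float)) < tol:
            if not any(np.linalg.norm(x - r) < 1e-4 for r in roots):
                roots.append(x)  # a root not seen before
    return sorted(roots, key=lambda r: tuple(r))
```

    The per-sample work is independent, which is what makes the method a natural candidate for the message-passing parallelization the dissertation describes.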

  11. Chinese hamster ovary K1 host cell enables stable cell line development for antibody molecules which are difficult to express in DUXB11-derived dihydrofolate reductase deficient host cell.

    PubMed

    Hu, Zhilan; Guo, Donglin; Yip, Shirley S M; Zhan, Dejin; Misaghi, Shahram; Joly, John C; Snedecor, Bradley R; Shen, Amy Y

    2013-01-01

    Therapeutic monoclonal antibodies (mAb) are often produced in Chinese hamster ovary (CHO) cells. Three commonly used CHO host cells for generating stable cell lines to produce therapeutic proteins are dihydrofolate reductase (DHFR) positive CHOK1, DHFR-deficient DG44, and DUXB11-based DHFR deficient CHO. Current Genentech commercial full-length antibody products have all been produced in the DUXB11-derived DHFR-deficient CHO host. However, it has been challenging to develop stable cell lines producing an appreciable amount of antibody proteins in the DUXB11-derived DHFR-deficient CHO host for some antibody molecules and the CHOK1 host has been explored as an alternative approach. In this work, stable cell lines were developed for three antibody molecules in both DUXB11-based and CHOK1 hosts. Results have shown that the best CHOK1 clones produce about 1 g/l for an antibody mAb1 and about 4 g/l for an antibody mAb2 in 14-day fed batch cultures in shake flasks. In contrast, the DUXB11-based host produced ∼0.1 g/l for both antibodies in the same 14-day fed batch shake flask production experiments. For an antibody mAb3, both CHOK1 and DUXB11 host cells can generate stable cell lines with the best clone in each host producing ∼2.5 g/l. Additionally, studies have shown that the CHOK1 host cell has a larger endoplasmic reticulum and higher mitochondrial mass. © 2013 American Institute of Chemical Engineers.

  12. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    NASA Astrophysics Data System (ADS)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

    In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment, and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.

  13. Load-Flow in Multiphase Distribution Networks: Existence, Uniqueness, Non-Singularity, and Linear Models

    DOE PAGES

    Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano; ...

    2018-01-01

    This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
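    The fixed-point iteration underlying a Z-bus load flow can be illustrated on a single-phase toy network. This is a hedged sketch, not the paper's multiphase formulation: `S` holds the net complex power injected at each non-slack bus (negative for loads), `Y_LL` and `Y_L0` are the corresponding blocks of the bus admittance matrix, and `v0` is the slack-bus voltage.

```python
import numpy as np

def zbus_loadflow(Y_LL, Y_L0, v0, S, iters=50):
    """Fixed-point (Z-bus style) load flow for constant-power injections:
    V <- Y_LL^{-1} (conj(S / V) - Y_L0 v0), starting from a flat profile."""
    Z = np.linalg.inv(Y_LL)
    V = np.ones(len(S), dtype=complex)  # flat start
    for _ in range(iters):
        I = np.conj(S / V)              # bus current injections
        V = Z @ (I - Y_L0 * v0)
    return V
```

    Under the existence/uniqueness conditions the paper derives, this map is a contraction and the iteration converges to the unique load-flow solution.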

  14. Full range line-field parallel swept source imaging utilizing digital refocusing

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-12-01

    We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
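    Digital refocusing by a Fourier-domain phase term can be sketched generically. Here a standard Fresnel transfer function stands in for the geometry-derived LPSI phase term described above; the function name and parameters are illustrative, and the field is assumed to be a complex-valued 2-D array sampled at pitch `dx`.

```python
import numpy as np

def refocus(field, dz, wavelength, dx):
    """Digital refocusing sketch: multiply the 2-D Fourier transform of a
    complex field by a defocus-correcting quadratic phase (Fresnel transfer
    function for a propagation distance dz) and transform back."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Because |H| = 1, the operation only reshuffles phase: it restores focus without changing the total energy of the field.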

  15. MAST Propellant and Delivery System Design Methods

    NASA Technical Reports Server (NTRS)

    Nadeem, Uzair; Mc Cleskey, Carey M.

    2015-01-01

    A Mars Aerospace Taxi (MAST) concept and propellant storage and delivery case study is under investigation by NASA's Element Design and Architectural Impact (EDAI) design and analysis forum. The MAST lander concept envisions landing with its ascent propellant storage tanks empty and supplying these reusable Mars landers with propellant that is generated and transferred while on the Mars surface. The report provides an overview of the data derived from modeling different methods of propellant line routing (or "lining") and differentiates the resulting design and operations complexity of fluid and gaseous paths for a given set of fluid sources and destinations. The EDAI team desires a rough-order-of-magnitude algorithm for estimating the lining characteristics (i.e., the plumbing mass and complexity) associated with different numbers of vehicle propellant sources and destinations. This paper explores the feasibility of preparing a mathematically sound algorithm for this purpose, and offers a method for the EDAI team to implement.

  16. Load-Flow in Multiphase Distribution Networks: Existence, Uniqueness, Non-Singularity, and Linear Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano

    This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.

  17. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    NASA Astrophysics Data System (ADS)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms depend mainly on the spectral information of image patches and fail to effectively mine and fuse the complementary advantages of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are obtained by multi-scale segmentation; then a color histogram and a line-gradient histogram are computed for each object. The EMD statistical operator is used to measure the color distance and the edge-line feature distance between corresponding objects in different periods, and an adaptive weighting of the two distances is combined into an object heterogeneity measure. Finally, analysis of the curvature histogram yields the patch-level change detection results. The experimental results show that the method fully fuses the color and edge-line features, thus improving the accuracy of change detection.
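    The color-distance building block, an Earth Mover's Distance between per-object histograms, reduces in one dimension to the L1 distance between cumulative distributions. A minimal sketch, assuming the two histograms share identical, unit-spaced bins:

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two 1-D histograms on identical,
    unit-spaced bins: the L1 distance between their normalized CDFs."""
    p = np.asarray(h1, float)
    q = np.asarray(h2, float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.abs(np.cumsum(p - q)).sum())
```

    In the paper's setting, this distance on color histograms would be adaptively weighted against the edge-line feature distance to form the object heterogeneity.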

  18. Stable Stratification Effects on Flow and Pollutant Dispersion in Boundary Layers Entering a Generic Urban Environment

    NASA Astrophysics Data System (ADS)

    Tomas, J. M.; Pourquie, M. J. B. M.; Jonker, H. J. J.

    2016-05-01

    Large-eddy simulations (LES) are used to investigate the effect of stable stratification on rural-to-urban roughness transitions. Smooth-wall turbulent boundary layers are subjected to a generic urban roughness consisting of cubes in an in-line arrangement. Two line sources of pollutant are added to investigate the effect on pollutant dispersion. Firstly, the LES method is validated with data from wind-tunnel experiments on fully-developed flow over cubical roughness. Good agreement is found for the vertical profiles of the mean streamwise velocity component and mean Reynolds stress. Subsequently, roughness transition simulations are done for both neutral and stable conditions. Results are compared with fully-developed simulations with conventional double-periodic boundary conditions. In stable conditions, at the end of the domain the streamwise velocity component has not yet reached the fully-developed state even though the surface forces are nearly constant. Moreover, the internal boundary layer is shallower than in the neutral case. Furthermore, an investigation of the turbulence kinetic energy budget shows that the buoyancy destruction term is reduced in the internal boundary layer, above which it is equal to the undisturbed (smooth wall) value. In addition, in stable conditions pollutants emitted above the urban canopy enter the canopy farther downstream due to decreased vertical mixing. Pollutants emitted below the top of the urban canopy are 85 % higher in concentration in stable conditions mostly due to decreased advection. If this is taken into account concentrations remain 17 % greater in stable conditions due to less rapid internal boundary-layer growth. Finally, it is concluded that in the first seven streets the vertical advective pollutant flux is significant, in contrast to the fully-developed case.

  19. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.

    PubMed

    Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C

    2011-09-14

    An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
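    The constraint-enforcement step can be illustrated with a generic sketch. This is not the paper's NVU integrator: it takes a Verlet-like geodesic guess and then corrects it back onto the U(x) = U0 surface along the gradient, which is the role the Lagrangian multiplier plays in the derivation. All names are illustrative.

```python
import numpy as np

def constant_U_step(x, x_prev, grad_U, U, U0, tol=1e-12, iters=60):
    """Generic constant-potential-energy step sketch: Verlet-like guess
    x_new = 2x - x_prev, then Newton iteration on a multiplier lambda so
    that U(x_new + lambda * grad_U) returns to U0."""
    x_new = 2.0 * x - x_prev
    for _ in range(iters):
        g = grad_U(x_new)
        err = U(x_new) - U0
        if abs(err) < tol:
            break
        x_new = x_new - (err / (g @ g)) * g  # Newton step for the multiplier
    return x_new
```

    For the harmonic test case U(x) = |x|²/2, the constant-U surface is a sphere and the correction is a radial projection, so the step stays exactly on the surface.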

  20. Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching

    NASA Astrophysics Data System (ADS)

    Lipka, Paula Ann

    This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow solution, and allows for solving transmission switching problems in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem often solve the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. The transmission networks in practice often have transmission lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to be able to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power---as is the case in the Polish test systems---to limits on current. The main consideration in setting line limits is temperature, which linearly relates to current. Setting limits on real or apparent power is actually a proxy for using the limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the Alternating Current Optimal Power Flow (ACOPF) problem is used to find an AC-feasible generator dispatch. In this sequential linearization, there are parameters that are set to the previous optimal solution. Additionally, to improve accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted. 
The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction corresponding to the accuracy and AC-feasibility of the solution. This linearization was tested on the IEEE and Polish systems, which range from 14 to 3375 buses and 20 to 4161 transmission lines. It had an accuracy of 0.5% or less for all but the 30-bus system. It also solved in linear time with CPLEX, while the non-linear version solved in O(n^1.11) to O(n^1.39). The sequential linearization is slower than the nonlinear formulation for smaller problems, but faster for larger problems, and its linear computational time means it would continue solving faster for larger problems. A major consideration in implementing algorithms to solve the optimal generator dispatch is ensuring that the resulting prices from the algorithm will support the market. Since the sequential linearization is linear, it is convex, its marginal values are well-defined, and there is no duality gap. The prices and settlements obtained from the sequential linearization therefore can be used to run a market. This market will include extra prices and settlements for reactive power and voltage, compared to the present-day market, which is based on real power. An advantage of this is that there is a very clear pool that can be used for reactive power/voltage support payments, while presently there is not a clear pool to take them out of. This method also reveals how valuable reactive power and voltage are at different locations, which can enable better planning of reactive resource construction. Transmission switching increases the feasible region of the generator dispatch, which means there may be a better solution than without transmission switching. Power flows on transmission lines are not directly controllable; rather, the power flows according to how it is injected and the physical characteristics of the lines. 
Changing the network topology changes the physical characteristics, which changes the flows. This means that sets of generator dispatch that may have previously been infeasible due to the flow exceeding line constraints may be feasible, since the flows will be different and may meet line constraints. However, transmission switching is a mixed integer problem, which may have a very slow solution time. For economic switching, we examine a series of heuristics. We examine the congestion rent heuristic in detail and then examine many other heuristics at a higher level. Post-contingency corrective switching aims to fix issues in the power network after a line or generator outage. In Chapter 7, we show that using the sequential linear program with corrective switching helps solve voltage and excessive flow issues. (Abstract shortened by UMI.).
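    The shrinking voltage-movement restriction described above can be illustrated with a one-variable successive-linear-programming toy, far simpler than the ACOPF current-voltage linearization: each iteration trusts the first-order Taylor model only within a radius, steps to the trust-region boundary in the descent direction, and contracts the radius so the linear model stays accurate. The function and parameters are purely illustrative.

```python
def slp_minimize(df, x0, radius=1.0, shrink=0.7, iters=40):
    """One-variable successive-linear-programming sketch: move to the
    trust-region boundary in the descent direction of the linearized
    objective, then shrink the allowed movement each iteration."""
    x = x0
    for _ in range(iters):
        g = df(x)                # gradient of the objective at x
        if g > 0:
            x -= radius          # linear model says: decrease x
        elif g < 0:
            x += radius          # linear model says: increase x
        radius *= shrink         # tighten the movement restriction
    return x
```

    Because each subproblem is linear, the sequence of subproblems is convex, which is what makes the marginal values (and hence market prices) well-defined in the dissertation's argument.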

  1. Line Segmentation in Handwritten Assamese and Meetei Mayek Script Using Seam Carving Based Algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, Chandan Jyoti; Kalita, Sanjib Kr.

    Line segmentation is a key stage in an Optical Character Recognition (OCR) system. This paper primarily concerns the problem of text-line extraction from color and grayscale manuscript pages in two major North-east Indian regional scripts, Assamese and Meetei Mayek. Line segmentation of handwritten text in these scripts is an uphill task, primarily because of their structural features and varied writing styles. In this paper, line segmentation of a document image is achieved using the seam carving technique. This approach was originally used for content-aware image resizing, but many researchers now apply seam carving to the line segmentation phase of OCR. Although the technique is language-independent, most experiments have been conducted on Arabic, Greek, German and Chinese scripts. Two types of seams are generated: medial seams approximate the orientation of each text line, and separating seams separate one line of text from another. Experiments are performed extensively over various types of documents, and detailed analysis of the evaluations shows that the algorithm performs well even for documents with multiple scripts. We also present a comparative study of the accuracy of the method over different types of data.
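    The seam computation at the heart of this approach is a classic dynamic program. A minimal sketch in the standard top-to-bottom form (for text-line separation the same recurrence would be run left-to-right, with the energy high on ink and low in the gaps between lines):

```python
import numpy as np

def min_energy_seam(energy):
    """Seam-carving dynamic program: returns one column index per row
    forming the 8-connected top-to-bottom path of minimum total energy."""
    e = np.asarray(energy, float)
    rows, cols = e.shape
    M = e.copy()  # cumulative minimum energy
    for r in range(1, rows):
        left = np.r_[np.inf, M[r - 1, :-1]]
        right = np.r_[M[r - 1, 1:], np.inf]
        M[r] += np.minimum(np.minimum(left, M[r - 1]), right)
    seam = np.empty(rows, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for r in range(rows - 2, -1, -1):           # backtrack
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam[r] = lo + int(np.argmin(M[r, lo:hi]))
    return seam
```

    A separating seam between two text lines is exactly such a minimum-energy path threading through the low-ink gap.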

  2. Prediction of novel stable Fe-V-Si ternary phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Manh Cuong; Chen, Chong; Zhao, Xin

    Genetic algorithm searches based on a cluster expansion model are performed to search for stable phases of the Fe-V-Si ternary. Here, we identify a new thermodynamically, dynamically and mechanically stable ternary phase of Fe5V2Si with 2 formula units in a tetragonal unit cell. The formation energy of this new ternary phase is -36.9 meV/atom below the current ternary convex hull. The magnetic moment of Fe in the new structure varies from -0.30 to 2.52 μB, depending strongly on the number of Fe nearest neighbors. The total magnetic moment is 10.44 μB/unit cell for the new Fe5V2Si structure, and the system is ordinarily metallic.

  3. Prediction of novel stable Fe-V-Si ternary phase

    DOE PAGES

    Nguyen, Manh Cuong; Chen, Chong; Zhao, Xin; ...

    2018-10-28

    Genetic algorithm searches based on a cluster expansion model are performed to search for stable phases of the Fe-V-Si ternary. Here, we identify a new thermodynamically, dynamically and mechanically stable ternary phase of Fe5V2Si with 2 formula units in a tetragonal unit cell. The formation energy of this new ternary phase is -36.9 meV/atom below the current ternary convex hull. The magnetic moment of Fe in the new structure varies from -0.30 to 2.52 μB, depending strongly on the number of Fe nearest neighbors. The total magnetic moment is 10.44 μB/unit cell for the new Fe5V2Si structure, and the system is ordinarily metallic.

  4. A best on-line algorithm for single machine scheduling the equal length jobs with the special chain precedence and delivery time

    NASA Astrophysics Data System (ADS)

    Gu, Cunchang; Mu, Yundong

    2013-03-01

    In this paper, we consider a single-machine on-line scheduling problem with special chain precedence and delivery times. All jobs arrive over time. Chain chain_i arrives at time r_i, and it is known beforehand that the processing and delivery times of the jobs on each chain satisfy one special condition: if job J_j^(i) is a predecessor of job J_k^(i) on chain chain_i, then p_j^(i) = p_k^(i) = p >= q_j >= q_k, i = 1, 2, ..., n, where p_j and q_j denote the processing time and the delivery time of job J_j, respectively. If an arriving job has no chain precedence, the length of the corresponding chain is 1. The objective is to minimize the time by which all jobs have been delivered. We provide an on-line algorithm with a competitive ratio of √2, and this result is the best possible.
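    The objective, the time by which all jobs are delivered, can be simulated with a simple on-line rule. The sketch below uses the classic largest-delivery-time (LDT) dispatch rule as a generic stand-in, not the paper's √2-competitive algorithm: whenever the machine is free, it starts the released job with the largest delivery time. Jobs are (release, processing, delivery) triples.

```python
def online_schedule(jobs):
    """On-line single-machine simulation minimizing max(C_j + q_j):
    whenever the machine is idle, start the released job with the largest
    delivery time (LDT rule -- illustrative, not the paper's algorithm)."""
    pending = sorted(jobs, key=lambda j: j[0])  # by release time r
    t, i, objective = 0.0, 0, 0.0
    ready = []
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= t:
            ready.append(pending[i])            # job has been released
            i += 1
        if not ready:
            t = pending[i][0]                   # idle until next release
            continue
        ready.sort(key=lambda j: -j[2])         # largest delivery time first
        r, p, q = ready.pop(0)
        t += p                                  # completion time C_j
        objective = max(objective, t + q)       # delivery completion C_j + q_j
    return objective
```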

  5. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may make the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm with the Levenberg-Marquardt method is more effective than its combination with the Gauss-Newton method.
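    The variable-projection idea can be sketched on a two-exponential model. The linear coefficients c are eliminated by linear least squares inside the residual, so the outer loop (a plain Gauss-Newton iteration with a finite-difference Jacobian here, simpler than any of the four compared algorithms) sees only the nonlinear rates. All names are illustrative.

```python
import numpy as np

def varpro_fit(t, y, alpha, iters=50):
    """Variable-projection sketch for y ~ sum_i c_i * exp(-a_i * t):
    solve for c by linear least squares inside the residual, and update
    only the nonlinear rates a with Gauss-Newton steps."""
    alpha = np.asarray(alpha, float)

    def resid(a):
        Phi = np.exp(-np.outer(t, a))             # basis matrix Phi(a)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return Phi @ c - y

    h = 1e-7
    for _ in range(iters):
        r = resid(alpha)
        J = np.empty((t.size, alpha.size))
        for j in range(alpha.size):
            e = np.zeros(alpha.size)
            e[j] = h
            J[:, j] = (resid(alpha + e) - r) / h  # forward-difference Jacobian
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        alpha = alpha - step                      # Gauss-Newton update
    Phi = np.exp(-np.outer(t, alpha))
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return alpha, c
```

    Eliminating c shrinks the search space and typically improves conditioning, which is the motivation behind all four separated algorithms compared in the paper.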

  6. Sensitivity of bandpass filters using recirculating delay-line structures

    NASA Astrophysics Data System (ADS)

    Heyde, Eric C.

    1996-12-01

    Recirculating delay lines have value notably as sensors and optical signal processors. Most useful applications depend on a high-finesse response from a network. A proof that, with given response parameters, more complex systems can produce behavior that is more stable to the effects of nonidealities than a single recirculating loop is presented.

  7. Low Contribution of PbO2-Coated Lead Service Lines to Water Lead Contamination at the Tap

    EPA Science Inventory

    To determine if field experience corroborates that formation of stable PbO2 coatings on lead service lines (LSLs) provides an effective lead contamination control strategy, lead profile sampling was undertaken at eight home kitchen taps in three US cities (Newport, Rhode Island; ...

  8. The beta Pictoris circumstellar disk. XXIV. Clues to the origin of the stable gas

    NASA Astrophysics Data System (ADS)

    Lagrange, A.-M.; Beust, H.; Mouillet, D.; Deleuil, M.; Feldman, P. D.; Ferlet, R.; Hobbs, L.; Lecavelier Des Etangs, A.; Lissauer, J. J.; McGrath, M. A.; McPhate, J. B.; Spyromilio, J.; Tobin, W.; Vidal-Madjar, A.

    1998-02-01

    GHRS high resolution spectra of {beta \\:Pictoris} were obtained to study the stable gas around this star. Several elements are detected and their abundances measured. Upper limits to the abundances of others are also measured. The data permit improved chemical analysis of the stable gas around {beta \\:Pictoris}, and yield new and more accurate estimates of the radiation pressure acting on various elements. We first analyze the data in the framework of a closed-box model. The electron density is evaluated (Neion {S}imeq10(6) cm(-3) ), which in turn implies constraints on the ionization stages of the various elements. The refractory elements in the stable gas have then standard abundances. In contrast, in this model, the lighter elements sulfur and carbon, observed in their neutral form, seem to be depleted. However several arguments, especially the strong radiation pressure, argue against a closed-box hypothesis. We therefore develop hydrodynamical simulations, taking into account the radiation pressure, to reproduce the stable features under three different hypotheses for the origin of the stable gas: stellar ejection, comet evaporation and grain evaporation. They show that a permanent production of gas is needed in order to sustain a stable absorption. In order to reproduce the observed zero velocity of the absorption features a mechanism is also needed to slow down the radial flow of matter. We show that this could be achieved by a colliding ring of neutral hydrogen farther than 0.5AU from the star. Applied to the Fe Ii\\ lines, the simulations constrain the temperature (Tion {S}imeq1500-2000K) and the velocity dispersion (ion {S}imeq2kms(-1) ) in the gaseous medium. When applied to Ca Ii\\ and to other UV lines, they test the chemical composition of the parent source of gas, which is found to have standard abundances in refractory elements. The gas production rate is ion {S}imeq 10(-16}M_{sun) yr(-1) . 
    This description is the first consistent explanation for these long-lived stable absorptions observed for a large number of lines arising from a variety of energy levels in different chemical elements. It raises the question of the origin of the parent material, together with its composition and dynamics, and establishes a link between this gaseous component and the whole circumstellar system. Based on observations collected with the Hubble Space Telescope.

  9. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm requires only half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.

  10. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE PAGES

    Brabec, Jiri; Lin, Lin; Shao, Meiyue; ...

    2015-10-06

    We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm requires only half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.

  11. GENERATION OF TWO STABLE CELL LINES THAT EXPRESS HER-ALPHA OR HER-ALPHA AND -BETA AND FIREFLY LUCIFERASE GENES FOR ENDOCRINE SCREENING

    EPA Science Inventory

    Generation of Two Stable Cell Lines that Express hERα or
    hERα and β and Firefly Luciferase Genes for Endocrine Screening

    K.L. Bobseine*1, W.R. Kelce2, P.C. Hartig*1, and L.E. Gray, Jr.1

    1USEPA, NHEERL, Reproductive Toxicology Division, RTP, NC, 2Searle, Reprod...

  12. Bilevel thresholding of sliced image of sludge floc.

    PubMed

    Chu, C P; Lee, D J

    2004-02-15

    This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to the evaluation on the luminescence-inverted images and fractal curves (quadric Koch curve and Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
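As a reference for the histogram-based approach, a minimal sketch of Otsu's method (choose the threshold maximizing between-class variance of an 8-bit histogram); the synthetic bimodal image is illustrative only, not the sludge-floc data:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance w0 * w1 * (mu0 - mu1)^2."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal test image: dark background with one bright square blob.
rng = np.random.default_rng(1)
img = np.clip(rng.normal(60, 10, (64, 64)), 0, 255)
img[20:40, 20:40] = np.clip(rng.normal(200, 10, (20, 20)), 0, 255)
t = otsu_threshold(img)
```

On a well-separated bimodal histogram like this one, the threshold lands in the gap between the two modes and the binarized foreground area matches the blob.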

  13. An algorithm for the numerical solution of linear differential games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polovinkin, E S; Ivanov, G E; Balashov, M V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, together with estimates of the errors resulting from the approximation of the game sets by polyhedra.

  14. A multi-objective genetic algorithm for a mixed-model assembly U-line balancing type-I problem considering human-related issues, training, and learning

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed

    2016-12-01

    Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers balancing a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part is related to the balancing problem; its objectives are minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
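Both evolutionary algorithms mentioned rank candidate line balances by Pareto dominance. A minimal sketch of that core step, on hypothetical two-objective points (e.g. cycle time and labor cost, both minimized):

```python
def pareto_front(points):
    """Indices of non-dominated points under minimization of every
    objective: a point is dominated if some other point is <= in all
    objectives and differs in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical (cycle time, labor cost) pairs for five candidate balances.
pts = [(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]
front = pareto_front(pts)  # (4, 6) and (6, 6) are dominated
```

NSGA-II applies this ranking recursively (peeling off successive fronts) and adds a crowding-distance tie-breaker; the sketch shows only the first front.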

  15. Optimized Sleeping Beauty transposons rapidly generate stable transgenic cell lines.

    PubMed

    Kowarz, Eric; Löscher, Denise; Marschalek, Rolf

    2015-04-01

    Stable gene expression in mammalian cells is a prerequisite for many in vitro and in vivo experiments. However, either the integration of plasmids into mammalian genomes or the use of retro-/lentiviral systems have intrinsic limitations. The use of transposable elements, e.g. the Sleeping Beauty system (SB), circumvents most of these drawbacks (integration sites, size limitations) and allows the quick generation of stable cell lines. The integration process of SB is catalyzed by a transposase and the handling of this gene transfer system is easy, fast and safe. Here, we report our improvements made to the existing SB vector system and present two new vector types for robust constitutive or inducible expression of any gene of interest. Both types are available in 16 variants with different selection marker (puromycin, hygromycin, blasticidin, neomycin) and fluorescent protein expression (GFP, RFP, BFP) to fit most experimental requirements. With this system it is possible to generate cell lines from stable transfected cells quickly and reliably in a medium-throughput setting (three to five days). Cell lines robustly express any gene-of-interest, either constitutively or tightly regulated by doxycycline. This allows many laboratory experiments to speed up generation of data in a rapid and robust manner. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the back projection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via adoption of virtual PI-line segments. Unfortunately, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at a possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) region-of-interest (ROI) reconstruction.

  17. Validating clustering of molecular dynamics simulations using polymer models.

    PubMed

    Phillips, Joshua L; Colvin, Michael E; Newsam, Shawn

    2011-11-14

    Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. 
To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers.
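A minimal two-cluster version of spectral clustering conveys the core idea (Gaussian affinity, unnormalized graph Laplacian, sign split of the Fiedler vector); the authors' pipeline for polymer conformations is more involved, so this is only a sketch on synthetic 2-D blobs:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Two-way spectral clustering: build a Gaussian affinity graph,
    form the Laplacian L = D - W, and split points by the sign of the
    eigenvector of the second-smallest eigenvalue (the Fiedler vector)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Two well-separated Gaussian blobs of 20 points each.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels = spectral_bipartition(X)
```

Because the between-blob affinities are vanishingly small, the Fiedler vector is nearly piecewise constant and its sign recovers the two groups exactly.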

  18. Validating clustering of molecular dynamics simulations using polymer models

    PubMed Central

    2011-01-01

    Background Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. 
To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers. PMID:22082218

  19. ChIP-PaM: an algorithm to identify protein-DNA interaction using ChIP-Seq data.

    PubMed

    Wu, Song; Wang, Jianmin; Zhao, Wei; Pounds, Stanley; Cheng, Cheng

    2010-06-03

    ChIP-Seq is a powerful tool for identifying the interaction between genomic regulators and their bound DNAs, especially for locating transcription factor binding sites. However, high cost and high rate of false discovery of transcription factor binding sites identified from ChIP-Seq data significantly limit its application. Here we report a new algorithm, ChIP-PaM, for identifying transcription factor target regions in ChIP-Seq datasets. This algorithm makes full use of a protein-DNA binding pattern by capitalizing on three lines of evidence: 1) the tag count modelling at the peak position, 2) pattern matching of a specific tag count distribution, and 3) motif searching along the genome. A novel data-based two-step eFDR procedure is proposed to integrate the three lines of evidence to determine significantly enriched regions. Our algorithm requires no technical controls and efficiently discriminates falsely enriched regions from regions enriched by true transcription factor (TF) binding on the basis of ChIP-Seq data only. An analysis of real genomic data is presented to demonstrate our method. In a comparison with other existing methods, we found that our algorithm provides more accurate binding site discovery while maintaining comparable statistical power.

  20. Evaluation of an on-line methodology for measuring volatile organic compounds (VOC) fluxes by eddy-covariance with a PTR-TOF-Qi-MS

    NASA Astrophysics Data System (ADS)

    Loubet, Benjamin; Buysse, Pauline; Lafouge, Florence; Ciuraru, Raluca; Decuq, Céline; Zurfluh, Olivier

    2017-04-01

    Field scale flux measurements of volatile organic compounds (VOC) are essential for improving our knowledge of VOC emissions from ecosystems. Many VOCs are emitted from and deposited to ecosystems. Crops, which represent more than 50% of French terrestrial surfaces, are especially poorly characterized. In this study, we evaluate a new on-line methodology for measuring VOC fluxes by eddy covariance with a PTR-Qi-TOF-MS. Measurements were performed at the ICOS FR-GRI site over a crop using a 30 m long, high flow rate sampling line and an ultrasonic anemometer. A Labview program was specially designed for acquisition and on-line covariance calculation: whole mass spectra (around 240,000 channels) were acquired on-line at 10 Hz and stored in a temporary memory. Every 5 minutes, the spectra were mass-calibrated and normalized by the primary ion peak integral at 10 Hz. The mass spectra peaks were then retrieved from the 5-min averaged spectra by withdrawing the baseline, determining the resolution and using a multiple-peak detection algorithm. In order to optimize the peak detection algorithm for the covariance, we determined the covariances as the integrals of the peaks of the vertical-air-velocity-fluctuation weighted averaged spectra. In other terms, we calculate <w'(t) Sp(t + lag)>, where w is the vertical component of the air velocity, Sp is the spectrum, t is time, lag is the decorrelation lag time and <.> denotes an average. The lag time was determined as the decorrelation time between w and the primary ion (at mass 21.022), which integrates the contribution of all reactions of VOC and water with the primary ion. Our algorithm was evaluated by comparing the exchange velocity of water vapor measured by an open path absorption spectroscopy instrument and the water cluster measured with the PTR-Qi-TOF-MS. The influence of the algorithm parameters and lag determination is discussed. This study was supported by the ADEME-CORTEA COV3ER project (http://www6.inra.fr/cov3er).
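The lag-time step described above (finding the lag that maximizes the cross-covariance of the vertical-wind fluctuation with a scalar trace) can be sketched as follows; the synthetic 5-sample delay stands in for the sampling-line transit time and is not from the study:

```python
import numpy as np

def lagged_covariance(w, s, max_lag):
    """Cross-covariance <w'(t) s'(t + lag)> over a range of lags; the flux
    is read off at the lag of maximum absolute covariance."""
    wp = w - w.mean()
    sp = s - s.mean()
    n = len(w)
    lags = list(range(-max_lag, max_lag + 1))
    cov = []
    for lag in lags:
        if lag >= 0:
            cov.append(np.mean(wp[:n - lag] * sp[lag:]))
        else:
            cov.append(np.mean(wp[-lag:] * sp[:n + lag]))
    cov = np.array(cov)
    best = lags[int(np.argmax(np.abs(cov)))]
    return best, cov

# Synthetic trace: the scalar is the vertical wind delayed by 5 samples,
# mimicking the transit time through a sampling line, plus sensor noise.
rng = np.random.default_rng(3)
w = rng.standard_normal(2000)
s = np.roll(w, 5) + 0.1 * rng.standard_normal(2000)
best_lag, cov = lagged_covariance(w, s, 20)
```

The covariance peak sits at the imposed 5-sample delay, which is exactly how the decorrelation lag against the primary ion is located in the described workflow.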

  1. Using Machine Learning for Advanced Anomaly Detection and Classification

    NASA Astrophysics Data System (ADS)

    Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.

    2016-09-01

    Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect and potentially classify changes in activity, or a series of activities, by utilizing large amounts of data, sometimes even seemingly-unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes development of advanced algorithms to deliver significant improvements in characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that are collecting photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., is providing the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio), and to correlate observed changes to potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real-time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher fidelity I&W. This paper will discuss the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and of unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementations of these algorithms are explored. 
    Results of applying these algorithms to EspOC data both in an off-line "pattern of life" analysis and on-line in real-time, as data is collected, will be presented. Finally, future work in applying ML for SSA will be discussed.

  2. Automatic image analysis and spot classification for detection of pathogenic Escherichia coli on glass slide DNA microarrays

    USDA-ARS?s Scientific Manuscript database

    A computer algorithm was created to inspect scanned images from DNA microarray slides developed to rapidly detect and genotype E. Coli O157 virulent strains. The algorithm computes centroid locations for signal and background pixels in RGB space and defines a plane perpendicular to the line connect...

  3. An accurate algorithm to calculate the Hurst exponent of self-similar processes

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.; Román-Sánchez, I. M.

    2014-06-01

    In this paper, we introduce a new approach which generalizes the GM2 algorithm (introduced in Sánchez-Granero et al. (2008) [52]) as well as fractal dimension algorithms (FD1, FD2 and FD3) (first appeared in Sánchez-Granero et al. (2012) [51]), providing an accurate algorithm to calculate the Hurst exponent of self-similar processes. We prove that this algorithm performs properly in the case of short time series when fractional Brownian motions and Lévy stable motions are considered. We conclude the paper with a dynamic study of the Hurst exponent evolution in the S&P500 index stocks.
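A back-of-the-envelope Hurst estimator based on the self-similar scaling of increments (std of τ-increments ∝ τ^H) gives a feel for what such algorithms compute; this generic variance-scaling sketch is not the GM2 or FD algorithms of the paper:

```python
import numpy as np

def hurst_exponent(series, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent from the scaling law
    std(X(t + tau) - X(t)) ~ tau^H via a log-log regression."""
    series = np.asarray(series, dtype=float)
    log_tau, log_sd = [], []
    for tau in scales:
        inc = series[tau:] - series[:-tau]
        log_tau.append(np.log(tau))
        log_sd.append(np.log(inc.std()))
    H, _ = np.polyfit(log_tau, log_sd, 1)
    return H

# Ordinary Brownian motion is self-similar with H = 0.5.
rng = np.random.default_rng(4)
walk = np.cumsum(rng.standard_normal(20000))
H = hurst_exponent(walk)
```

For fractional Brownian motion or Lévy stable motion the same scaling fit would recover the corresponding self-similarity exponent, which is the setting the paper's accuracy claims address.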

  4. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching

    PubMed Central

    Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng

    2017-01-01

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images. PMID:28885547

  5. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.

    PubMed

    Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng

    2017-09-08

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.
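The dynamic-programming idea behind seam-line search can be illustrated with the classic minimal-cost 8-connected path through a cost map; the paper's dual-channel energy accumulation refines this basic scheme, and the toy cost map below is illustrative only:

```python
import numpy as np

def min_cost_seam(cost):
    """Top-to-bottom 8-connected path of minimum accumulated cost through
    a 2-D cost map: forward DP accumulation, then backtracking."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, acc[i - 1, :-1]]
        right = np.r_[acc[i - 1, 1:], np.inf]
        acc[i] += np.minimum(np.minimum(left, acc[i - 1]), right)
    # Backtrack from the cheapest cell in the last row.
    seam = [int(np.argmin(acc[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam.append(lo + int(np.argmin(acc[i, lo:hi])))
    return seam[::-1]

cost = np.ones((5, 7))
cost[:, 3] = 0.0          # a cheap column the seam should follow
seam = min_cost_seam(cost)
```

In stitching, the cost map would encode per-pixel dissimilarity in the overlap region, so the recovered seam passes where the two images agree best.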

  6. Determination of inorganic arsenic in algae using bromine halogenation and on-line nonpolar solid phase extraction followed by hydride generation atomic fluorescence spectrometry

    USDA-ARS?s Scientific Manuscript database

    Accurate, stable and fast analysis of toxic inorganic arsenic (iAs) in complicated and arsenosugar-rich algae matrix is always a challenge. Herein, a novel analytical method for iAs in algae was reported, using bromine halogenation and on-line nonpolar solid phase extraction (SPE) followed by hydrid...

  7. Accurate wavelength measurements of a putative standard for near-infrared diffuse reflection spectrometry.

    PubMed

    Isaksson, Tomas; Yang, Husheng; Kemeny, Gabor J; Jackson, Richard S; Wang, Qian; Alam, M Kathleen; Griffiths, Peter R

    2003-02-01

    The diffuse reflection (DR) spectrum of a sample consisting of a mixture of rare earth oxides and talc was measured at 2 cm-1 resolution, using five different accessories installed on five different Fourier transform near-infrared (FT-NIR) spectrometers from four manufacturers. Peak positions for 37 peaks were determined using two peak-picking algorithms: center-of-mass and polynomial fitting. The wavenumber of the band center reported by either of these techniques was sensitive to the slope of the baseline, and so the baseline of the spectra was corrected using either a polynomial fit or conversion to the second derivative. Significantly different results were obtained with one combination of spectrometer and accessory than the others. Apparently, the beam path through the interferometer and DR accessory was different for this accessory than for any of the other measurements, causing a severe degradation of the resolution. Spectra measured on this instrument were removed as outliers. For measurements made on FT-NIR spectrometers, it is shown that it is important to check the resolution at which the spectrum has been measured using lines in the vibration-rotation spectrum of atmospheric water vapor and to specify the peak-picking and baseline-correction algorithms that are used to process the measured spectra. The variance between the results given by the four different methods of peak-picking and baseline correction was substantially larger than the variance between the remaining five measurements. Certain bands were found to be more suitable than others for use as wavelength standards. A band at 5943.13 cm-1 (1682.62 nm) was found to be the most stable band between the four methods and the six measurements. A band at 5177.04 cm-1 (1931.61 nm) has the highest precision between different measurements when polynomial baseline correction and polynomial peak-picking algorithms are used.
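Of the two peak-picking strategies compared, the center-of-mass estimator is the simpler; a sketch on a synthetic Gaussian band placed at the paper's 5943.13 cm-1 position (the crude min-subtraction baseline step is an assumption for illustration, not the authors' polynomial or second-derivative corrections):

```python
import numpy as np

def center_of_mass_peak(x, y):
    """Intensity-weighted mean (center of mass) of a peak; as the paper
    notes, the result is sensitive to baseline slope, hence the need for
    baseline correction before peak picking."""
    y = y - y.min()          # crude stand-in for baseline removal
    return float((x * y).sum() / y.sum())

# Synthetic Gaussian band at 5943.13 cm-1 with a 0.4 cm-1 width.
x = np.linspace(5940.0, 5946.0, 121)
y = np.exp(-0.5 * ((x - 5943.13) / 0.4) ** 2)
pos = center_of_mass_peak(x, y)
```

Adding a sloped baseline to `y` before correction would visibly bias `pos`, which is the effect the paper controls for by specifying the baseline-correction and peak-picking algorithms.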

  8. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    PubMed

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select data points which belong to dense regions and which are adequately separated in feature space as the initial centroids. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on their performance on ten cancer gene expression datasets. The proposed algorithm has shown better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
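The seeding idea (dense, well-separated initial centroids chosen deterministically) can be sketched as below; the density measure, the `radius` parameter, and the greedy selection are simplifications for illustration, not the authors' exact procedure:

```python
import numpy as np

def density_based_centroids(X, k, radius):
    """Deterministic seeding sketch: rank points by local density (number
    of neighbors within `radius`) and greedily keep high-density points
    that are at least `radius` apart from every already-chosen seed."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    density = (d < radius).sum(axis=1)
    chosen = []
    for i in np.argsort(-density):      # densest first; ties by index
        if all(d[i, j] >= radius for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return X[chosen]

# Three well-separated blobs; the seeds should land one per blob.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.2, (30, 2)) for c in ((0, 0), (4, 0), (0, 4))])
seeds = density_based_centroids(X, 3, radius=1.5)
```

Because the procedure involves no random choice, repeated runs on the same data produce identical seeds, and hence identical K-Means results, which is the reproducibility property the abstract emphasizes.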

  9. An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks

    PubMed Central

    Penumalli, Chakradhar; Palanichamy, Yogesh

    2015-01-01

    A new energy efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem (BSP) while at the same time considering the node's remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks (MANETs) or Wireless Sensor Networks (WSNs). The CDS of a graph representing a network has a significant impact on an efficient design of routing protocols in wireless networks. Here the CDS is a distributed algorithm with activity scheduling based on the unit disk graph (UDG). The node's mobility and residual energy (RE) are considered as parameters in the construction of a stable, optimal, energy efficient CDS. The performance is evaluated at various node densities, transmission ranges, and mobility rates. The theoretical analysis and simulation results of this algorithm are also presented and show improved performance. PMID:26221627
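A bare-bones greedy connected-dominating-set construction conveys the backbone idea; the paper additionally weighs residual energy and mobility in its selection rule, which this sketch omits:

```python
def greedy_cds(adj):
    """Greedy connected dominating set: start from the highest-degree node
    and repeatedly add the frontier node that dominates the most
    not-yet-dominated vertices (a heuristic, no optimality guarantee)."""
    nodes = set(adj)
    start = max(sorted(nodes), key=lambda v: len(adj[v]))
    cds = {start}
    dominated = {start} | set(adj[start])
    while dominated != nodes:
        # Frontier: neighbors of the current backbone, keeping it connected.
        frontier = {u for v in cds for u in adj[v]} - cds
        best = max(sorted(frontier),
                   key=lambda u: len(set(adj[u]) - dominated))
        cds.add(best)
        dominated |= {best} | set(adj[best])
    return cds

# Path graph 0-1-2-3-4: the interior nodes form the natural backbone.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
cds = greedy_cds(adj)
```

Every node is then either in the backbone or adjacent to it, so broadcasts relayed only by CDS members reach the whole network, which is how a CDS mitigates the broadcast storm problem.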

  10. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.

    PubMed

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet

    2018-04-23

    In past years, there has been significant progress in the field of indoor robot localization. To precisely recover the position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we considered a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. As compared to the traditional ICP, the point cloud is first processed to extract corners and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.
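At the heart of every ICP iteration is a closed-form rigid alignment of matched point pairs. A sketch of that SVD (Kabsch) step on synthetic 2-D data follows; the feature extraction, matching, and confidence weighting of WP-ICP are not reproduced here:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form (SVD/Kabsch) least-squares rotation R and translation t
    mapping matched points P onto Q -- the alignment step inside each
    ICP iteration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T   # guard against reflections
    t = cq - R @ cp
    return R, t

# Synthetic matched sets: a known rotation by 0.3 rad plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
rng = np.random.default_rng(6)
P = rng.standard_normal((50, 2))
Q = P @ R_true.T + np.array([1.0, -2.0])
R, t = best_rigid_transform(P, Q)
```

With exact correspondences the transform is recovered in one step; in practice ICP alternates this step with nearest-candidate matching (here restricted to corner-corner and line-line pairs) until the pose converges.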

  11. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm

    PubMed Central

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A.; Ravankar, Abhijeet

    2018-01-01

    In past years, there has been significant progress in the field of indoor robot localization. To precisely recover their position, robots usually rely on multiple on-board sensors, which increases both overall system cost and computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting the surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort while preserving localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. In contrast to traditional ICP, the point cloud is first processed to extract corner and line features before point registration is applied. Points labeled as corners are then matched only with the corner candidates, and points labeled as lines only with the line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out in both clean and perturbed environments. The results show that the proposed method significantly reduces computational effort while preserving localization precision. PMID:29690624

  12. Productive Information Foraging

    NASA Technical Reports Server (NTRS)

    Furlong, P. Michael; Dille, Michael

    2016-01-01

    This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective of the algorithm is to free robot scientists from extensive preliminary site investigation while still being able to collect meaningful data. We simulate a common form of exploration task for an autonomous robot involving sampling the environment at various locations and compare performance with a simpler existing algorithm that is also denied global information. The result of the experiment shows that the new algorithm has a statistically significant improvement in performance with a significant effect size for a range of costs for taking sampling actions.

  13. Flattening maps for the visualization of multibranched vessels.

    PubMed

    Zhu, Lei; Haker, Steven; Tannenbaum, Allen

    2005-02-01

    In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided.
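
    The first algorithm's idea of flattening by minimizing a Dirichlet functional with a pinned boundary can be illustrated with a graph-harmonic (Tutte-style) sketch: boundary vertices are fixed and each interior vertex is solved to be the average of its neighbors. The paper's finite element method uses cotangent weights on the triangulation to approach conformality; uniform weights are an illustrative simplification.

```python
import numpy as np

def harmonic_flatten(n_vertices, edges, boundary_pos):
    """Flatten a disk-like mesh by minimizing the graph Dirichlet energy:
    boundary vertices are pinned and each interior vertex becomes the
    average of its neighbors (uniform weights -- a Tutte embedding; the
    cited paper's finite element discretization uses cotangent weights)."""
    adj = {i: set() for i in range(n_vertices)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    interior = [v for v in range(n_vertices) if v not in boundary_pos]
    idx = {v: k for k, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    rhs = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(adj[v])
        for u in adj[v]:
            if u in boundary_pos:
                rhs[idx[v]] += boundary_pos[u]   # pinned neighbor
            else:
                A[idx[v], idx[u]] -= 1.0         # free neighbor
    pos = dict(boundary_pos)
    if interior:
        sol = np.linalg.solve(A, rhs)            # solve the discrete Laplace system
        for v in interior:
            pos[v] = tuple(sol[idx[v]])
    return pos
```

    For a hexagonal boundary pinned to the unit circle with one interior vertex connected to all six, the interior vertex lands at the centroid, i.e., the origin.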

  14. Flattening Maps for the Visualization of Multibranched Vessels

    PubMed Central

    Zhu, Lei; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    In this paper, we present two novel algorithms which produce flattened visualizations of branched physiological surfaces, such as vessels. The first approach is a conformal mapping algorithm based on the minimization of two Dirichlet functionals. From a triangulated representation of vessel surfaces, we show how the algorithm can be implemented using a finite element technique. The second method is an algorithm which adjusts the conformal mapping to produce a flattened representation of the original surface while preserving areas. This approach employs the theory of optimal mass transport. Furthermore, a new way of extracting center lines for vessel fly-throughs is provided. PMID:15707245

  15. Cell-veto Monte Carlo algorithm for long-range systems.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2016-09-01

    We present a rigorous, efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
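
    The factorized Metropolis filter that the cell-veto scheme builds on can be sketched as follows (with inverse temperature beta = 1). Each pair factor independently vetoes or accepts the move, so the total acceptance probability factorizes and is never larger than the standard Metropolis probability. The cell bookkeeping that lets the cited algorithm evaluate a move in O(1) operations is omitted, and the per-factor energy changes are taken as given.

```python
import math, random

def factorized_metropolis_accept(delta_Es, rng):
    """Factorized Metropolis filter: the move is accepted only if no
    pair factor vetoes it.  Each factor i accepts with probability
    min(1, exp(-dE_i))."""
    for dE in delta_Es:
        if rng.random() >= math.exp(-max(0.0, dE)):
            return False          # this factor vetoes the move
    return True

def factorized_acceptance_probability(delta_Es):
    """p = prod_i min(1, exp(-dE_i)) = exp(-sum_i max(0, dE_i));
    never larger than the standard Metropolis min(1, exp(-sum_i dE_i))."""
    return math.exp(-sum(max(0.0, dE) for dE in delta_Es))
```

    Because each positive factor contributes its full energy change to the exponent, the factorized filter rejects at least as often as standard Metropolis; the cell-veto scheme exploits this structure by bounding the veto rate of distant cells.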

  16. Detection of suspicious pain regions on a digital infrared thermal image using the multimodal function optimization.

    PubMed

    Lee, Junghoon; Lee, Joosung; Song, Sangha; Lee, Hyunsook; Lee, Kyoungjoung; Yoon, Youngro

    2008-01-01

    Automatic detection of suspicious pain regions is very useful in medical digital infrared thermal imaging research. To detect such regions, we use the SOFES (Survival Of the Fittest kind of Evolution Strategy) algorithm, one of the multimodal function optimization methods. We apply the algorithm to common conditions such as the diabetic foot, degenerative arthritis, and varicose veins. The SOFES algorithm can detect hot spots and warm lines such as veins and, over one hundred trials, converged quickly.
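
    The abstract does not specify SOFES in detail. As a generic illustration of multimodal function optimization by evolution strategies, the sketch below runs a simple (1+1)-ES from several starting points and keeps one representative per peak; the restart-and-cluster step is a stand-in for whatever niching SOFES actually uses.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=200, rng=None):
    """(1+1)-evolution strategy maximizing f from x0: Gaussian mutation,
    greedy survivor selection, slowly annealed step size."""
    rng = rng or random.Random(0)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.gauss(0.0, sigma)
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        sigma *= 0.99
    return x

def multimodal_peaks(f, starts, tol=0.5):
    """Restart the ES from several points and cluster the results,
    keeping one representative per peak."""
    peaks = []
    for i, x0 in enumerate(starts):
        x = one_plus_one_es(f, x0, rng=random.Random(i))
        if all(abs(x - p) > tol for p in peaks):
            peaks.append(x)
    return sorted(peaks)
```

    On a bimodal objective with maxima near -2 and +2, the restarts recover both peaks.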

  17. Off-line data reduction

    NASA Astrophysics Data System (ADS)

    Gutowski, Marek W.

    1992-12-01

    Presented is a novel heuristic algorithm, based on fuzzy set theory, that allows significant off-line data reduction. Given equidistant data, the algorithm discards some points while retaining others with their original values. The fraction of original data points retained is typically 1/6 of the initial value. The reduced data set preserves all the essential features of the input curve, and the original information can be reconstructed to a high degree of precision by means of natural cubic splines, rational cubic splines, or even linear interpolation. The main fields of application are non-linear data fitting (substantial savings in CPU time) and graphics (storage space savings).
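
    The paper's fuzzy-set heuristic is not specified in the abstract. The sketch below illustrates the general scheme it describes: discard equidistant samples that interpolation between the retained neighbors can reproduce within a tolerance, keep the rest with their original values, and reconstruct by linear interpolation. The greedy error bound is an assumption for illustration, not the paper's actual selection rule.

```python
def reduce_points(ys, tol):
    """Greedily discard equidistant samples that linear interpolation
    between the retained neighbors reproduces within `tol`.  Returns the
    indices of the retained points (which keep their original values)."""
    keep = [0]
    i = 0
    while i < len(ys) - 1:
        # extend the segment [i, j] while every skipped point fits the chord
        j = i + 1
        while j + 1 < len(ys):
            ok = all(
                abs(ys[k] - (ys[i] + (ys[j + 1] - ys[i]) * (k - i) / (j + 1 - i))) <= tol
                for k in range(i + 1, j + 1)
            )
            if not ok:
                break
            j += 1
        keep.append(j)
        i = j
    return keep

def reconstruct(ys, keep):
    """Rebuild the full series from retained indices by linear interpolation."""
    out = list(ys)
    for a, b in zip(keep, keep[1:]):
        for k in range(a + 1, b):
            out[k] = ys[a] + (ys[b] - ys[a]) * (k - a) / (b - a)
    return out
```

    A piecewise-linear input collapses to its breakpoints, and the reconstruction error stays within the tolerance.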

  18. Airborne LIDAR point cloud tower inclination judgment

    NASA Astrophysics Data System (ADS)

    liang, Chen; zhengjun, Liu; jianguo, Qian

    2016-11-01

    Inclined transmission line towers pose a serious threat to the safe operation of power lines; judging tower inclination effectively, quickly, and accurately therefore plays a key role in a power supply company's safety and security of supply. In recent years, with the development of unmanned aerial vehicles (UAVs), high-precision 3D remote sensing systems combining a UAV-mounted laser scanner, GPS, and inertial navigation have been used increasingly in the electric power sector. Airborne LiDAR point clouds can visually present the complete three-dimensional spatial information of power line corridors, including line facilities and equipment, terrain, and trees. At present, no established algorithm exists for judging tower inclination from LiDAR point clouds. In this paper, tower bases are extracted from existing power line corridor data and, based on an analysis of tower shape characteristics, a method combining vertical stratification with a convex hull algorithm is developed; for dense and for sparse point cloud towers, two different methods are used to judge inclination, and the results show high reliability.
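
    The vertical stratification idea can be sketched as follows, under the assumption that the tower has already been segmented from the corridor cloud: slice the cloud into horizontal layers, take each layer's centroid, fit a line through the centroids, and measure its angle from the vertical. The convex hull step and the separate method for sparse clouds mentioned in the abstract are omitted.

```python
import numpy as np

def tower_inclination_deg(points, n_layers=10):
    """Estimate tower tilt by vertical stratification: per-layer centroids,
    principal axis through the centroids via SVD, angle from vertical."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_layers + 1)
    centroids = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        layer = points[(z >= lo) & (z <= hi)]
        if len(layer):
            centroids.append(layer.mean(axis=0))
    C = np.array(centroids)
    # principal axis of the centroid chain
    _, _, Vt = np.linalg.svd(C - C.mean(axis=0))
    axis = Vt[0]
    cos_tilt = abs(axis[2]) / np.linalg.norm(axis)
    return np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0)))
```

    On a synthetic tower tilted 5 degrees from vertical with small radial noise, the estimate recovers the tilt closely.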

  19. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially in stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main obstacles to their penetration of the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research activity focuses on the development of robust and generalizable diagnostic techniques aimed at detecting and isolating faults within the entire system (i.e. the SOFC stack and the balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent lifetime increase and maintenance cost reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.
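
    Fault isolation via a Fault Signature Matrix can be sketched as a row lookup: the observed binary symptom vector is matched against each fault's expected signature. The matrix, residuals, and fault names below are invented for illustration and are not the actual FSM of the cited SOFC system.

```python
def isolate_fault(fsm, symptoms):
    """Return the faults whose signature matches the observed binary
    symptom vector.  An empty list means no known fault matches; more
    than one entry means the symptoms do not discriminate the faults."""
    return [fault for fault, signature in fsm.items() if signature == symptoms]

# Hypothetical FSM: rows are faults, columns are monitored residuals
# (e.g. stack voltage low, air flow low, outlet temperature high).
FSM = {
    "air_blower_degradation": [1, 1, 0],
    "fuel_leakage":           [1, 0, 0],
    "reformer_fault":         [1, 0, 1],
}
```

    In practice the matrix is derived from a Fault Tree Analysis, as the abstract notes, and refined with fault simulations so that the signatures discriminate the faults as sharply as possible.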

  20. On Gamma Ray Instrument On-Board Data Processing Real-Time Computational Algorithm for Cosmic Ray Rejection

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.

    2016-01-01

    Richard O. Duda and Peter E. Hart of the Stanford Research Institute described in [1] a recurring problem in computer image processing: the detection of straight lines in digitized images, i.e., detecting the presence of groups of collinear or almost collinear figure points. The problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points; however, the computation required for an image of n = N×M points is approximately proportional to n², i.e., O(n²), which becomes prohibitive for large images or when the data processing cadence is in milliseconds. Rosenfeld [2] described an ingenious method due to Hough [3] that replaces the original problem of finding collinear points by the mathematically equivalent problem of finding concurrent lines. The method transforms each figure point into a straight line in a parameter space; Hough chose the familiar slope-intercept parameters, so his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. Many other methods have been proposed for similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms employing artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are too slow for large images that require a processing cadence of a few dozen milliseconds (~50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma-ray measurements contaminated by an overwhelming number of cosmic ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma-ray surveys employ large detector readout planes that register multitudes of cosmic ray interference events alongside sparse science gamma-ray event trace projections. 
The AdEPT science of interest lies in the gamma-ray events, and the problem is to detect and reject the much more voluminous cosmic ray projections so that the remaining science data can be telemetered to the ground over the constrained communication link. The state of the art in cosmic ray detection and rejection does not provide an adequate computational solution. This paper presents a novel approach to AdEPT on-board data processing, which is burdened by the CR-detection bottleneck. It introduces the data processing object, demonstrates object segmentation and distribution for processing among many processing elements (PEs), and presents a solution algorithm for the processing bottleneck: the CR-Algorithm. The algorithm is based on the a priori knowledge that a CR pierces the entire instrument pressure vessel; this phenomenon is also the basis for a straightforward CR simulator that allows performance testing of the CR-Algorithm. Parallel processing of the readout image's (2(N+M) - 4) peripheral voxels detects all CRs, resulting in O(n) computational complexity. The algorithm's near real-time performance makes AdEPT-class spaceflight instruments feasible.
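
    The Hough transform discussed in this record can be sketched in a few lines using the Duda-Hart rho-theta parameterization: every figure point votes for all discretized lines through it, and the accumulator peak identifies the dominant line in O(n · n_theta) rather than the O(n²) of all-pairs testing. This is the classic transform only, not the paper's CR-Algorithm.

```python
import math
from collections import Counter

def hough_peak(points, n_theta=180, rho_step=1.0):
    """Duda-Hart rho-theta Hough transform: each point (x, y) votes for
    every discretized line rho = x*cos(theta) + y*sin(theta) through it;
    the accumulator peak is the dominant line."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)] += 1
    (rho_bin, t), votes = acc.most_common(1)[0]
    return rho_bin * rho_step, math.pi * t / n_theta, votes
```

    For ten points on the horizontal line y = 5 plus two outliers, the peak bin collects exactly the ten collinear votes at rho = 5 and theta near pi/2.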

Top