Sample records for pattern differentiation algorithm

  1. Genes@Work: an efficient algorithm for pattern discovery and multivariate feature selection in gene expression data.

    PubMed

    Lepre, Jorge; Rice, J Jeremy; Tu, Yuhai; Stolovitzky, Gustavo

    2004-05-01

    Despite the growing literature devoted to finding differentially expressed genes in assays probing different tissue types, little attention has been paid to the combinatorial nature of feature selection inherent to large, high-dimensional gene expression datasets. New flexible data analysis approaches capable of searching relevant subgroups of genes and experiments are needed to understand multivariate associations of gene expression patterns with observed phenotypes. We present in detail a deterministic algorithm to discover patterns of multivariate gene associations in gene expression data. The patterns discovered are differential with respect to a control dataset. The algorithm is exhaustive and efficient, reporting all existent patterns that fit a given input parameter set while avoiding enumeration of the entire pattern space. The value of the pattern discovery approach is demonstrated by finding a set of genes that differentiate between two types of lymphoma. Moreover, these genes are found to behave consistently in an independent dataset produced in a different laboratory using different arrays, thus validating the genes selected using our algorithm. We show that the genes deemed significant in terms of their multivariate statistics will be missed using other methods. Our set of pattern discovery algorithms including a user interface is distributed as a package called Genes@Work. This package is freely available to non-commercial users and can be downloaded from our website (http://www.research.ibm.com/FunGen).

  2. Usefulness of magnifying endoscopy with narrow-band imaging for diagnosis of depressed gastric lesions

    PubMed Central

    SUMIE, HIROAKI; SUMIE, SHUJI; NAKAHARA, KEITA; WATANABE, YASUTOMO; MATSUO, KEN; MUKASA, MICHITA; SAKAI, TAKESHI; YOSHIDA, HIKARU; TSURUTA, OSAMU; SATA, MICHIO

    2014-01-01

    The usefulness of magnifying endoscopy with narrow-band imaging (ME-NBI) for the diagnosis of early gastric cancer is well known; however, there are no evaluation criteria. The aim of this study was to devise and evaluate a novel diagnostic algorithm for ME-NBI in depressed early gastric cancer. Between August, 2007 and May, 2011, 90 patients with a total of 110 depressed gastric lesions were enrolled in the study. A diagnostic algorithm was devised based on ME-NBI microvascular findings: microvascular irregularity and abnormal microvascular patterns (fine network, corkscrew and unclassified patterns). The diagnostic efficiency of the algorithm for gastric cancer and histological grade was assessed by measuring its mean sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. Furthermore, inter- and intra-observer variation were measured. In the differential diagnosis of gastric cancer from non-cancerous lesions, the mean sensitivity, specificity, PPV, NPV, and accuracy of the diagnostic algorithm were 86.7, 48.0, 94.4, 26.7, and 83.2%, respectively. Furthermore, in the differential diagnosis of undifferentiated adenocarcinoma from differentiated adenocarcinoma, the mean sensitivity, specificity, PPV, NPV, and accuracy of the diagnostic algorithm were 61.6, 86.3, 69.0, 84.8, and 79.1%, respectively. For the ME-NBI final diagnosis using this algorithm, the mean κ values for inter- and intra-observer agreement were 0.50 and 0.77, respectively. In conclusion, the diagnostic algorithm based on ME-NBI microvascular findings was convenient and had high diagnostic accuracy, reliability and reproducibility in the differential diagnosis of depressed gastric lesions. PMID:24649321
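
    As a reminder of how the figures above are derived, the sketch below computes sensitivity, specificity, PPV, NPV and accuracy from a 2x2 confusion matrix; the counts used are purely illustrative, not the study's data.

    ```python
    # Illustrative only: the counts below are hypothetical, not the study's data.
    def diagnostic_metrics(tp, fp, fn, tn):
        """Return sensitivity, specificity, PPV, NPV and accuracy in percent."""
        metrics = {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }
        return {name: round(100 * value, 1) for name, value in metrics.items()}

    # Hypothetical split of 110 depressed lesions into outcome counts.
    print(diagnostic_metrics(tp=78, fp=13, fn=12, tn=7))
    ```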

  3. Unsteady Solution of Non-Linear Differential Equations Using Walsh Function Series

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2015-01-01

    Walsh functions form an orthonormal basis set consisting of square waves. The discontinuous nature of square waves makes the system well suited for representing functions with discontinuities. The product of any two Walsh functions is another Walsh function - a feature that can radically change an algorithm for solving non-linear partial differential equations (PDEs). The solution algorithm of non-linear differential equations using Walsh function series is unique in that integrals and derivatives may be computed using simple matrix multiplication of series representations of functions. Solutions to PDEs are derived as functions of wave component amplitude. Three sample problems are presented to illustrate the Walsh function series approach to solving unsteady PDEs. These include an advection equation, a Burgers equation, and a Riemann problem. The sample problems demonstrate the use of the Walsh function solution algorithms, exploiting Fast Walsh Transforms in multi-dimensions (O(Nlog(N))). Details of a Fast Walsh Reciprocal, defined here for the first time, enable inversion of a Walsh Symmetric Matrix in O(Nlog(N)) operations. Walsh functions have been derived using a fractal recursion algorithm and these fractal patterns are observed in the progression of pairs of wave number amplitudes in the solutions. These patterns are most easily observed in a remapping defined as a fractal fingerprint (FFP). A prolongation of existing solutions to the next highest order exploits these patterns. The algorithms presented here are considered a work in progress that provide new alternatives and new insights into the solution of non-linear PDEs.
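
    For readers unfamiliar with the transform behind the O(Nlog(N)) claims, here is a minimal fast Walsh-Hadamard transform in natural (Hadamard) ordering; the paper's solver uses sequency-ordered Walsh series and a Fast Walsh Reciprocal, which are not reproduced here.

    ```python
    import numpy as np

    def fwht(a):
        """Fast Walsh-Hadamard transform (natural/Hadamard ordering).
        Runs in O(N log N) for N a power of two; divide by N for the inverse."""
        a = np.asarray(a, dtype=float).copy()
        n = a.size
        assert n and (n & (n - 1)) == 0, "length must be a power of two"
        h = 1
        while h < n:
            for i in range(0, n, 2 * h):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a

    # A square wave is sparse in the Walsh basis:
    signal = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
    print(fwht(signal))            # a single nonzero coefficient
    print(fwht(fwht(signal)) / 8)  # recovers the original signal
    ```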

  4. Performance Review of Harmony Search, Differential Evolution and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Mohan Pandey, Hari

    2017-08-01

    Metaheuristic algorithms are effective in the design of an intelligent system. These algorithms are widely applied to solve complex optimization problems, including image processing, big data analytics, language processing, pattern recognition and others. This paper presents a performance comparison of three meta-heuristic algorithms, namely Harmony Search, Differential Evolution, and Particle Swarm Optimization. These algorithms originated in different fields of meta-heuristics, yet they share a common objective. The standard benchmark functions are used for the simulation. Statistical tests are conducted to derive a conclusion on the performance. The key motivation for conducting this research is to categorize their computational capabilities, which might be useful to researchers.
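
    To make the comparison concrete, the sketch below is a minimal DE/rand/1/bin loop on the sphere benchmark function; the population size, control parameters and benchmark suite are placeholder choices, not the settings used in the paper.

    ```python
    import numpy as np

    def sphere(x):
        return float(np.sum(x * x))

    def differential_evolution(f, dim=10, pop_size=30, F=0.8, CR=0.9,
                               generations=200, seed=0):
        """Minimal DE/rand/1/bin sketch on a box-constrained benchmark."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
        fitness = np.array([f(ind) for ind in pop])
        for _ in range(generations):
            for i in range(pop_size):
                a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                         size=3, replace=False)]
                mutant = a + F * (b - c)                 # mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True          # keep at least one gene
                trial = np.where(cross, mutant, pop[i])  # binomial crossover
                if (ft := f(trial)) < fitness[i]:        # greedy selection
                    pop[i], fitness[i] = trial, ft
        best = int(np.argmin(fitness))
        return pop[best], fitness[best]

    x_best, f_best = differential_evolution(sphere)
    print(f"best objective after 200 generations: {f_best:.3e}")
    ```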

  5. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Naik, Vijay K.

    1988-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined by implementing the algorithm on a simulated multiprocessor system.

  6. Towards developing robust algorithms for solving partial differential equations on MIMD machines

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Naik, V. K.

    1985-01-01

    Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve the processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on the system performance is examined by implementing the algorithm on a simulated multiprocessor system.

  7. A multi-populations multi-strategies differential evolution algorithm for structural optimization of metal nanoclusters

    NASA Astrophysics Data System (ADS)

    Fan, Tian-E.; Shao, Gui-Fang; Ji, Qing-Shuang; Zheng, Ji-Wen; Liu, Tun-dong; Wen, Yu-Hua

    2016-11-01

    Theoretically, the determination of the structure of a cluster is to search the global minimum on its potential energy surface. The global minimization problem is often nondeterministic-polynomial-time (NP) hard and the number of local minima grows exponentially with the cluster size. In this article, a multi-populations multi-strategies differential evolution algorithm has been proposed to search the globally stable structure of Fe and Cr nanoclusters. The algorithm combines a multi-populations differential evolution with an elite pool scheme to keep the diversity of the solutions and avoid being prematurely trapped in local optima. Moreover, multi-strategies such as a growing method in initialization and three differential strategies in mutation are introduced to improve the convergence speed and lower the computational cost. The accuracy and effectiveness of our algorithm have been verified by comparing the results for Fe clusters with the Cambridge Cluster Database. Meanwhile, the performance of our algorithm has been analyzed by comparing the convergence rate and energy evaluations with the classical DE algorithm, considering in turn the multi-populations scheme, the multi-strategies mutation and the growing method in initialization. Furthermore, the structural growth pattern of Cr clusters has been predicted by this algorithm. The results show that the lowest-energy structure of Cr clusters contains many icosahedra, and the number of the icosahedral rings rises with increasing size.

  8. Computing sparse derivatives and consecutive zeros problem

    NASA Astrophysics Data System (ADS)

    Chandra, B. V. Ravi; Hossain, Shahadat

    2013-02-01

    We describe a substitution based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that on the test problems the performance of nearest-neighbor type heuristic algorithms is highly encouraging.
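
    A hedged sketch of the compression stage described above: a greedy grouping of structurally orthogonal columns (a simple stand-in for the graph coloring step); the subsequent reordering of the compressed pattern, which the paper casts as a traveling-salesman variant, is not shown.

    ```python
    import numpy as np

    def structurally_orthogonal_groups(pattern):
        """Greedily group columns of a 0/1 sparsity pattern so that columns in
        the same group share no row, i.e. each group can be recovered from one
        compressed Jacobian-vector product. Illustrative stand-in only."""
        m, n = pattern.shape
        groups, occupied_rows = [], []
        for j in range(n):
            rows = set(np.nonzero(pattern[:, j])[0])
            for g, occ in enumerate(occupied_rows):
                if not (rows & occ):      # no row conflict -> reuse this colour
                    groups[g].append(j)
                    occ |= rows
                    break
            else:
                groups.append([j])
                occupied_rows.append(set(rows))
        return groups

    # Example: a tridiagonal sparsity pattern compresses into three groups.
    n = 6
    P = np.zeros((n, n), dtype=int)
    for j in range(n):
        P[max(0, j - 1):min(n, j + 2), j] = 1
    print(structurally_orthogonal_groups(P))   # -> [[0, 3], [1, 4], [2, 5]]
    ```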

  9. Dermoscopy of pigmented lesions on mucocutaneous junction and mucous membrane.

    PubMed

    Lin, J; Koga, H; Takata, M; Saida, T

    2009-12-01

    The dermoscopic features of pigmented lesions on the mucocutaneous junction and mucous membrane are different from those on hairy skin. Differentiation between benign lesions and malignant melanomas of these sites is often difficult. To define the dermoscopic patterns of lesions on the mucocutaneous junction and mucous membrane, and assess the applicability of standard dermoscopic algorithms to these lesions. An unselected consecutive series of 40 lesions on the mucocutaneous junction and mucous membrane was studied. All the lesions were imaged using dermoscopy devices, analysed for dermoscopic patterns and scored with algorithms including the ABCD rule, Menzies method, 7-point checklist, 3-point checklist and the CASH algorithm. Benign pigmented lesions of the mucocutaneous junction and mucous membrane frequently presented a dotted-globular pattern (25%), a homogeneous pattern (25%), a fish scale-like pattern (18.8%) and a hyphal pattern (18.8%), while melanomas of these sites showed a multicomponent pattern (75%) and a homogeneous pattern (25%). The fish scale-like pattern and hyphal pattern were considered to be variants of the ring-like pattern. The sensitivities of the ABCD rule, Menzies method, 7-point checklist, 3-point checklist and CASH algorithm in diagnosing mucosal melanomas were 100%, 100%, 63%, 88% and 100%; and the specificities were 100%, 94%, 100%, 94% and 100%, respectively. The ring-like pattern and its variants (fish scale-like pattern and hyphal pattern) are frequently observed as well as the dotted-globular pattern and homogeneous pattern in mucosal melanotic macules. The algorithms for pigmented lesions on hairy skin also apply to those on the mucocutaneous junction and mucous membrane with high sensitivity and specificity.

  10. Discovering causal signaling pathways through gene-expression patterns

    PubMed Central

    Parikh, Jignesh R.; Klinger, Bertram; Xia, Yu; Marto, Jarrod A.; Blüthgen, Nils

    2010-01-01

    High-throughput gene-expression studies result in lists of differentially expressed genes. Most current meta-analyses of these gene lists include searching for significant membership of the translated proteins in various signaling pathways. However, such membership enrichment algorithms do not provide insight into which pathways caused the genes to be differentially expressed in the first place. Here, we present an intuitive approach for discovering upstream signaling pathways responsible for regulating these differentially expressed genes. We identify consistently regulated signature genes specific for signal transduction pathways from a panel of single-pathway perturbation experiments. An algorithm that detects overrepresentation of these signature genes in a gene group of interest is used to infer the signaling pathway responsible for regulation. We expose our novel resource and algorithm through a web server called SPEED: Signaling Pathway Enrichment using Experimental Data sets. SPEED can be freely accessed at http://speed.sys-bio.net/. PMID:20494976
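
    The overrepresentation step described above is typically scored with a one-sided hypergeometric test; the sketch below illustrates that idea with made-up gene identifiers and is not the SPEED implementation.

    ```python
    from scipy.stats import hypergeom

    def signature_enrichment(gene_list, signature, background):
        """One-sided hypergeometric test for overrepresentation of pathway
        signature genes in a differentially expressed gene list."""
        gene_list, signature, background = map(set, (gene_list, signature, background))
        M = len(background)                  # genes measured
        K = len(signature & background)      # signature genes in the background
        N = len(gene_list & background)      # size of the query list
        k = len(gene_list & signature)       # overlap
        p_value = hypergeom.sf(k - 1, M, K, N)   # P(overlap >= k)
        return k, p_value

    # Hypothetical example with made-up gene identifiers.
    background = {f"g{i}" for i in range(1000)}
    signature = {f"g{i}" for i in range(40)}            # pathway signature genes
    query = {f"g{i}" for i in range(25)} | {"g500", "g600"}
    print(signature_enrichment(query, signature, background))
    ```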

  11. Bio-Inspired Microsystem for Robust Genetic Assay Recognition

    PubMed Central

    Lue, Jaw-Chyng; Fang, Wai-Chi

    2008-01-01

    A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function. PMID:18566679

  12. [Characteristics of auto-CPAP devices during the simulation of sleep-related breathing flow patterns].

    PubMed

    Rühle, K H; Karweina, D; Domanski, U; Nilius, G

    2009-07-01

    The function of automatic CPAP devices is difficult to investigate using clinical examinations due to the high variability of breathing disorders. With a flow generator, however, identical breathing patterns can be reproduced so that comparative studies on the pressure behaviour of APAP devices are possible. Because the algorithms of APAP devices can be modified without much effort based on the experience of users, previously investigated devices should also be reviewed regularly with regard to programme changes. Had changes occurred in the algorithms of 3 selected devices compared to the previously published benchmark studies? Do the current versions of these investigated devices differentiate between open and closed apnoeas? With a self-developed respiratory pump, sleep-related breathing patterns and, with the help of a computerised valve, resistances of the upper respiratory tract were simulated. Three different auto-CPAP devices were subjected to a bench test with and without feedback (open/closed loop). Open loop: the 3 devices showed marked differences in the rate of pressure rise but did not differ from the earlier published results. From an initial pressure of 4 mbar the pressure increased to 10 mbar after a different number of apnoeas (1-6 repetitive apnoeas). Only one device differentiated between closed and open apnoeas. Closed loop: due to the pressure increase, the flow generator simulated reduced obstruction of the upper airways (apnoeas changed to hypopnoeas, hypopnoeas changed to flattening), but different patterns of pressure regulation could still be observed. By applying bench-testing, the algorithms of auto-CPAP devices can regularly be reviewed to detect changes in the software. The differentiation between open and closed apnoeas should be improved in several APAP devices.

  13. Genetic Networks and Anticipation of Gene Expression Patterns

    NASA Astrophysics Data System (ADS)

    Gebert, J.; Lätsch, M.; Pickl, S. W.; Radde, N.; Weber, G.-W.; Wünschiers, R.

    2004-08-01

    An interesting problem for computational biology is the analysis of time-series expression data. Here, the application of modern methods from dynamical systems, optimization theory, numerical algorithms and the utilization of implicit discrete information lead to a deeper understanding. In [1], we suggested representing the behavior of time-series gene expression patterns by a system of ordinary differential equations, which we analytically and algorithmically investigated under the parametrical aspect of stability or instability. Our algorithm strongly exploited combinatorial information. In this paper, we deepen, extend and exemplify this study from the viewpoint of underlying mathematical modelling. This modelling consists in evaluating DNA-microarray measurements as the basis of anticipatory prediction, in the choice of a smooth model given by differential equations, in an approximation of the right-hand side with parametric matrices, and in a discrete approximation which is a least squares optimization problem. We give a mathematical and biological discussion, and pay attention to the special case of a linear system, where the matrices do not depend on the state of expressions. Here, we present first numerical examples.

  14. Parareal algorithms with local time-integrators for time fractional differential equations

    NASA Astrophysics Data System (ADS)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

    It is challenging to design parareal algorithms for time-fractional differential equations due to the historical effect of the fractional operator. A direct extension of the classical parareal method to such equations will lead to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
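
    For orientation, here is a minimal classical parareal iteration for the ordinary test equation y' = λy; the paper's actual contribution, a mixed coarse-grid correction of the solution together with auxiliary variables that localize the fractional operator, is not reproduced.

    ```python
    import numpy as np

    lam, T, N = -1.0, 5.0, 50             # test problem y' = lam*y on [0, T]
    dt = T / N
    f = lambda y: lam * y

    def coarse(y, dt):                    # one backward-Euler step (G)
        return y / (1.0 - dt * lam)

    def fine(y, dt, substeps=20):         # several RK4 substeps (F)
        h = dt / substeps
        for _ in range(substeps):
            k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
            y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        return y

    U = np.zeros(N + 1); U[0] = 1.0
    for n in range(N):                    # initial coarse sweep
        U[n + 1] = coarse(U[n], dt)

    for k in range(5):                    # parareal corrections
        F = np.array([fine(U[n], dt) for n in range(N)])   # parallel in practice
        Unew = U.copy()
        for n in range(N):                # sequential coarse correction
            Unew[n + 1] = coarse(Unew[n], dt) + F[n] - coarse(U[n], dt)
        U = Unew

    print("final value:", U[-1], " exact:", np.exp(lam * T))
    ```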

  15. Collaborative mining and transfer learning for relational data

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Eslami, Mohammed

    2015-06-01

    Many real-world problems, including human knowledge, communication, biological, and cyber network analysis, deal with data entities for which the essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, with attributes on both objects and relations encoding and differentiating their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot be easily adapted to deal with graph data. This gave rise to the relational data mining field of research, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss individual benefits and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments, and learn graph patterns that are both closer to ground truth and provide higher detection rates than a centralized mining algorithm.

  16. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is invented. Our results show the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  17. Ultrasound speckle reduction based on fractional order differentiation.

    PubMed

    Shao, Dangguo; Zhou, Ting; Liu, Fan; Yi, Sanli; Xiang, Yan; Ma, Lei; Xiong, Xin; He, Jianfeng

    2017-07-01

    Ultrasound images show a granular pattern of noise known as speckle that diminishes their quality and results in difficulties in diagnosis. To preserve edges and features, this paper proposes a fractional differentiation-based image operator to reduce speckle in ultrasound. An image de-noising model based on fractional partial differential equations, with a balance relation between k (the gradient modulus threshold that controls the conduction) and v (the order of fractional differentiation), was constructed by the effective combination of fractional calculus theory and a partial differential equation, and its numerical algorithm was implemented using a fractional differential mask operator. The proposed algorithm has better speckle reduction and structure preservation than the three existing methods [P-M model, the speckle reducing anisotropic diffusion (SRAD) technique, and the detail preserving anisotropic diffusion (DPAD) technique]. It is also significantly faster than bilateral filtering (BF) while producing virtually the same experimental results. Ultrasound phantom testing and in vivo imaging show that the proposed method can improve the quality of an ultrasound image in terms of tissue SNR, CNR, and FOM values.

  18. Transformation elastodynamics and cloaking for flexural waves

    NASA Astrophysics Data System (ADS)

    Colquitt, D. J.; Brun, M.; Gei, M.; Movchan, A. B.; Movchan, N. V.; Jones, I. S.

    2014-12-01

    The paper addresses an important issue of cloaking transformations for fourth-order partial differential equations representing flexural waves in thin elastic plates. It is shown that, in contrast with the Helmholtz equation, the general form of the partial differential equation is not invariant with respect to the cloaking transformation. The significant result of this paper is the analysis of the transformed equation and its interpretation in the framework of the linear theory of pre-stressed plates. The paper provides a formal framework for transformation elastodynamics as applied to elastic plates. Furthermore, an algorithm is proposed for designing a broadband square cloak for flexural waves, which employs a regularised push-out transformation. Illustrative numerical examples show high accuracy and efficiency of the proposed cloaking algorithm. In particular, a physical configuration involving a perturbation of an interference pattern generated by two coherent sources is presented. It is demonstrated that the perturbation produced by a cloaked defect is negligibly small even for such a delicate interference pattern.

  19. Learning partial differential equations via data discovery and sparse optimization

    NASA Astrophysics Data System (ADS)

    Schaeffer, Hayden

    2017-01-01

    We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
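
    A hedged sketch of the general recipe (not the paper's exact algorithm): assemble a library of candidate terms from finite-difference derivatives of the data and select a sparse coefficient vector, here with scikit-learn's Lasso on synthetic advection data.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    # Synthetic data from the advection equation u_t = -c * u_x (exact solution).
    c = 2.0
    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    t = np.linspace(0, 1, 100)
    U = np.sin(x[None, :] - c * t[:, None])          # shape (nt, nx)
    dx, dt = x[1] - x[0], t[1] - t[0]

    # Finite-difference derivatives (periodic in x).
    u_t = np.gradient(U, dt, axis=0)
    u_x = (np.roll(U, -1, axis=1) - np.roll(U, 1, axis=1)) / (2 * dx)
    u_xx = (np.roll(U, -1, axis=1) - 2 * U + np.roll(U, 1, axis=1)) / dx**2

    # Candidate library: u_t ≈ Theta @ xi with a sparse coefficient vector xi.
    names = ["u", "u_x", "u_xx", "u*u_x"]
    Theta = np.stack([U, u_x, u_xx, U * u_x], axis=-1).reshape(-1, 4)
    model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000)
    model.fit(Theta, u_t.ravel())
    print(dict(zip(names, np.round(model.coef_, 3))))   # expect u_x ≈ -2, rest ≈ 0
    ```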

  20. Learning partial differential equations via data discovery and sparse optimization.

    PubMed

    Schaeffer, Hayden

    2017-01-01

    We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.

  1. Learning partial differential equations via data discovery and sparse optimization

    PubMed Central

    2017-01-01

    We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection. PMID:28265183

  2. A Contextualized, Differential Sequence Mining Method to Derive Students' Learning Behavior Patterns

    ERIC Educational Resources Information Center

    Kinnebrew, John S.; Loretz, Kirk M.; Biswas, Gautam

    2013-01-01

    Computer-based learning environments can produce a wealth of data on student learning interactions. This paper presents an exploratory data mining methodology for assessing and comparing students' learning behaviors from these interaction traces. The core algorithm employs a novel combination of sequence mining techniques to identify differentially…

  3. New segmentation-based tone mapping algorithm for high dynamic range image

    NASA Astrophysics Data System (ADS)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    The traditional tone mapping algorithm for the display of high dynamic range (HDR) images has the drawback of losing the impression of brightness, contrast and color information. To overcome this drawback, we propose in this paper a new tone mapping algorithm based on dividing the image into different exposure regions. Firstly, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to get the final result. The experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.

  4. Gene expression pattern recognition algorithm inferences to classify samples exposed to chemical agents

    NASA Astrophysics Data System (ADS)

    Bushel, Pierre R.; Bennett, Lee; Hamadeh, Hisham; Green, James; Ableson, Alan; Misener, Steve; Paules, Richard; Afshari, Cynthia

    2002-06-01

    We present an analysis of pattern recognition procedures used to predict the classes of samples exposed to pharmacologic agents by comparing gene expression patterns from samples treated with two classes of compounds. Rat liver mRNA samples following exposure for 24 hours to phenobarbital or peroxisome proliferators were analyzed using a 1700 rat cDNA microarray platform. Sets of genes that were consistently differentially expressed in the rat liver samples following treatment were stored in the MicroArray Project System (MAPS) database. MAPS identified 238 genes in common that possessed a low probability (P < 0.01) of being randomly detected as differentially expressed at the 95% confidence level. Hierarchical cluster analysis on the 238 genes clustered specific gene expression profiles that separated samples based on exposure to a particular class of compound.

  5. Vasculitic wheel - an algorithmic approach to cutaneous vasculitides.

    PubMed

    Ratzinger, Gudrun; Zelger, Bettina Gudrun; Carlson, J Andrew; Burgdorf, Walter; Zelger, Bernhard

    2015-11-01

    Previous classifications of vasculitides suffer from several defects. First, classifications may follow different principles including clinicopathologic findings, etiology, pathogenesis, prognosis, or therapeutic options. Second, authors fail to distinguish between vasculitis and coagulopathy. Third, vasculitides are systemic diseases. Organ-specific variations make morphologic findings difficult to compare. Fourth, subtle changes are recognized in the skin, but may be asymptomatic in other organs. Our aim was to use the skin and subcutis as a model and the clinicopathologic correlation as the basic process for classification. We use an algorithmic approach with pattern analysis, which allows for consistent reporting of microscopic findings. We first differentiate between small and medium vessel vasculitis. In the second step, we differentiate the subtypes of small (capillaries versus postcapillary venules) and medium-sized (arterioles/arteries versus veins) vessels. In the final step, we differentiate, according to the predominant cell type, into leukocytoclastic and/or granulomatous vasculitis. Starting from leukocytoclastic vasculitis as a central reaction pattern of cutaneous small/medium vessel vasculitides, its relations or variations may be arranged around it like spokes of a wheel around the hub. This may help establish some basic order in this rather complex realm of cutaneous vasculitides, leading to a better understanding in a complicated field. © 2015 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.

  6. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation.

    PubMed

    Zana, F; Klein, J C

    2001-01-01

    This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is interesting for the computation of parameters related to blood flow. Its tree-like geometry makes it a usable feature for registration between images that can be of a different nature. In order to define vessel-like patterns, segmentation is performed with respect to a precise model. We define a vessel as a bright pattern that is piece-wise connected and locally linear; mathematical morphology is very well adapted to this description, but other patterns also fit such a morphological description. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed. They are separated out as they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) linear pattern with Gaussian-like profile improvement; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background and illustrate it on real images of various natures, then evaluate its robustness and its accuracy with respect to noise.
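
    The "bright, piece-wise connected, locally linear" model is commonly captured by openings with rotated linear structuring elements; the sketch below illustrates that single idea with scipy and a toy image, and is not the authors' full four-step algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def linear_footprints(length=9):
        """Linear structuring elements at 0, 45, 90 and 135 degrees."""
        horiz = np.ones((1, length), dtype=bool)
        vert = np.ones((length, 1), dtype=bool)
        diag = np.eye(length, dtype=bool)
        return [horiz, vert, diag, np.fliplr(diag)]

    def vessel_response(img, length=9):
        """Supremum of grey-level openings with rotated linear structuring
        elements, minus an isotropic opening: thin, elongated bright structures
        survive while compact bright blobs are suppressed. Rough sketch only."""
        img = np.asarray(img, dtype=float)
        sup_open = np.max([ndimage.grey_opening(img, footprint=fp)
                           for fp in linear_footprints(length)], axis=0)
        return sup_open - ndimage.grey_opening(img, size=(length, length))

    # Toy image: one bright line (vessel-like) and one bright blob.
    img = np.zeros((64, 64))
    img[32, 10:54] = 1.0
    img[10:14, 10:14] = 1.0
    response = vessel_response(img)
    print(response[32, 30], response[12, 12])   # line survives, blob is removed
    ```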

  7. An algorithm for power line detection and warning based on a millimeter-wave radar video.

    PubMed

    Ma, Qirong; Goshi, Darren S; Shih, Yi-Chi; Sun, Ming-Ting

    2011-12-01

    Power-line-strike accidents are a major safety threat for low-flying aircraft such as helicopters, so an automatic warning system for power lines is highly desirable. In this paper we propose an algorithm for detecting power lines in radar videos from an active millimeter-wave sensor. The Hough Transform is employed to detect candidate lines. The major challenge is that the radar videos are very noisy due to ground return. The noise points can fall on the same line, which results in signal peaks after the Hough Transform similar to those of the actual cable lines. To differentiate the cable lines from the noise lines, we train a Support Vector Machine to perform the classification. We exploit the Bragg pattern, which is due to the diffraction of electromagnetic waves on the periodic surface of power lines. We propose a set of features to represent the Bragg pattern for the classifier. We also propose a slice-processing algorithm which supports parallel processing and improves the detection of cables in a cluttered background. Lastly, an adaptive algorithm is proposed to integrate the detection results from individual frames into a reliable video detection decision, in which temporal correlation of the cable pattern across frames is used to make the detection more robust. Extensive experiments with real-world data validated the effectiveness of our cable detection algorithm. © 2011 IEEE

  8. On the Application of Pattern Recognition and AI Technique to the Cytoscreening of Vaginal Smears by Computer

    NASA Astrophysics Data System (ADS)

    Bow, Sing T.; Wang, Xia-Fang

    1989-05-01

    In this paper the concepts of pattern recognition, image processing and artificial intelligence are applied to the development of an intelligent cytoscreening system to differentiate abnormal cytological objects from normal ones in vaginal smears. To achieve this goal, the work listed below is involved: 1. Enhancement of the microscopic images of the smears; 2. Elevation of the qualitative differentiation performed under the microscope by cytologists to a quantitative differentiation level for the epithelial cells, ciliated cells, vacuolated cells, foreign-body-giant cells, plasma cells, lymph cells, white blood cells, red blood cells, etc., with this knowledge input into our intelligent cytoscreening system to improve machine differentiation; 3. Selection of a set of effective features to characterize the cytological objects and map them onto various regions of the multi-clustered space by computer algorithms; and 4. Systematic summarization of the knowledge that a gynecologist has and the procedure he/she follows when dealing with a case.

  9. New subtraction algorithms for evaluation of lesions on dynamic contrast-enhanced MR mammography.

    PubMed

    Choi, Byung Gil; Kim, Hak Hee; Kim, Euy Neyng; Kim, Bum-soo; Han, Ji-Youn; Yoo, Seung-Schik; Park, Seog Hee

    2002-12-01

    We report new subtraction algorithms for the detection of lesions in dynamic contrast-enhanced MR mammography (CE MRM). Twenty-five patients with suspicious breast lesions underwent dynamic CE MRM using 3D fast low-angle shot. After the acquisition of the T1-weighted scout images, dynamic images were acquired six times after the bolus injection of contrast media. Serial subtractions, step-by-step subtractions, and reverse subtractions were performed. Two radiologists attempted to differentiate benign from malignant lesions in consensus. The sensitivity, specificity, and accuracy of the method leading to the differentiation of malignant tumors from benign lesions were 85.7, 100, and 96%, respectively. Subtraction images allowed for better visualization of the enhancement as well as its temporal pattern than visual inspection of dynamic images alone. Our findings suggest that the new subtraction algorithm is adequate for screening malignant breast lesions and can potentially replace the time-intensity profile analysis on user-selected regions of interest.

  10. Investigation of gene expressions in differentiated cell derived bone marrow stem cells during bone morphogenetic protein-4 treatments with Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Zafari, Jaber; Jouni, Fatemeh Javani; Ahmadvand, Ali; Abdolmaleki, Parviz; Soodi, Malihe; Zendehdel, Rezvan

    2017-02-01

    A model was set up to predict the differentiation patterns based on the data extracted from FTIR spectroscopy. For this reason, bone marrow stem cells (BMSCs) were differentiated to primordial germ cells (PGCs). Changes in cellular macromolecules at 0, 24, 48, 72, and 96 h of differentiation, corresponding to different steps of the differentiation procedure, were investigated using FTIR spectroscopy. Also, the expression of pluripotency genes (Oct-4, Nanog and c-Myc) and specific genes (Mvh, Stella and Fragilis) was investigated by real-time PCR. The expression of these genes at the five steps of differentiation was then predicted by FTIR spectroscopy. The FTIR spectra showed changes in the pattern of band intensities at the different differentiation steps. In particular, the area ratio of CH2 symmetric to CH2 asymmetric stretching increased over the stepwise differentiation procedure. An ensemble of expert methods, including regression tree (RT), boosting algorithm (BA), and generalized regression neural network (GRNN), was the best method to predict the gene expression from FTIR spectroscopy. In conclusion, the model was able to distinguish the patterns of the different steps of cell differentiation by using features extracted from the FTIR spectra.

  11. Recognition of anaerobic bacterial isolates in vitro using electronic nose technology.

    PubMed

    Pavlou, A; Turner, A P F; Magan, N

    2002-01-01

    Use of an electronic nose (e.nose) system to differentiate between anaerobic bacteria grown in vitro on agar media. Cultures of Clostridium spp. (14 strains) and Bacteroides fragilis (12 strains) were grown on blood agar plates and incubated in sampling bags for 30 min before head space analysis of the volatiles. Qualitative analyses of the volatile production patterns were carried out using an e.nose system with 14 conducting polymer sensors. Using data analysis techniques such as principal components analysis (PCA), genetic algorithms and neural networks, it was possible to differentiate between agar blanks and the individual species, which accounted for all the data. A total of eight unknowns were correctly discriminated into the bacterial groups. This is the first report of in vitro complex volatile pattern recognition and differentiation of anaerobic pathogens. These results suggest the potential for application of e.nose technology in early diagnosis of microbial pathogens of medical importance.

  12. Genetic algorithm with maximum-minimum crossover (GA-MMC) applied in optimization of radiation pattern control of phased-array radars for rocket tracking systems.

    PubMed

    Silva, Leonardo W T; Barros, Vitor F; Silva, Sandro G

    2014-08-18

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing antennas with parabolic reflectors (PRs) with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced the premature convergence.
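
    The distinguishing idea, crossing the fittest individuals with the least fit ones to preserve genetic diversity, can be sketched as follows; the encoding, blending rule and objective are illustrative assumptions, not the operator defined in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def max_min_crossover(population, fitness, alpha=0.5):
        """Pair the fittest individuals with the least fit ones and blend them.
        Unlike conventional crossover, which mates fit individuals with each
        other, pairing extremes is meant to preserve genetic diversity.
        Illustrative sketch, not the operator from the paper."""
        order = np.argsort(fitness)                 # ascending: worst ... best
        half = len(order) // 2
        children = []
        for lo, hi in zip(order[:half], order[::-1][:half]):
            w = rng.uniform(alpha, 1.0)
            children.append(w * population[hi] + (1 - w) * population[lo])
            children.append((1 - w) * population[hi] + w * population[lo])
        return np.array(children)

    # Toy use: 20 individuals, 8 hypothetical excitation-phase genes each.
    pop = rng.uniform(-np.pi, np.pi, size=(20, 8))
    fit = -np.sum(pop**2, axis=1)                   # stand-in objective (higher is better)
    print(max_min_crossover(pop, fit).shape)        # (20, 8)
    ```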

  13. Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) Applied in Optimization of Radiation Pattern Control of Phased-Array Radars for Rocket Tracking Systems

    PubMed Central

    Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.

    2014-01-01

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing antennas with parabolic reflectors (PRs) with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced the premature convergence. PMID:25196013

  14. Irrigation water allocation optimization using multi-objective evolutionary algorithm (MOEA) - a review

    NASA Astrophysics Data System (ADS)

    Fanuel, Ibrahim Mwita; Mushi, Allen; Kajunguri, Damian

    2018-03-01

    This paper analyzes more than 40 papers on the application of the Multi-Objective Genetic Algorithm, the Non-Dominated Sorting Genetic Algorithm-II and Multi-Objective Differential Evolution (MODE) to solve multi-objective problems in agricultural water management. The paper focuses on different application aspects, which include water allocation, irrigation planning, crop patterns and allocation of available land. The performance and results of these techniques are discussed. The review finds that there is potential to use MODE to analyze multi-objective problems; its application is all the more significant because it is a simple yet powerful technique compared with other Evolutionary Algorithms. The paper concludes with a promising new trend of research that demands effective use of MODE: the inclusion of benefits derived from farm byproducts and of production costs into the model.

  15. Oscillatory neural network for pattern recognition: trajectory based classification and supervised learning.

    PubMed

    Miller, Vonda H; Jansen, Ben H

    2008-12-01

    Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme to date in the ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning using a trajectory based distance metric for category discrimination in an oscillatory neural network model. Classification is accomplished using a trajectory based distance metric. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric is useful for simple pattern recognition tasks and that supervised learning improves classification results.

  16. Quantification of susceptibility change at high-concentrated SPIO-labeled target by characteristic phase gradient recognition.

    PubMed

    Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci

    2016-05-01

    Phase map cross-correlation detection and quantification may produce highlighted signal at superparamagnetic iron oxide nanoparticles, and distinguish them from other hypointensities. The method may quantify susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method is developed that recognizes the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of the large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, a mathematical derivation establishes the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through this limit, a modified quantification method is proposed that uses unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare the different methods. The in vivo application performs MRI scanning on nude mice implanted with iron-labeled human cancer cells. Results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility quantification at high-concentrated iron accumulation. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Selective Sensing of Gas Mixture via a Temperature Modulation Approach: New Strategy for Potentiometric Gas Sensor Obtaining Satisfactory Discriminating Features.

    PubMed

    Li, Fu-An; Jin, Han; Wang, Jinxia; Zou, Jie; Jian, Jiawen

    2017-03-12

    A new strategy to discriminate four types of hazardous gases is proposed in this research. By modulating the operating temperature and processing the response signal with a pattern recognition algorithm, a gas sensor consisting of a single sensing electrode, i.e., a ZnO/In₂O₃ composite, is designed to differentiate NO₂, NH₃, C₃H₆ and CO at levels of 50-400 ppm. Results indicate that with 15 wt.% ZnO added to In₂O₃, the sensor fabricated at 900 °C shows optimal sensing characteristics in detecting all the studied gases. Moreover, with the aid of the principal component analysis (PCA) algorithm, the sensor operating in the temperature modulation mode demonstrates acceptable discrimination features. These satisfactory discrimination features suggest that it may be possible to differentiate gas mixtures efficiently by operating a single-electrode sensor in temperature modulation mode.

  18. Symbolic Solution of Linear Differential Equations

    NASA Technical Reports Server (NTRS)

    Feinberg, R. B.; Grooms, R. G.

    1981-01-01

    An algorithm for solving linear constant-coefficient ordinary differential equations is presented. The computational complexity of the algorithm is discussed and its implementation in the FORMAC system is described. A comparison is made between the algorithm and some classical algorithms for solving differential equations.
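
    FORMAC is a historical system; as a present-day illustration of the same task, solving a linear constant-coefficient ODE symbolically, here is a small SymPy example (an analogy, not the paper's algorithm).

    ```python
    import sympy as sp

    t = sp.symbols("t")
    y = sp.Function("y")

    # A linear constant-coefficient ODE: y'' + 3 y' + 2 y = exp(-3 t)
    ode = sp.Eq(y(t).diff(t, 2) + 3 * y(t).diff(t) + 2 * y(t), sp.exp(-3 * t))

    general = sp.dsolve(ode, y(t))
    print(general)   # y(t) = C1*exp(-2*t) + C2*exp(-t) + exp(-3*t)/2

    # Fix the constants with initial conditions y(0) = 1, y'(0) = 0.
    particular = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
    print(sp.simplify(particular.rhs))
    ```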

  19. Neuromuscular imaging in inherited muscle diseases

    PubMed Central

    Kley, Rudolf A.; Fischer, Dirk

    2010-01-01

    Driven by increasing numbers of newly identified genetic defects and new insights into the field of inherited muscle diseases, neuromuscular imaging in general and magnetic resonance imaging (MRI) in particular are increasingly being used to characterise the severity and pattern of muscle involvement. Although muscle biopsy is still the gold standard for the establishment of the definitive diagnosis, muscular imaging is an important diagnostic tool for the detection and quantification of dystrophic changes during the clinical workup of patients with hereditary muscle diseases. MRI is frequently used to describe muscle involvement patterns, which aids in narrowing of the differential diagnosis and distinguishing between dystrophic and non-dystrophic diseases. Recent work has demonstrated the usefulness of muscle imaging for the detection of specific congenital myopathies, mainly for the identification of the underlying genetic defect in core and centronuclear myopathies. Muscle imaging demonstrates characteristic patterns, which can be helpful for the differentiation of individual limb girdle muscular dystrophies. The aim of this review is to give a comprehensive overview of current methods and applications as well as future perspectives in the field of neuromuscular imaging in inherited muscle diseases. We also provide diagnostic algorithms that might guide us through the differential diagnosis in hereditary myopathies. PMID:20422195

  20. Computations involving differential operators and their actions on functions

    NASA Technical Reports Server (NTRS)

    Crouch, Peter E.; Grossman, Robert; Larson, Richard

    1991-01-01

    The algorithms derived by Grossmann and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups and the data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.

  1. Validation of classification algorithms for childhood diabetes identified from administrative data.

    PubMed

    Vanderloo, Saskia E; Johnson, Jeffrey A; Reimer, Kim; McCrea, Patrick; Nuernberger, Kimberly; Krueger, Hans; Aydede, Sema K; Collet, Jean-Paul; Amed, Shazhan

    2012-05-01

    Type 1 diabetes is the most common form of diabetes among children; however, the proportion of cases of childhood type 2 diabetes is increasing. In Canada, the National Diabetes Surveillance System (NDSS) uses administrative health data to describe trends in the epidemiology of diabetes, but does not specify diabetes type. The objective of this study was to validate algorithms to classify diabetes type in children <20 yr identified using the NDSS methodology. We applied the NDSS case definition to children living in British Columbia between 1 April 1996 and 31 March 2007. Through an iterative process, four potential classification algorithms were developed based on demographic characteristics and drug-utilization patterns. Each algorithm was then validated against a gold standard clinical database. Algorithms based primarily on an age rule (i.e., age <10 at diagnosis categorized type 1 diabetes) were most sensitive in the identification of type 1 diabetes; algorithms with restrictions on drug utilization (i.e., no prescriptions for insulin ± glucose monitoring strips categorized type 2 diabetes) were most sensitive for identifying type 2 diabetes. One algorithm was identified as having the optimal balance of sensitivity (Sn) and specificity (Sp) for the identification of both type 1 (Sn: 98.6%; Sp: 78.2%; PPV: 97.8%) and type 2 diabetes (Sn: 83.2%; Sp: 97.5%; PPV: 73.7%). Demographic characteristics in combination with drug-utilization patterns can be used to differentiate diabetes type among cases of pediatric diabetes identified within administrative health databases. Validation of similar algorithms in other regions is warranted. © 2011 John Wiley & Sons A/S.
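
    To illustrate the kind of rule-based algorithm the abstract describes (an age rule combined with drug-utilization restrictions), here is a hedged sketch with hypothetical field names and a toy validation set; the thresholds follow the examples quoted in the abstract, not the validated algorithm itself.

    ```python
    # Hedged sketch of a rule-based diabetes-type classifier of the kind the
    # abstract describes. Field names, rules and data are illustrative only.

    def classify_diabetes_type(age_at_diagnosis, insulin_rx_count, strip_rx_count):
        """Classify a pediatric case as 'type 1' or 'type 2' using an age rule
        combined with drug-utilization restrictions."""
        if age_at_diagnosis < 10:
            return "type 1"                       # age rule from the abstract
        if insulin_rx_count == 0 and strip_rx_count == 0:
            return "type 2"                       # no insulin / monitoring strips
        return "type 1"

    def sensitivity_specificity(predictions, gold, positive):
        tp = sum(p == positive and g == positive for p, g in zip(predictions, gold))
        fn = sum(p != positive and g == positive for p, g in zip(predictions, gold))
        tn = sum(p != positive and g != positive for p, g in zip(predictions, gold))
        fp = sum(p == positive and g != positive for p, g in zip(predictions, gold))
        return tp / (tp + fn), tn / (tn + fp)

    # Tiny made-up validation set: (age, insulin Rx, strip Rx, gold-standard type)
    cases = [(7, 12, 8, "type 1"), (15, 0, 0, "type 2"),
             (13, 6, 4, "type 1"), (18, 0, 2, "type 1"), (16, 0, 0, "type 2")]
    preds = [classify_diabetes_type(a, i, s) for a, i, s, _ in cases]
    gold = [g for *_, g in cases]
    print(sensitivity_specificity(preds, gold, positive="type 1"))
    ```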

  2. A comparison of various algorithms to extract Magic Formula tyre model coefficients for vehicle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.

    2015-02-01

    Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
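
    For concreteness, here is a least-squares extraction of the four basic Magic Formula coefficients from synthetic data with scipy's curve_fit; the paper compares many more algorithms and a much larger coefficient set, none of which is reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def magic_formula(alpha, B, C, D, E):
        """Basic Magic Formula without shifts:
        F = D * sin(C * atan(B*a - E*(B*a - atan(B*a))))."""
        Ba = B * alpha
        return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

    # Synthetic "measured" lateral force vs slip angle, with noise.
    rng = np.random.default_rng(0)
    alpha = np.linspace(-0.25, 0.25, 80)                 # slip angle [rad]
    true_coeffs = (8.0, 1.6, 4500.0, 0.4)                # B, C, D, E
    force = magic_formula(alpha, *true_coeffs) + rng.normal(0, 50, alpha.size)

    # Least-squares extraction; as the paper stresses, the initial guess matters.
    p0 = (5.0, 1.5, 4000.0, 0.0)
    popt, _ = curve_fit(magic_formula, alpha, force, p0=p0, maxfev=10000)
    print(dict(zip("BCDE", np.round(popt, 2))))
    ```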

  3. Integrated Analysis of Alzheimer's Disease and Schizophrenia Dataset Revealed Different Expression Pattern in Learning and Memory.

    PubMed

    Li, Wen-Xing; Dai, Shao-Xing; Liu, Jia-Qian; Wang, Qian; Li, Gong-Hua; Huang, Jing-Fei

    2016-01-01

    Alzheimer's disease (AD) and schizophrenia (SZ) are both accompanied by impaired learning and memory functions. This study aims to explore the expression profiles of learning or memory genes between AD and SZ. We downloaded 10 AD and 10 SZ datasets from GEO-NCBI for integrated analysis. These datasets were processed using the RMA algorithm and a global renormalization across all studies. Then an Empirical Bayes algorithm was used to find the differentially expressed genes between patients and controls. The results showed that most of the differentially expressed genes were related to AD, whereas the gene expression profile was little affected in SZ. Furthermore, the expression of learning- or memory-related genes differed greatly between AD and SZ in terms of the number of differentially expressed genes, the fold changes, and the brain regions affected. In AD, CALB1, GABRA5, and TAC1 were significantly downregulated in the whole brain, frontal lobe, temporal lobe, and hippocampus. However, in SZ, only two genes, CRHBP and CX3CR1, were downregulated in the hippocampus, and other brain regions were not affected. The effect of these genes on learning or memory impairment has been widely studied. This suggests that these genes may play a crucial role in AD or SZ pathogenesis. The different expression patterns of learning- and memory-related genes across brain regions in AD and SZ revealed in our study may help to clarify the distinct mechanisms of the two diseases.
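
    As a simplified stand-in for the per-gene differential-expression step (an ordinary t-test with Benjamini-Hochberg correction rather than the empirical Bayes moderation used in the study), with synthetic expression values, one might write:

      # Stand-in for per-gene differential expression: t-test per gene plus
      # FDR correction. The data here are synthetic placeholders.
      import numpy as np
      from scipy import stats
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(4)
      patients = rng.normal(size=(5000, 30))        # genes x patient samples
      controls = rng.normal(size=(5000, 28))        # genes x control samples

      t, p = stats.ttest_ind(patients, controls, axis=1)
      reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
      log_fc = patients.mean(1) - controls.mean(1)  # assuming log-scale values
      print("differentially expressed genes:", int(reject.sum()))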

  4. Computational gene expression profiling under salt stress reveals patterns of co-expression

    PubMed Central

    Sanchita; Sharma, Ashok

    2016-01-01

    Plants respond differently to environmental conditions. Among various abiotic stresses, salt stress is a condition where excess salt in soil causes inhibition of plant growth. To understand the response of plants to the stress conditions, identification of the responsible genes is required. Clustering is a data mining technique used to group the genes with similar expression. The genes of a cluster show similar expression and function. We applied clustering algorithms to gene expression data of Solanum tuberosum showing differential expression in Capsicum annuum under salt stress. The clusters that were common across multiple algorithms were taken forward for further analysis. Principal component analysis (PCA) further validated the findings of the other clustering algorithms by visualizing their clusters in three-dimensional space. Functional annotation results revealed that most of the genes were involved in stress-related responses. Our findings suggest that these algorithms may be helpful in the prediction of the function of co-expressed genes. PMID:26981411
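
    A small sketch of this kind of workflow, with synthetic values standing in for the real expression data: cluster the gene-by-sample matrix with two algorithms, measure their agreement, and project the genes with PCA for a three-dimensional view:

      # Sketch: two clusterings of a gene-expression matrix, an agreement
      # score between them, and a 3-D PCA projection for visualization.
      import numpy as np
      from sklearn.cluster import KMeans, AgglomerativeClustering
      from sklearn.decomposition import PCA
      from sklearn.metrics import adjusted_rand_score

      rng = np.random.default_rng(1)
      expr = rng.normal(size=(200, 12))   # 200 genes x 12 samples (synthetic)

      k = 4
      labels_km = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expr)
      labels_hc = AgglomerativeClustering(n_clusters=k).fit_predict(expr)
      print("agreement between algorithms (ARI):",
            adjusted_rand_score(labels_km, labels_hc))

      coords = PCA(n_components=3).fit_transform(expr)   # 3-D view of the genes
      print("PCA coordinates shape:", coords.shape)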

  5. Remote Sensing of Particulate Organic Carbon Pools in the High-Latitude Oceans

    NASA Technical Reports Server (NTRS)

    Stramski, Dariusz; Stramska, Malgorzata

    2005-01-01

    The general goal of this project was to characterize spatial distributions at basin scales and variability on monthly to interannual timescales of particulate organic carbon (POC) in the high-latitude oceans. The primary objectives were: (1) To collect in situ data in the north polar waters of the Atlantic and in the Southern Ocean, necessary for the derivation of POC ocean color algorithms for these regions. (2) To derive regional POC algorithms and refine existing regional chlorophyll (Chl) algorithms, to develop understanding of processes that control bio-optical relationships underlying ocean color algorithms for POC and Chl, and to explain bio-optical differentiation between the examined polar regions and within the regions. (3) To determine basin-scale spatial patterns and temporal variability on monthly to interannual scales in satellite-derived estimates of POC and Chl pools in the investigated regions for the period of time covered by SeaWiFS and MODIS missions.

  6. Diffractive shear interferometry for extreme ultraviolet high-resolution lensless imaging

    NASA Astrophysics Data System (ADS)

    Jansen, G. S. M.; de Beurs, A.; Liu, X.; Eikema, K. S. E.; Witte, S.

    2018-05-01

    We demonstrate a novel imaging approach and associated reconstruction algorithm for far-field coherent diffractive imaging, based on the measurement of a pair of laterally sheared diffraction patterns. The differential phase profile retrieved from such a measurement leads to improved reconstruction accuracy, increased robustness against noise, and faster convergence compared to traditional coherent diffractive imaging methods. We measure laterally sheared diffraction patterns using Fourier-transform spectroscopy with two phase-locked pulse pairs from a high harmonic source. Using this approach, we demonstrate spectrally resolved imaging at extreme ultraviolet wavelengths between 28 and 35 nm.

  7. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds.

    PubMed

    Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds.

  8. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds

    PubMed Central

    Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds. PMID:27974884
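
    The space-filling-curve ordering that underpins the new mutation operator can be illustrated with a Morton (Z-order) code; a Hilbert curve preserves locality even better, but the Morton variant below is compact enough to sketch, and the quantization depth and data are arbitrary:

      # Sketch: linearly order a 3-D point cloud by interleaving the bits of
      # quantized coordinates (Z-order), so integer indices along the curve
      # keep spatial neighbours close for discrete DE-style moves.
      import numpy as np

      def morton_code(ix: int, iy: int, iz: int, bits: int = 10) -> int:
          """Interleave the bits of three quantized coordinates."""
          code = 0
          for b in range(bits):
              code |= ((ix >> b) & 1) << (3 * b)
              code |= ((iy >> b) & 1) << (3 * b + 1)
              code |= ((iz >> b) & 1) << (3 * b + 2)
          return code

      def order_point_cloud(points: np.ndarray, bits: int = 10) -> np.ndarray:
          """Return indices that sort an (N, 3) cloud along the Z-order curve."""
          lo, hi = points.min(0), points.max(0)
          q = ((points - lo) / (hi - lo + 1e-12) * (2**bits - 1)).astype(int)
          codes = [morton_code(*xyz, bits=bits) for xyz in q]
          return np.argsort(codes)

      cloud = np.random.default_rng(2).uniform(size=(1000, 3))
      order = order_point_cloud(cloud)   # candidate linear indexing of vertices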

  9. Accounting for cell lineage and sex effects in the identification of cell-specific DNA methylation using a Bayesian model selection algorithm.

    PubMed

    White, Nicole; Benton, Miles; Kennedy, Daniel; Fox, Andrew; Griffiths, Lyn; Lea, Rodney; Mengersen, Kerrie

    2017-01-01

    Cell- and sex-specific differences in DNA methylation are major sources of epigenetic variation in whole blood. Heterogeneity attributable to cell type has motivated the identification of cell-specific methylation at the CpG level; however, statistical methods for this purpose have been limited to pairwise comparisons between cell types or between the cell type of interest and whole blood. We developed a Bayesian model selection algorithm for the identification of cell-specific methylation profiles that incorporates knowledge of shared cell lineage and allows for the identification of differential methylation profiles in one or more cell types simultaneously. Under the proposed methodology, sex-specific differences in methylation by cell type are also assessed. Using publicly available, cell-sorted methylation data, we show that 51.3% of female CpG markers and 61.4% of male CpG markers identified were associated with differential methylation in more than one cell type. The impact of cell lineage on differential methylation was also highlighted. An evaluation of sex-specific differences revealed differences in CD56+NK methylation, within both single- and multi-cell-dependent methylation patterns. Our findings demonstrate the need to account for cell lineage in studies of differential methylation and associated sex effects.

  10. Muscular MRI-based algorithm to differentiate inherited myopathies presenting with spinal rigidity.

    PubMed

    Tordjman, Mickael; Dabaj, Ivana; Laforet, Pascal; Felter, Adrien; Ferreiro, Ana; Biyoukar, Moustafa; Law-Ye, Bruno; Zanoteli, Edmar; Castiglioni, Claudia; Rendu, John; Beroud, Christophe; Chamouni, Alexandre; Richard, Pascale; Mompoint, Dominique; Quijano-Roy, Susana; Carlier, Robert-Yves

    2018-05-25

    Inherited myopathies are major causes of muscle atrophy and are often characterized by rigid spine syndrome, a clinical feature designating patients with early spinal contractures. We aim to present a decision algorithm based on muscular whole body magnetic resonance imaging (mWB-MRI) as a unique tool to orientate the diagnosis of each inherited myopathy long before the genetically confirmed diagnosis. This multicentre retrospective study enrolled 79 patients from referral centres in France, Brazil and Chile. The patients underwent 1.5-T or 3-T mWB-MRI. The protocol comprised STIR and T1 sequences in axial and coronal planes, from head to toe. All images were analyzed manually by multiple raters. Fatty muscle replacement was evaluated on mWB-MRI using both the Mercuri scale and statistical comparison based on the percentage of affected muscle. Between February 2005 and December 2015, 76 patients with genetically confirmed inherited myopathy were included. They were affected by Pompe disease or harbored mutations in RYR1, Collagen VI, LMNA, SEPN1, LAMA2 and MYH7 genes. Each myopathy had a specific pattern of affected muscles recognizable on mWB-MRI. This allowed us to create a novel decision algorithm for patients with rigid spine syndrome by segregating these signs. This algorithm was validated by five external evaluators on a cohort of seven patients with a diagnostic accuracy of 94.3% compared with the genetic diagnosis. We provide a novel decision algorithm based on muscle fat replacement graded on mWB-MRI that allows diagnosis and differentiation of inherited myopathies presenting with spinal rigidity. • Inherited myopathies are rare, diagnosis is challenging and genetic tests require specialized centres and often take years. • Inherited myopathies are often characterized by spinal rigidity. • Whole body magnetic resonance imaging is a unique tool to orientate the diagnosis of each inherited myopathy presenting with spinal rigidity. • Each inherited myopathy in this study has a specific pattern of affected muscles that orientate diagnosis. • A novel MRI-based algorithm, usable by every radiologist, can help the early diagnosis of these myopathies.

  11. Differentiation of closely related isomers: application of data mining techniques in conjunction with variable wavelength infrared multiple photon dissociation mass spectrometry for identification of glucose-containing disaccharide ions.

    PubMed

    Stefan, Sarah E; Ehsan, Mohammad; Pearson, Wright L; Aksenov, Alexander; Boginski, Vladimir; Bendiak, Brad; Eyler, John R

    2011-11-15

    Data mining algorithms have been used to analyze the infrared multiple photon dissociation (IRMPD) patterns of gas-phase lithiated disaccharide isomers irradiated with either a line-tunable CO(2) laser or a free electron laser (FEL). The IR fragmentation patterns over the wavelength range of 9.2-10.6 μm have been shown in earlier work to correlate uniquely with the asymmetry at the anomeric carbon in each disaccharide. Application of data mining approaches for data analysis allowed unambiguous determination of the anomeric carbon configurations for each disaccharide isomer pair using fragmentation data at a single wavelength. In addition, the linkage positions were easily assigned. This combination of wavelength-selective IRMPD and data mining offers a powerful and convenient tool for differentiation of structurally closely related isomers, including those of gas-phase carbohydrate complexes.

  12. Post-OPC verification using a full-chip pattern-based simulation verification method

    NASA Astrophysics Data System (ADS)

    Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary

    2005-11-01

    In this paper, we evaluated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e. 0.13 um, 0.11 um and 90 nm, were used in the investigation. Although our OPC technology has proven robust in most cases, the variety of tape-outs with complicated design styles and technologies makes it difficult to develop a "complete or bullet-proof" OPC algorithm that covers every possible layout pattern. In the evaluation, errors were found in some of the dozens of OPC databases by model-based post-OPC checking; such errors could be costly in manufacturing - reticle, wafer process, and, more importantly, production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinch or bridge), poor CD distribution, and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development and post-OPC verification after production release of the OPC. Lastly, we discuss the differences between the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: (1) accuracy: superior inspection algorithms, down to 1 nm accuracy with the new pattern-based approach; (2) high-speed performance: pattern-centric algorithms that give the best full-chip inspection efficiency; (3) powerful analysis capability: flexible error distribution, grouping, interactive viewing and hierarchical pattern extraction to narrow down to unique patterns/cells.

  13. Low power, lightweight vapor sensing using arrays of conducting polymer composite chemically-sensitive resistors

    NASA Technical Reports Server (NTRS)

    Ryan, M. A.; Lewis, N. S.

    2001-01-01

    Arrays of broadly responsive vapor detectors can be used to detect, identify, and quantify vapors and vapor mixtures. One implementation of this strategy involves the use of arrays of chemically-sensitive resistors made from conducting polymer composites. Sorption of an analyte into the polymer composite detector leads to swelling of the film material. The swelling is in turn transduced into a change in electrical resistance because the detector films consist of polymers filled with conducting particles such as carbon black. The differential sorption, and thus differential swelling, of an analyte into each polymer composite in the array produces a unique pattern for each analyte of interest. Pattern recognition algorithms are then used to analyze the multivariate data arising from the responses of such a detector array. Chiral detector films can provide differential detection of the presence of certain chiral organic vapor analytes. Aspects of the spaceflight qualification and deployment of such a detector array, along with its performance for certain analytes of interest in manned life support applications, are reviewed and summarized in this article.

  14. Selective Sensing of Gas Mixture via a Temperature Modulation Approach: New Strategy for Potentiometric Gas Sensor Obtaining Satisfactory Discriminating Features

    PubMed Central

    Li, Fu-an; Jin, Han; Wang, Jinxia; Zou, Jie; Jian, Jiawen

    2017-01-01

    A new strategy to discriminate four types of hazardous gases is proposed in this research. Through modulating the operating temperature and processing the response signal with a pattern recognition algorithm, a gas sensor consisting of a single sensing electrode, i.e., a ZnO/In2O3 composite, is designed to differentiate NO2, NH3, C3H6, and CO at levels of 50–400 ppm. Results indicate that with the addition of 15 wt.% ZnO to In2O3, the sensor fabricated at 900 °C shows optimal sensing characteristics for all the studied gases. Moreover, with the aid of the principal component analysis (PCA) algorithm, the sensor operating in the temperature modulation mode demonstrates acceptable discrimination features. These satisfactory discrimination features suggest that gas mixtures can be differentiated efficiently by operating a single-electrode sensor in temperature modulation mode. PMID:28287492

  15. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is NP-complete. Based on an analysis of the problem, SAT is recast as an equivalent optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It combines the strong local search capability of the hill-climbing algorithm with the strong global search capability of the differential evolution algorithm, compensating for the weaknesses of each, improving efficiency, and avoiding stagnation. Experimental results show that the hybrid algorithm is efficient in solving SAT problems.
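
    A compact sketch of the hybrid idea, assuming a toy CNF instance: differential evolution over real-valued vectors that are thresholded to truth assignments, with a greedy hill-climbing step that flips single variables while the number of unsatisfied clauses decreases. Parameters and the selection rule are illustrative, not those of the paper:

      # Hybrid DE + hill climbing on a toy CNF; each clause is a tuple of
      # signed literals (positive = variable, negative = negation).
      import numpy as np

      clauses = [(1, -2, 3), (-1, 2), (2, -3), (-2, 3)]
      n_vars = 3

      def unsat_count(assign):
          """Number of clauses not satisfied by a boolean assignment."""
          def lit(l):
              val = assign[abs(l) - 1]
              return val if l > 0 else not val
          return sum(not any(lit(l) for l in clause) for clause in clauses)

      def hill_climb(assign):
          """Greedy local search: flip single variables while the count improves."""
          best = unsat_count(assign)
          improved = True
          while improved and best > 0:
              improved = False
              for i in range(n_vars):
                  assign[i] = not assign[i]
                  score = unsat_count(assign)
                  if score < best:
                      best, improved = score, True
                  else:
                      assign[i] = not assign[i]   # undo the flip
          return assign, best

      rng = np.random.default_rng(0)
      pop = rng.uniform(size=(20, n_vars))        # real-coded population in [0, 1]
      F, CR = 0.8, 0.9
      for _ in range(100):
          for i in range(len(pop)):
              a, b, c = pop[rng.choice(len(pop), size=3, replace=False)]
              mutant = a + F * (b - c)
              trial = np.where(rng.uniform(size=n_vars) < CR, mutant, pop[i])
              _, trial_score = hill_climb(list(trial > 0.5))
              _, cur_score = hill_climb(list(pop[i] > 0.5))
              if trial_score <= cur_score:   # keep trial if its local optimum is no worse
                  pop[i] = trial
          if min(unsat_count(list(ind > 0.5)) for ind in pop) == 0:
              break                          # a satisfying assignment was found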

  16. SLA-aware differentiated QoS in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Agrawal, Anuj; Vyas, Upama; Bhatia, Vimal; Prakash, Shashi

    2017-07-01

    The quality of service (QoS) offered by optical networks can be improved by accurate provisioning of service level specifications (SLSs) included in the service level agreement (SLA). A large number of users coexisting in the network require different services. Thus, a pragmatic network needs to offer a differentiated QoS to a variety of users according to the SLA contracted for different services at varying costs. In conventional wavelength division multiplexed (WDM) optical networks, service differentiation is feasible only for a limited number of users because of its fixed-grid structure. Newly introduced flex-grid based elastic optical networks (EONs) are more adaptive to traffic requirements as compared to the WDM networks because of the flexibility in their grid structure. Thus, we propose an efficient SLA provisioning algorithm with improved QoS for these flex-grid EONs empowered by optical orthogonal frequency division multiplexing (O-OFDM). The proposed algorithm, called SLA-aware differentiated QoS (SADQ), employs differentiation at the level of routing, spectrum allocation, and connection survivability. The proposed SADQ aims to accurately provision the SLA using such multilevel differentiation with an objective to improve the spectrum utilization from the network operator's perspective. SADQ is evaluated for three different CoSs under various traffic demand patterns and for different ratios of the number of requests belonging to the three considered CoSs. We propose two new SLA metrics for the improvement of functional QoS requirements, namely, security, confidentiality and survivability of high class of service (CoS) traffic. Since, to the best of our knowledge, the proposed SADQ is the first scheme in optical networks to employ exhaustive differentiation at the levels of routing, spectrum allocation, and survivability in a single algorithm, we first compare the performance of SADQ in EON and currently deployed WDM networks to assess the differentiation capability of EON and WDM networks under such differentiated service environment. The proposed SADQ is then compared with two existing benchmark routing and spectrum allocation (RSA) schemes that are also designed under EONs. Simulations indicate that the performance of SADQ is distinctly better in EON than in WDM network under differentiated QoS scenario. The comparative analysis of the proposed SADQ with the considered benchmark RSA strategies designed under EON shows the improved performance of SADQ in EON paradigm for offering differentiated services as per the SLA.

  17. Mechanobiological simulations of peri-acetabular bone ingrowth: a comparative analysis of cell-phenotype specific and phenomenological algorithms.

    PubMed

    Mukherjee, Kaushik; Gupta, Sanjay

    2017-03-01

    Several mechanobiology algorithms have been employed to simulate bone ingrowth around porous coated implants. However, there is a scarcity of quantitative comparison between the efficacies of commonly used mechanoregulatory algorithms. The objectives of this study are: (1) to predict peri-acetabular bone ingrowth using cell-phenotype specific algorithm and to compare these predictions with those obtained using phenomenological algorithm and (2) to investigate the influences of cellular parameters on bone ingrowth. The variation in host bone material property and interfacial micromotion of the implanted pelvis were mapped onto the microscale model of implant-bone interface. An overall variation of 17-88 % in peri-acetabular bone ingrowth was observed. Despite differences in predicted tissue differentiation patterns during the initial period, both the algorithms predicted similar spatial distribution of neo-tissue layer, after attainment of equilibrium. Results indicated that phenomenological algorithm, being computationally faster than the cell-phenotype specific algorithm, might be used to predict peri-prosthetic bone ingrowth. The cell-phenotype specific algorithm, however, was found to be useful in numerically investigating the influence of alterations in cellular activities on bone ingrowth, owing to biologically related factors. Amongst the host of cellular activities, matrix production rate of bone tissue was found to have predominant influence on peri-acetabular bone ingrowth.

  18. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658

  19. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.

    PubMed

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2012-02-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.
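
    The two subproblems mentioned above can be sketched for a toy multi-task least-squares problem: a soft-threshold step for the sparse component (the proximal operator of the l1 surrogate of cardinality) and a Euclidean projection of the low-rank component onto a nuclear-norm ball, computed by projecting the singular values onto an l1 ball. Step size, penalties and dimensions are arbitrary, and the alternating update below is a simplification of the authors' scheme:

      # Sketch: gradient step on a shared squared loss, soft-threshold for the
      # sparse part P, nuclear-norm-ball projection for the low-rank part Q.
      import numpy as np

      def project_l1_ball(v, radius):
          """Project a nonnegative vector onto {x >= 0, sum(x) <= radius}."""
          if v.sum() <= radius:
              return v
          u = np.sort(v)[::-1]
          css = np.cumsum(u)
          rho = np.nonzero(u - (css - radius) / (np.arange(len(u)) + 1) > 0)[0][-1]
          theta = (css[rho] - radius) / (rho + 1)
          return np.maximum(v - theta, 0)

      def project_nuclear_ball(Q, radius):
          """Euclidean projection onto {Q : ||Q||_* <= radius} via singular values."""
          U, s, Vt = np.linalg.svd(Q, full_matrices=False)
          return U @ np.diag(project_l1_ball(s, radius)) @ Vt

      def soft_threshold(P, lam):
          return np.sign(P) * np.maximum(np.abs(P) - lam, 0)

      # Toy multi-task regression: X (n x d), Y (n x t), weights W = P + Q
      rng = np.random.default_rng(0)
      X, Y = rng.normal(size=(100, 20)), rng.normal(size=(100, 5))
      P = np.zeros((20, 5))
      Q = np.zeros((20, 5))
      step, lam, tau = 1e-3, 0.05, 3.0
      for _ in range(200):
          grad = X.T @ (X @ (P + Q) - Y) / len(X)   # gradient of the squared loss
          P = soft_threshold(P - step * grad, step * lam)
          Q = project_nuclear_ball(Q - step * grad, tau)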

  20. A simple and robust classification tree for differentiation between benign and malignant lesions in MR-mammography.

    PubMed

    Baltzer, Pascal A T; Dietzel, Matthias; Kaiser, Werner A

    2013-08-01

    In the face of multiple available diagnostic criteria in MR-mammography (MRM), a practical algorithm for lesion classification is needed. Such an algorithm should be as simple as possible and include only important independent lesion features to differentiate benign from malignant lesions. This investigation aimed to develop a simple classification tree for differential diagnosis in MRM. A total of 1,084 lesions in standardised MRM with subsequent histological verification (648 malignant, 436 benign) were investigated. Seventeen lesion criteria were assessed by 2 readers in consensus. Classification analysis was performed using the chi-squared automatic interaction detection (CHAID) method. Results include the probability for malignancy for every descriptor combination in the classification tree. A classification tree incorporating 5 lesion descriptors with a depth of 3 ramifications (1, root sign; 2, delayed enhancement pattern; 3, border, internal enhancement and oedema) was calculated. Of the 648 malignant and 436 benign lesions, 262 (40.4 %) and 106 (24.3 %), respectively, could be classified with an accuracy above 95 %. Overall diagnostic accuracy was 88.4 %. The classification algorithm reduced the number of categorical descriptors from 17 to 5 (29.4 %), resulting in a high classification accuracy. More than one third of all lesions could be classified with accuracy above 95 %. • A practical algorithm has been developed to classify lesions found in MR-mammography. • A simple decision tree consisting of five criteria reaches high accuracy of 88.4 %. • Unique to this approach, each classification is associated with a diagnostic certainty. • Diagnostic certainty of greater than 95 % is achieved in 34 % of all cases.
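
    CHAID itself is not part of the common Python toolkits, but the shape of the result, a shallow tree over a handful of categorical descriptors with a per-leaf probability of malignancy, can be sketched with a CART tree as a stand-in; the descriptor coding and data below are entirely synthetic:

      # Stand-in sketch (CART, not CHAID): a depth-3 tree over five binary-coded
      # lesion descriptors; predict_proba on a leaf gives the malignancy probability.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      n = 500
      X = rng.integers(0, 2, size=(n, 5))   # root sign, delayed enhancement, border,
                                            # internal enhancement, oedema (0/1 coded)
      y = (X[:, 0] | ((X[:, 1] & X[:, 2]) ^ (rng.uniform(size=n) < 0.1))).astype(int)

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=[
          "root_sign", "delayed_enhancement", "border",
          "internal_enhancement", "oedema"]))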

  1. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning

    PubMed Central

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters are required, together with a user-defined weighting between path cost and computational cost. However, the optimum settings of these tuning parameters vary according to application. Instead of trial and error, this paper presents an optimization method for tuning the parameters of the differential evolution algorithm for UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weighting between path cost and computational cost and then converges in the minimum number of generations required to meet that requirement. In conclusion, the proposed optimization of the differential evolution tuning parameters for UAV path planning improves both the final output path and the computational cost. PMID:26943630

  2. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeating the execution many times with various patterns of scenarios or parameters. Such repeated execution introduces considerable redundancy because the change from one scenario to the next is usually minor, for example, blocking only one road or changing the speed limit on several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed parts of later scenarios while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of the exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of the exact-differential simulation. In experiments with a Tokyo traffic simulation, the exact-differential simulation shows a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with the full simulation.

  3. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    PubMed

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to search gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting the three-dimensional data, i.e. gene-time-condition. Computational complexity for analyzing such data is very high, compared to the already difficult NP-hard two dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression pattern in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector detected clusters with differential expression patterns across conditions successfully. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/.

  4. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.

  5. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.

  6. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms developed for use in numerical integration of systems of nonhomogenous, nonlinear, first-order, ordinary differential equations. In comparison with integration algorithms, these algorithms offer greater stability and accuracy. Several asymptotically correct, thereby enabling retention of stability and accuracy when large increments of independent variable used. Accuracies attainable demonstrated by applying them to systems of nonlinear, first-order, differential equations that arise in study of viscoplastic behavior, spread of acquired immune-deficiency syndrome (AIDS) virus and predator/prey populations.

  7. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    PubMed

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously-validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of PDRP to discriminate PD patients from NC, differences and correlation between the corresponding subject scores and ROC analysis results across the different reconstruction algorithms. The expression of PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of the reconstruction algorithm. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in the PDRP expression among different algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials.

  8. Automatic differentiation of melanoma and clark nevus skin lesions

    NASA Astrophysics Data System (ADS)

    LeAnder, R. W.; Kasture, A.; Pandey, A.; Umbaugh, S. E.

    2007-03-01

    Skin cancer is the most common form of cancer in the United States. Although melanoma accounts for just 11% of all types of skin cancer, it is responsible for most of the deaths, claiming more than 7910 lives annually. Melanoma is visually difficult for clinicians to differentiate from Clark nevus lesions, which are benign. The application of pattern recognition techniques to these lesions may be useful as an educational tool for teaching physicians to differentiate lesions, as well as for contributing information about the essential optical characteristics that identify them. Purpose: This study sought to find the most effective features to extract from melanoma, melanoma in situ and Clark nevus lesions, and to find the most effective pattern-classification criteria and algorithms for differentiating those lesions, using the Computer Vision and Image Processing Tools (CVIPtools) software package. Methods: Due to changes in ambient lighting during the photographic process, color differences between images can occur. These differences were minimized by capturing dermoscopic images instead of photographic images. Differences in skin color between patients were minimized via image color normalization, by converting original color images to relative-color images. Relative-color images also helped minimize changes in color that occur due to changes in the photographic and digitization processes. Tumors in the relative-color images were segmented and morphologically filtered. Filtered relative-color tumor features were then extracted, and various pattern-classification schemes were applied. Results: Experimentation resulted in four useful pattern classification methods, the best of which achieved an overall classification rate of 100% for melanoma and melanoma in situ (grouped) and 60% for Clark nevus. Conclusion: Melanoma and melanoma in situ have feature parameters and feature values that are similar enough to be considered one class of tumor that significantly differs from Clark nevus. Consequently, grouping melanoma and melanoma in situ together achieves the best results in classifying and automatically differentiating melanoma from Clark nevus lesions.

  9. Algorithms for Differential Games with Bounded Control and States.

    DTIC Science & Technology

    1982-03-01

    University of California, Los Angeles, School of Engineering and Applied Science; final report, 11/29/79-11/28/…. Abstract fragment: "…problems are probably the most natural application of differential game theory and have been treated by many authors as such. Very few problems of this …"

  10. Brachydactyly E: isolated or as a feature of a syndrome.

    PubMed

    Pereda, Arrate; Garin, Intza; Garcia-Barcina, Maria; Gener, Blanca; Beristain, Elena; Ibañez, Ane Miren; Perez de Nanclares, Guiomar

    2013-09-12

    Brachydactyly (BD) refers to the shortening of the hands, feet or both. There are different types of BD; among them, type E (BDE) is a rare type that can present as an isolated feature or as part of more complex syndromes, such as: pseudohypoparathyroidism (PHP), hypertension with BD or Bilginturan BD (HTNB), BD with mental retardation (BDMR) or BDE with short stature, PTHLH type. Each syndrome has characteristic patterns of skeletal involvement. However, brachydactyly is not a constant feature and shows a high degree of phenotypic variability. In addition, there are other syndromes that can be misdiagnosed as brachydactyly type E, some of which will also be discussed. The objective of this review is to describe some of the syndromes in which BDE is present, focusing on clinical, biochemical and genetic characteristics as features of differential diagnoses, with the aim of establishing an algorithm for their differential diagnosis. As in our experience many of these patients are recruited at Endocrinology and/or Pediatric Endocrinology Services due to their short stature, we have focused the algorithm on those steps that could mainly help these professionals.

  11. Brachydactyly E: isolated or as a feature of a syndrome

    PubMed Central

    2013-01-01

    Brachydactyly (BD) refers to the shortening of the hands, feet or both. There are different types of BD; among them, type E (BDE) is a rare type that can present as an isolated feature or as part of more complex syndromes, such as: pseudohypoparathyroidism (PHP), hypertension with BD or Bilginturan BD (HTNB), BD with mental retardation (BDMR) or BDE with short stature, PTHLH type. Each syndrome has characteristic patterns of skeletal involvement. However, brachydactyly is not a constant feature and shows a high degree of phenotypic variability. In addition, there are other syndromes that can be misdiagnosed as brachydactyly type E, some of which will also be discussed. The objective of this review is to describe some of the syndromes in which BDE is present, focusing on clinical, biochemical and genetic characteristics as features of differential diagnoses, with the aim of establishing an algorithm for their differential diagnosis. As in our experience many of these patients are recruited at Endocrinology and/or Pediatric Endocrinology Services due to their short stature, we have focused the algorithm on those steps that could mainly help these professionals. PMID:24028571

  12. Entropy-based divergent and convergent modular pattern reveals additive and synergistic anticerebral ischemia mechanisms.

    PubMed

    Yu, Yanan; Zhang, Xiaoxu; Li, Bing; Zhang, Yingying; Liu, Jun; Li, Haixia; Chen, Yinying; Wang, Pengqian; Kang, Ruixia; Wu, Hongli; Wang, Zhong

    2016-12-01

    Module-based network analysis of diverse pharmacological mechanisms is critical to systematically understand combination therapies and disease outcomes. We first constructed drug-target ischemic networks for baicalin, jasminoidin, ursodeoxycholic acid, and their combinations (baicalin plus jasminoidin; jasminoidin plus ursodeoxycholic acid), and identified modules using an entropy-based clustering algorithm. Based on variation in topological similarity, modules 11, 7, 4, 8 and 3 were identified as the emerged responsive modules of the baicalin, jasminoidin, ursodeoxycholic acid, baicalin plus jasminoidin, and jasminoidin plus ursodeoxycholic acid groups, respectively, while modules 12, 8, 15, 17 and 9 were identified as the corresponding disappeared responsive modules. No overlapping differential biological processes were enriched between the pure emerged responsive modules of the baicalin plus jasminoidin and jasminoidin plus ursodeoxycholic acid groups, but two, nucleotide-excision repair and epithelial structure maintenance, were enriched by their co-disappeared responsive modules. We found an additive effect of baicalin and jasminoidin in a divergent pattern and a synergistic effect of jasminoidin and ursodeoxycholic acid in a convergent pattern on the "central hit strategy" of regulating inflammation against cerebral ischemia. The proposed module-based approach may provide a holistic view for understanding the multiple pharmacological mechanisms associated with differential phenotypes from the standpoint of modular pharmacology.

  13. Algorithms for network-based identification of differential regulators from transcriptome data: a systematic evaluation

    PubMed Central

    Yu, Hui; Mitra, Ramkrishna; Yang, Jing; Li, YuanYuan; Zhao, ZhongMing

    2016-01-01

    Identification of differential regulators is critical to understand the dynamics of cellular systems and molecular mechanisms of diseases. Several computational algorithms have recently been developed for this purpose by using transcriptome and network data. However, it remains largely unclear which algorithm performs better under a specific condition. Such knowledge is important for both appropriate application and future enhancement of these algorithms. Here, we systematically evaluated seven main algorithms (TED, TDD, TFactS, RIF1, RIF2, dCSA_t2t, and dCSA_r2t), using both simulated and real datasets. In our simulation evaluation, we artificially inactivated either a single regulator or multiple regulators and examined how well each algorithm detected known gold standard regulators. We found that all these algorithms could effectively discern signals arising from regulatory network differences, indicating the validity of our simulation schema. Among the seven tested algorithms, TED and TFactS were placed first and second when both discrimination accuracy and robustness against data variation were considered. When applied to two independent lung cancer datasets, both TED and TFactS replicated a substantial fraction of their respective differential regulators. Since TED and TFactS rely on two distinct features of transcriptome data, namely differential co-expression and differential expression, both may be applied as mutual references during practical application. PMID:25326829

  14. A Multilevel Algorithm for the Solution of Second Order Elliptic Differential Equations on Sparse Grids

    NASA Technical Reports Server (NTRS)

    Pflaum, Christoph

    1996-01-01

    A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete system of equations can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.

  15. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  16. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
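
    A hedged illustration of the rotation-invariant principle (not the exact one_DVP construction from the paper): build a one-dimensional feature vector for a star from the sorted angular distances to its neighbours in the field of view, then match it against a catalogue of precomputed vectors:

      # Sketch: rotation-invariant 1-D feature vectors from angular distances,
      # assuming unit direction vectors for the observed stars.
      import numpy as np

      def feature_vector(star_xyz, neighbours_xyz, length=8):
          """Sorted angular distances (radians) to the nearest neighbours."""
          cosang = np.clip(neighbours_xyz @ star_xyz, -1.0, 1.0)
          ang = np.sort(np.arccos(cosang))[:length]
          return np.pad(ang, (0, max(0, length - ang.size)))   # zero-pad if few stars

      def match(observed_vec, catalogue_vecs):
          """Return the catalogue index with the closest feature vector (L2)."""
          return int(np.argmin(np.linalg.norm(catalogue_vecs - observed_vec, axis=1)))

    Because the vector depends only on mutual angular separations, it is unchanged when the stellar image rotates, which is the property exploited above.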

  17. [GNU Pattern: open source pattern hunter for biological sequences based on SPLASH algorithm].

    PubMed

    Xu, Ying; Li, Yi-xue; Kong, Xiang-yin

    2005-06-01

    The aim of this work was to construct a high-performance open source software engine based on the IBM SPLASH algorithm for subsequent research on pattern discovery. GNU Pattern (Gpat), based on the SPLASH algorithm, was developed using open source software and efficiently implements the core part of the algorithm. The full source code of Gpat is available for other researchers to modify under the GNU license. Gpat is a successful implementation of the SPLASH algorithm and can be used as a basic framework for further research on pattern recognition in biological sequences.

  18. Use of artificial bee colonies algorithm as numerical approximation of differential equations solution

    NASA Astrophysics Data System (ADS)

    Fikri, Fariz Fahmi; Nuraini, Nuning

    2018-03-01

    Differential equations form a branch of mathematics that is closely related to problems in everyday life. Many such problems can be modeled as differential equations or systems of differential equations, such as the Lotka-Volterra model and the SIR model. Solving differential equations is therefore very important. Some differential equations are difficult to solve analytically, so numerical methods are needed. Widely used numerical methods for solving differential equations include the Euler method, the Heun method, and Runge-Kutta methods. However, these methods still have restrictions that prevent their use on more complex problems, such as evaluation intervals that cannot be changed freely. New methods are needed to overcome these limitations. One method that can be used is the artificial bee colony algorithm, a metaheuristic that can escape local regions of the search space and explore the solution space, and thus can obtain better solutions than other methods.
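
    The idea can be sketched by fitting the coefficients of a polynomial trial solution so that the ODE residual and the initial condition are minimized at collocation points; for brevity the optimizer below is a bare-bones employed-bee-style perturbation loop rather than a full artificial bee colony implementation:

      # Sketch: approximate y' = -y, y(0) = 1 on [0, 2] by minimizing the
      # residual of a degree-4 polynomial trial solution with a simple
      # population-based search.
      import numpy as np

      t = np.linspace(0.0, 2.0, 21)              # collocation points
      def trial(c, t):                           # polynomial trial solution y(t)
          return sum(ci * t**i for i, ci in enumerate(c))
      def trial_dot(c, t):                       # its derivative y'(t)
          return sum(i * ci * t**(i - 1) for i, ci in enumerate(c) if i > 0)
      def cost(c):                               # ODE residual + initial condition
          return np.sum((trial_dot(c, t) + trial(c, t))**2) + 100 * (trial(c, 0) - 1)**2

      rng = np.random.default_rng(0)
      food = rng.uniform(-1, 1, size=(30, 5))    # candidate coefficient vectors
      for _ in range(500):
          for i in range(len(food)):             # employed-bee-style perturbation
              j = rng.integers(len(food))
              cand = food[i] + rng.uniform(-1, 1, 5) * (food[i] - food[j])
              if cost(cand) < cost(food[i]):
                  food[i] = cand
      best = food[np.argmin([cost(c) for c in food])]
      print("max error vs exp(-t):", np.abs(trial(best, t) - np.exp(-t)).max())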

  19. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  20. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    PubMed

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, three-level discrete wavelet transformation is applied to the luminance component Y, generating four different frequency sub-bands. Singular value decomposition is then performed on these sub-bands. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. Our new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm has better performance in terms of invisibility and robustness.
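
    A minimal sketch of the embedding step, assuming the PyWavelets package, a Haar wavelet and a fixed scaling factor alpha; the differential evolution search for optimal scaling factors and the watermark scrambling described above are omitted:

      # Sketch: embed a watermark into the singular values of the third-level
      # approximation sub-band of the luminance channel.
      import numpy as np
      import pywt

      def embed(luma: np.ndarray, watermark: np.ndarray, alpha: float = 0.05):
          coeffs = pywt.wavedec2(luma, "haar", level=3)
          cA3 = coeffs[0]                               # low-frequency sub-band
          U, s, Vt = np.linalg.svd(cA3, full_matrices=False)
          wm = np.resize(watermark, s.shape)            # match singular-value length
          coeffs[0] = U @ np.diag(s + alpha * wm) @ Vt  # modify singular values
          return pywt.waverec2(coeffs, "haar")

    In the full scheme, alpha (one scaling factor per sub-band) would be the decision variable optimized by differential evolution against invisibility and robustness criteria.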

  1. Computational discovery of pathway-level genetic vulnerabilities in non-small-cell lung cancer | Office of Cancer Genomics

    Cancer.gov

    Novel approaches are needed for discovery of targeted therapies for non-small-cell lung cancer (NSCLC) that are specific to certain patients. Whole genome RNAi screening of lung cancer cell lines provides an ideal source for determining candidate drug targets. Unsupervised learning algorithms uncovered patterns of differential vulnerability across lung cancer cell lines to loss of functionally related genes. Such genetic vulnerabilities represent candidate targets for therapy and are found to be involved in splicing, translation and protein folding.

  2. A plant cell division algorithm based on cell biomechanics and ellipse-fitting.

    PubMed

    Abera, Metadel K; Verboven, Pieter; Defraeye, Thijs; Fanta, Solomon Workneh; Hertog, Maarten L A T M; Carmeliet, Jan; Nicolai, Bart M

    2014-09-01

    The importance of cell division models in cellular pattern studies has been acknowledged since the 19th century. Most of the available models developed to date are limited to symmetric cell division with isotropic growth. Often, the actual growth of the cell wall is either not considered or is updated intermittently on a separate time scale to the mechanics. This study presents a generic algorithm that accounts for both symmetrically and asymmetrically dividing cells with isotropic and anisotropic growth. Actual growth of the cell wall is simulated simultaneously with the mechanics. The cell is considered as a closed, thin-walled structure, maintained in tension by turgor pressure. The cell walls are represented as linear elastic elements that obey Hooke's law. Cell expansion is induced by turgor pressure acting on the yielding cell-wall material. A system of differential equations for the positions and velocities of the cell vertices as well as for the actual growth of the cell wall is established. Readiness to divide is determined based on cell size. An ellipse-fitting algorithm is used to determine the position and orientation of the dividing wall. The cell vertices, walls and cell connectivity are then updated and cell expansion resumes. Comparisons are made with experimental data from the literature. The generic plant cell division algorithm has been implemented successfully. It can handle both symmetrically and asymmetrically dividing cells coupled with isotropic and anisotropic growth modes. Development of the algorithm highlighted the importance of ellipse-fitting to produce randomness (biological variability) even in symmetrically dividing cells. Unlike previous models, a differential equation is formulated for the resting length of the cell wall to simulate actual biological growth and is solved simultaneously with the position and velocity of the vertices. The algorithm presented can produce different tissues varying in topological and geometrical properties. This flexibility to produce different tissue types gives the model great potential for use in investigations of plant cell division and growth in silico.
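
    A minimal sketch of the geometric step only (not the biomechanical model): fitting a principal-axis frame to the cell's vertex polygon and placing the dividing wall through the centroid, perpendicular to the long axis. The vertex coordinates and the use of a covariance-based principal-axis fit in place of a full algebraic ellipse fit are illustrative assumptions.

```python
import numpy as np

# Polygon vertices of a hypothetical cell, ordered counter-clockwise.
verts = np.array([[0.0, 0.0], [2.0, 0.2], [2.6, 1.0], [2.1, 1.9], [0.3, 1.6]])

centroid = verts.mean(axis=0)
# Principal axes of the vertex cloud approximate the best-fit ellipse axes.
cov = np.cov((verts - centroid).T)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
short_axis = eigvecs[:, 0]                 # direction of least spread
long_axis = eigvecs[:, 1]                  # direction of greatest spread

# Dividing wall: a segment through the centroid along the short axis,
# i.e. perpendicular to the long axis of the fitted ellipse.
half_len = 2.0 * np.sqrt(eigvals[0])       # illustrative wall half-length
wall = np.array([centroid - half_len * short_axis,
                 centroid + half_len * short_axis])
print("centroid:", centroid)
print("dividing wall endpoints:\n", wall)
```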

  3. Applications of multi-frequency single beam sonar fisheries analysis methods for seep quantification and characterization

    NASA Astrophysics Data System (ADS)

    Price, V.; Weber, T.; Jerram, K.; Doucet, M.

    2016-12-01

    The analysis of multi-frequency, narrow-band single-beam acoustic data for fisheries applications has long been established, with methodology focusing on characterizing targets in the water column by utilizing complex algorithms and false-color time series data to create and compare frequency response curves for dissimilar biological groups. These methods were built on concepts developed for multi-frequency analysis of satellite imagery for terrestrial analysis and have been applied to a broad range of data types and applications. Single-beam systems operating at multiple frequencies are also used for the detection and identification of seeps in water column data. Here we incorporate the same analysis and visualization techniques used for fisheries applications to attempt to characterize and quantify seeps by creating and comparing frequency response curves and applying false coloration to shallow and deep multi-channel seep data. From this information, we can establish methods to differentiate bubble size in the echogram and differentiate seep composition. These techniques are also useful in differentiating plume content from biological noise (volume reverberation) created by euphausiid layers and fish with gas-filled swim bladders. Combining the multiple frequencies using false coloring and other image analysis techniques, after applying established normalization and beam pattern correction algorithms, is a novel approach to quantitatively describing seeps. Further, this information could be paired with geological models, backscatter, and bathymetry data to assess seep distribution.

  4. Algorithmic framework for group analysis of differential equations and its application to generalized Zakharov-Kuznetsov equations

    NASA Astrophysics Data System (ADS)

    Huang, Ding-jiang; Ivanova, Nataliya M.

    2016-02-01

    In this paper, we explain in more detail the modern treatment of the problem of group classification of (systems of) partial differential equations (PDEs) from the algorithmic point of view. More precisely, we revise the classical Lie algorithm for constructing symmetries of differential equations, describe the group classification algorithm and discuss the process of reducing (systems of) PDEs to (systems of) equations with a smaller number of independent variables in order to construct invariant solutions. The group classification algorithm and reduction process are illustrated by the example of the generalized Zakharov-Kuznetsov (GZK) equations of the form u_t + (F(u))_xxx + (G(u))_xyy + (H(u))_x = 0. As a result, a complete group classification of the GZK equations is performed and a number of new interesting nonlinear invariant models which have non-trivial invariance algebras are obtained. Lie symmetry reductions and exact solutions for two important invariant models, i.e., the classical and modified Zakharov-Kuznetsov equations, are constructed. The algorithmic framework for group analysis of differential equations presented in this paper can also be applied to other nonlinear PDEs.

  5. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@orc.ru

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by its simplicity and by the possibility of solving nonlinear problems by correcting the operator in accordance with the solution obtained in the previous iteration.
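
    The EDELWEISS code and its MPI parallelization are not reproduced here; the sketch below only illustrates the underlying representation, expanding the solution of a small linear system y' = Ay on [0, 1] in shifted Legendre polynomials and determining the coefficients by least-squares collocation. The test matrix, polynomial degree and collocation grid are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Linear Cauchy problem y' = A y, y(0) = y0 on [0, 1]; exact first component is cos(2t).
A = np.array([[0.0, 1.0], [-4.0, 0.0]])
y0 = np.array([1.0, 0.0])
d, N = 2, 12                                  # system size, polynomial degree
t = np.linspace(0.0, 1.0, 40)                 # collocation points
s = 2.0 * t - 1.0                             # shift [0, 1] -> [-1, 1]

# Basis values P_k(s) and derivatives d/dt P_k(2t - 1) = 2 P'_k(s).
I = np.eye(N + 1)
P = np.stack([leg.legval(s, I[k]) for k in range(N + 1)], axis=1)
dP = np.stack([2.0 * leg.legval(s, leg.legder(I[k])) for k in range(N + 1)], axis=1)
p0 = np.array([leg.legval(-1.0, I[k]) for k in range(N + 1)])  # P_k at t = 0

# Stack the collocation equations dP c_j - sum_m A[j, m] P c_m = 0
# and the initial condition, then solve for all coefficients at once.
M = np.vstack([np.kron(np.eye(d), dP) - np.kron(A, P),
               np.kron(np.eye(d), p0[None, :])])
rhs = np.concatenate([np.zeros(d * len(t)), y0])
c, *_ = np.linalg.lstsq(M, rhs, rcond=None)

C = c.reshape(d, N + 1).T                     # column j = coefficients of y_j
y = P @ C
print("max error vs cos(2t):", np.max(np.abs(y[:, 0] - np.cos(2.0 * t))))
```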

  6. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood detection is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of the bit period to remove the influence of the NH code. Secondly, maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. This algorithm can remove the effect of the BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.

  7. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.

  8. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, no previous algorithm is sufficiently effective and efficient. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model. The algorithm combines two methods for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of the GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer the GRNs, optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm; genetic algorithms and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms the previous algorithms. PMID:23226565
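
    As a sketch of the SVD step only (the gravitation field search is not reproduced), the snippet below takes simulated time-series expression data for a linear model dx/dt = A x and characterizes the space of connectivity matrices consistent with the data: the minimum-norm least-squares solution plus the null-space directions along which candidate solutions may vary. All data, dimensions and the Euler simulation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated expression data for n genes at m time points (deliberately few points,
# so the inference problem is underdetermined, as is typical for GRNs).
n, m, dt = 6, 4, 0.1
A_true = rng.normal(scale=0.5, size=(n, n))
X = np.zeros((n, m))
X[:, 0] = rng.uniform(0.5, 1.5, n)
for k in range(m - 1):                       # simple Euler simulation
    X[:, k + 1] = X[:, k] + dt * A_true @ X[:, k]

dX = (X[:, 1:] - X[:, :-1]) / dt             # finite-difference derivatives
Xk = X[:, :-1]

# SVD of the data matrix characterizes all A with A @ Xk = dX:
# the minimum-norm solution plus arbitrary null-space components.
U, s, Vt = np.linalg.svd(Xk, full_matrices=True)
tol = max(Xk.shape) * np.finfo(float).eps * s.max()
r = int(np.sum(s > tol))                     # numerical rank
A_min = dX @ Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T
null_dirs = U[:, r:]                         # Z @ null_dirs.T can be added to A_min

print("rank of data matrix:", r)
print("residual of minimum-norm A:", np.linalg.norm(A_min @ Xk - dX))
```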

  9. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
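
    As a toy illustration of quasi-Newton parameter estimation (without the functional/delay structure treated in the paper), the sketch below fits the decay rate of a scalar ODE to noisy observations using SciPy's BFGS implementation; the model, data and noise level are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# "True" system dy/dt = -a*y with a = 1.3; generate noisy observations.
a_true, y_init = 1.3, 2.0
t_obs = np.linspace(0.0, 4.0, 25)
y_obs = y_init * np.exp(-a_true * t_obs) + rng.normal(scale=0.02, size=t_obs.size)

def misfit(params):
    """Sum-of-squares misfit between the model trajectory and the observations."""
    a = params[0]
    sol = solve_ivp(lambda t, y: -a * y, (t_obs[0], t_obs[-1]), [y_init],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return np.sum((sol.y[0] - y_obs) ** 2)

# Quasi-Newton (BFGS) minimization with a finite-difference gradient.
result = minimize(misfit, x0=[0.5], method="BFGS")
print("estimated a:", result.x[0], " true a:", a_true)
```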

  10. Pattern Classifications Using Grover's and Ventura's Algorithms in a Two-qubits System

    NASA Astrophysics Data System (ADS)

    Singh, Manu Pratap; Radhey, Kishori; Rajput, B. S.

    2018-03-01

    By carrying out the classification of patterns in a two-qubit system, separately using Grover's and Ventura's algorithms on the different possible superpositions, it has been shown that the exclusion superposition and the phase-invariance superposition are the most suitable search states obtained from two-pattern start-states and one-pattern start-states, respectively, for the simultaneous classification of patterns. The higher effectiveness of Grover's algorithm for large search states has been verified, but the higher effectiveness of Ventura's algorithm for smaller databases has been contradicted in two-qubit systems, and it has been demonstrated that unknown patterns (not present in the database concerned) are classified more efficiently than known ones (present in the database) by both algorithms. It has also been demonstrated that the different states of the Singh-Rajput MES obtained from the corresponding self-single-pattern start states are the most suitable search states for the classification of the patterns |00>, |01>, |10> and |11>, respectively, on the second iteration of Grover's method or the first operation of Ventura's algorithm.

  11. Medulloblastoma with myogenic and/or melanotic differentiation does not align immunohistochemically with the genetically defined molecular subgroups.

    PubMed

    Gupta, Kirti; Jogunoori, Swathi; Satapathy, Ayusman; Salunke, Pravin; Kumar, Narendra; Radotra, Bishan Dass; Vasishta, Rakesh Kumar

    2018-05-01

    The World Health Organization classification of central nervous system neoplasms (2016 update) recognizes 4 histological variants and genetically defined molecular subgroups within medulloblastoma (MB). MB with myogenic differentiation is one of the rare variants, which is usually recognized as a pattern alongside the known histological variants. Because of its rarity, less is known about its molecular landscape and importantly about its placement in the current molecular schema. We aimed to analyze this rare variant for expression of 3 immunohistochemical markers conventionally used in molecular stratification of MB. Demographic profile and imaging details with survival outcome were also analyzed. Sixty-five MB cases were molecularly stratified using immunohistochemical markers (YAP1, GAB1, β-catenin). MB with myogenic differentiation and MB cases showing variable immunoreactivity with the above 3 antibodies were further evaluated for smooth muscle actin, desmin, myogenin, and HMB45. Seven cases were categorized as MB with myogenic and/or melanotic differentiation. Age ranged from 2 to 40 years with a male-to-female ratio of 1:1.3. In 4 cases, myogenic or melanotic differentiation was evident on histology, whereas in 3, differentiation was highlighted only with muscle markers. Interestingly, all 7 cases showed variable immunoreactivity with 3 molecular markers and did not follow the conventionally accepted algorithm used for molecular stratification. Follow-up period ranged from 9 to 57 months. Overall survival revealed a varied pattern, with 3 deaths and 4 patients being alive with no evidence of disease at last follow-up. Our results provide evidence that these variants are distinct and do not align immunohistochemically with the currently recognized genetic subgroups. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on the differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user's trajectory data; secondly, it forms polygons from the protected points and the adjacent, frequently accessed points selected from the accessing-point database, and then calculates the polygon centroids; finally, noise is added to the polygon centroids by the differential privacy method, the noisy centroids replace the protected points, and the algorithm constructs and issues the new trajectory data. Experiments show that the proposed algorithms run quickly, that the scheme provides effective privacy protection, and that it preserves good data usability.
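
    A minimal sketch of the noise-addition step only, assuming the Laplace mechanism with a fixed sensitivity; the polygon construction from frequently accessed points is not reproduced, and the coordinates, sensitivity and epsilon values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_perturb_centroid(polygon, epsilon, sensitivity):
    """Replace a protected point by the polygon centroid plus Laplace noise.

    polygon: (k, 2) array of vertices (protected point + nearby frequent points).
    epsilon: differential-privacy budget; sensitivity: assumed L1 sensitivity
    of the centroid query (it depends on the coordinate range in a real system).
    """
    centroid = polygon.mean(axis=0)
    scale = sensitivity / epsilon
    return centroid + rng.laplace(loc=0.0, scale=scale, size=2)

# Hypothetical polygon around one protected trajectory point (map coordinates).
poly = np.array([[114.25, 30.58], [114.27, 30.60], [114.26, 30.62], [114.24, 30.60]])
released = laplace_perturb_centroid(poly, epsilon=0.5, sensitivity=0.01)
print("released point:", released)
```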

  13. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of the refractive index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from the new compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of the differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem in DPCI can be transformed into an already solved problem in transmission CT imaging. Our algorithm has the potential to reconstruct the refractive index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  14. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  15. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  16. Identification and handling of artifactual gene expression profiles emerging in microarray hybridization experiments

    PubMed Central

    Brodsky, Leonid; Leontovich, Andrei; Shtutman, Michael; Feinstein, Elena

    2004-01-01

    Mathematical methods of analysis of microarray hybridizations deal with gene expression profiles as elementary units. However, some of these profiles do not reflect a biologically relevant transcriptional response, but rather stem from technical artifacts. Here, we describe two technically independent but rationally interconnected methods for identification of such artifactual profiles. Our diagnostics are based on detection of deviations from uniformity, which is assumed as the main underlying principle of microarray design. Method 1 is based on detection of non-uniformity of microarray distribution of printed genes that are clustered based on the similarity of their expression profiles. Method 2 is based on evaluation of the presence of gene-specific microarray spots within the slides’ areas characterized by an abnormal concentration of low/high differential expression values, which we define as ‘patterns of differentials’. Applying two novel algorithms, for nested clustering (method 1) and for pattern detection (method 2), we can make a dual estimation of the profile’s quality for almost every printed gene. Genes with artifactual profiles detected by method 1 may then be removed from further analysis. Suspicious differential expression values detected by method 2 may be either removed or weighted according to the probabilities of patterns that cover them, thus diminishing their input in any further data analysis. PMID:14999086

  17. Entropy-based divergent and convergent modular pattern reveals additive and synergistic anticerebral ischemia mechanisms

    PubMed Central

    Yu, Yanan; Zhang, Xiaoxu; Li, Bing; Zhang, Yingying; Liu, Jun; Li, Haixia; Chen, Yinying; Wang, Pengqian; Kang, Ruixia; Wu, Hongli

    2016-01-01

    Module-based network analysis of diverse pharmacological mechanisms is critical to systematically understanding combination therapies and disease outcomes. We first constructed drug-target ischemic networks for the baicalin, jasminoidin and ursodeoxycholic acid groups and for their combinations, baicalin plus jasminoidin and jasminoidin plus ursodeoxycholic acid, and identified modules using the entropy-based clustering algorithm. Based on variation of topological similarity, modules 11, 7, 4, 8 and 3 were identified as the emerged responsive modules of baicalin, jasminoidin, ursodeoxycholic acid, baicalin plus jasminoidin, and jasminoidin plus ursodeoxycholic acid, respectively, while modules 12, 8, 15, 17 and 9 were identified as the corresponding disappeared responsive modules. No overlapping differential biological processes were enriched between the purely emerged responsive modules of baicalin plus jasminoidin and of jasminoidin plus ursodeoxycholic acid, but two processes, nucleotide-excision repair and epithelial structure maintenance, were enriched by their co-disappeared responsive modules. We found an additive effect of baicalin plus jasminoidin in a divergent pattern and a synergistic effect of jasminoidin plus ursodeoxycholic acid in a convergent pattern in a “central hit strategy” of regulating inflammation against cerebral ischemia. The proposed module-based approach may provide a holistic view for understanding multiple pharmacological mechanisms associated with differential phenotypes from the standpoint of modular pharmacology. PMID:27480252

  18. The Pandora multi-algorithm approach to automated pattern recognition in LAr TPC detectors

    NASA Astrophysics Data System (ADS)

    Marshall, J. S.; Blake, A. S. T.; Thomson, M. A.; Escudero, L.; de Vries, J.; Weston, J.; MicroBooNE Collaboration

    2017-09-01

    The development and operation of Liquid Argon Time Projection Chambers (LAr TPCs) for neutrino physics has created a need for new approaches to pattern recognition, in order to fully exploit the superb imaging capabilities offered by this technology. The Pandora Software Development Kit provides functionality to aid the process of designing, implementing and running pattern recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition: individual algorithms each address a specific task in a particular topology; a series of many tens of algorithms then carefully builds-up a picture of the event. The input to the Pandora pattern recognition is a list of 2D Hits. The output from the chain of over 70 algorithms is a hierarchy of reconstructed 3D Particles, each with an identified particle type, vertex and direction.

  19. PASSion: a pattern growth algorithm-based pipeline for splice junction detection in paired-end RNA-Seq data.

    PubMed

    Zhang, Yanju; Lameijer, Eric-Wubbo; 't Hoen, Peter A C; Ning, Zemin; Slagboom, P Eline; Ye, Kai

    2012-02-15

    RNA-seq is a powerful technology for the study of transcriptome profiles that uses deep-sequencing technologies. Moreover, it may be used for cellular phenotyping and may help establish the etiology of diseases characterized by abnormal splicing patterns. In RNA-Seq, the exact nature of splicing events is buried in the reads that span exon-exon boundaries. The accurate and efficient mapping of these reads to the reference genome is a major challenge. We developed PASSion, a pattern growth algorithm-based pipeline for splice site detection in paired-end RNA-Seq reads. Comparing the performance of PASSion to three existing RNA-Seq analysis pipelines, TopHat, MapSplice and HMMSplicer, revealed that PASSion is competitive with these packages. Moreover, the performance of PASSion is not affected by read length and coverage. It performs better than the other three approaches when detecting junctions in highly abundant transcripts. PASSion has the ability to detect junctions that do not have known splicing motifs, which cannot be found by the other tools. For the two public RNA-Seq datasets, PASSion predicted ≈137,000 and 173,000 splicing events, of which, on average, 82% are known junctions annotated in the Ensembl transcript database and 18% are novel. In addition, our package can discover differential and shared splicing patterns among multiple samples. The code and utilities can be freely downloaded from https://trac.nbic.nl/passion and ftp://ftp.sanger.ac.uk/pub/zn1/passion.

  20. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2018-01-01

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  1. GPU-based Branchless Distance-Driven Projection and Backprojection

    PubMed Central

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-01-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to be implemented on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation as three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. GPU based branchless DD method was evaluated by iterative reconstruction algorithms with both simulation and real datasets. It obtained visually identical images as the CPU reference algorithm. PMID:29333480

  2. GPU-based Branchless Distance-Driven Projection and Backprojection.

    PubMed

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to be implemented on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation as three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. GPU based branchless DD method was evaluated by iterative reconstruction algorithms with both simulation and real datasets. It obtained visually identical images as the CPU reference algorithm.
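
    The GPU texture-memory details are outside the scope of a short example, but the branchless factorization itself can be shown in one dimension: integrate the voxel row into a cumulative sum, linearly interpolate that integral at the detector-cell boundaries (assumed already mapped into voxel coordinates), and differentiate to obtain per-cell averages. The simple parallel geometry and the toy numbers are assumptions for illustration.

```python
import numpy as np

def branchless_dd_project_1d(voxels, voxel_edges, det_edges):
    """Branchless distance-driven projection of a 1D voxel row onto detector cells.

    voxels: values on len(voxel_edges) - 1 cells; voxel_edges / det_edges:
    monotonically increasing boundary positions on a common axis (the detector
    boundaries are assumed to be already mapped into voxel coordinates).
    """
    widths = np.diff(voxel_edges)
    # Step 1: integration - cumulative integral of the voxel profile at the edges.
    cumint = np.concatenate([[0.0], np.cumsum(voxels * widths)])
    # Step 2: linear interpolation of the integral at detector-cell boundaries.
    cum_at_det = np.interp(det_edges, voxel_edges, cumint)
    # Step 3: differentiation - per-cell integrals divided by the cell widths.
    return np.diff(cum_at_det) / np.diff(det_edges)

voxels = np.array([1.0, 2.0, 0.5, 3.0])
voxel_edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
det_edges = np.array([0.0, 1.5, 2.5, 4.0])       # coarser, shifted detector cells
print(branchless_dd_project_1d(voxels, voxel_edges, det_edges))
```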

  3. Shape Optimization of Rubber Bushing Using Differential Evolution Algorithm

    PubMed Central

    2014-01-01

    The objective of this study is to design a rubber bushing with the desired stiffness characteristics in order to achieve the required ride quality of the vehicle. A differential evolution algorithm based approach is developed to optimize the rubber bushing by integrating a finite element code running in batch mode to compute the objective function values for each generation. Two case studies are given to illustrate the application of the proposed approach. Optimum shape parameters of the 2D bushing model were determined by shape optimization using the differential evolution algorithm. PMID:25276848

  4. Hybridization between multi-objective genetic algorithm and support vector machine for feature selection in walker-assisted gait.

    PubMed

    Martins, Maria; Costa, Lino; Frizera, Anselmo; Ceres, Ramón; Santos, Cristina

    2014-03-01

    Walker devices are often prescribed incorrectly to patients, leading to increased dissatisfaction and the occurrence of several problems, such as discomfort and pain. Thus, it is necessary to objectively evaluate the effects that assisted gait can have on the gait patterns of walker users, compared with non-assisted gait. A gait analysis focusing on spatiotemporal and kinematic parameters is carried out for this purpose. However, gait analysis yields redundant information that is often difficult to interpret. This study addresses the problem of selecting the most relevant gait features required to differentiate between assisted and non-assisted gait. For that purpose, an efficient approach is presented that combines evolutionary techniques, based on genetic algorithms, with support vector machine algorithms to discriminate differences between assisted and non-assisted gait with a walker with forearm supports. For comparison purposes, other classification algorithms are also evaluated. Results with healthy subjects show that the main differences are characterized by balance and joint excursion in the sagittal plane. These results, confirmed by clinical evidence, allow the conclusion that this technique is an efficient feature selection approach. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Uniqueness and reconstruction in magnetic resonance-electrical impedance tomography (MR-EIT).

    PubMed

    Ider, Y Ziya; Onart, Serkan; Lionheart, William R B

    2003-05-01

    Magnetic resonance-electrical impedance tomography (MR-EIT) was first proposed in 1992. Since then various reconstruction algorithms have been suggested and applied. These algorithms use peripheral voltage measurements and internal current density measurements in different combinations. In this study the problem of MR-EIT is treated as a hyperbolic system of first-order partial differential equations, and three numerical methods are proposed for its solution. This approach is not utilized in any of the algorithms proposed earlier. The numerical solution methods are integration along equipotential surfaces (method of characteristics), integration on a Cartesian grid, and inversion of a system matrix derived by a finite difference formulation. It is shown that if some uniqueness conditions are satisfied, then using at least two injected current patterns, resistivity can be reconstructed apart from a multiplicative constant. This constant can then be identified using a single voltage measurement. The methods proposed are direct, non-iterative, and valid and feasible for 3D reconstructions. They can also be used to easily obtain slice and field-of-view images from a 3D object. 2D simulations are made to illustrate the performance of the algorithms.

  6. Pulse retrieval algorithm for interferometric frequency-resolved optical gating based on differential evolution.

    PubMed

    Hyyti, Janne; Escoto, Esmerando; Steinmeyer, Günter

    2017-10-01

    A novel algorithm for the ultrashort laser pulse characterization method of interferometric frequency-resolved optical gating (iFROG) is presented. Based on a genetic method, namely, differential evolution, the algorithm can exploit all available information of an iFROG measurement to retrieve the complex electric field of a pulse. The retrieval is subjected to a series of numerical tests to prove the robustness of the algorithm against experimental artifacts and noise. These tests show that the integrated error-correction mechanisms of the iFROG method can be successfully used to remove the effect from timing errors and spectrally varying efficiency in the detection. Moreover, the accuracy and noise resilience of the new algorithm are shown to outperform retrieval based on the generalized projections algorithm, which is widely used as the standard method in FROG retrieval. The differential evolution algorithm is further validated with experimental data, measured with unamplified three-cycle pulses from a mode-locked Ti:sapphire laser. Additionally introducing group delay dispersion in the beam path, the retrieval results show excellent agreement with independent measurements with a commercial pulse measurement device based on spectral phase interferometry for direct electric-field retrieval. Further experimental tests with strongly attenuated pulses indicate resilience of differential-evolution-based retrieval against massive measurement noise.

  7. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.

  8. Learning Behavior Characterization with Multi-Feature, Hierarchical Activity Sequences

    ERIC Educational Resources Information Center

    Ye, Cheng; Segedy, James R.; Kinnebrew, John S.; Biswas, Gautam

    2015-01-01

    This paper discusses Multi-Feature Hierarchical Sequential Pattern Mining, MFH-SPAM, a novel algorithm that efficiently extracts patterns from students' learning activity sequences. This algorithm extends an existing sequential pattern mining algorithm by dynamically selecting the level of specificity for hierarchically-defined features…

  9. Privacy-preserving heterogeneous health data sharing.

    PubMed

    Mohammed, Noman; Jiang, Xiaoqian; Chen, Rui; Fung, Benjamin C M; Ohno-Machado, Lucila

    2013-05-01

    Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees and makes no assumptions about an adversary's background knowledge. All existing solutions that ensure ε-differential privacy handle the problem of disclosing relational and set-valued data in a privacy-preserving manner separately. In this paper, we propose an algorithm that considers both relational and set-valued data in differentially private disclosure of healthcare data. The proposed approach makes a simple yet fundamental switch in differentially private algorithm design: instead of listing all possible records (ie, a contingency table) for noise addition, records are generalized before noise addition. The algorithm first generalizes the raw data in a probabilistic way, and then adds noise to guarantee ε-differential privacy. We showed that the disclosed data could be used effectively to build a decision tree induction classifier. Experimental results demonstrated that the proposed algorithm is scalable and performs better than existing solutions for classification analysis. The resulting utility may degrade when the output domain size is very large, making it potentially inappropriate to generate synthetic data for large health databases. Unlike existing techniques, the proposed algorithm allows the disclosure of health data containing both relational and set-valued data in a differentially private manner, and can retain essential information for discriminative analysis.

  10. Privacy-preserving heterogeneous health data sharing

    PubMed Central

    Mohammed, Noman; Jiang, Xiaoqian; Chen, Rui; Fung, Benjamin C M; Ohno-Machado, Lucila

    2013-01-01

    Objective Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees and makes no assumptions about an adversary's background knowledge. All existing solutions that ensure ε-differential privacy handle the problem of disclosing relational and set-valued data in a privacy-preserving manner separately. In this paper, we propose an algorithm that considers both relational and set-valued data in differentially private disclosure of healthcare data. Methods The proposed approach makes a simple yet fundamental switch in differentially private algorithm design: instead of listing all possible records (ie, a contingency table) for noise addition, records are generalized before noise addition. The algorithm first generalizes the raw data in a probabilistic way, and then adds noise to guarantee ε-differential privacy. Results We showed that the disclosed data could be used effectively to build a decision tree induction classifier. Experimental results demonstrated that the proposed algorithm is scalable and performs better than existing solutions for classification analysis. Limitation The resulting utility may degrade when the output domain size is very large, making it potentially inappropriate to generate synthetic data for large health databases. Conclusions Unlike existing techniques, the proposed algorithm allows the disclosure of health data containing both relational and set-valued data in a differentially private manner, and can retain essential information for discriminative analysis. PMID:23242630

  11. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
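
    A rough sketch of the DPCM idea described above: transform an image into a differential image before lossless compression. Here zlib (a dictionary/entropy coder) stands in for the LZW and Huffman coders discussed in the article, and the synthetic "image" is invented; on smooth images the differential version typically compresses smaller.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Synthetic smooth 8-bit "image": neighbouring pixels are highly correlated.
x = np.linspace(0, 4 * np.pi, 256)
img = (127 + 60 * np.sin(x)[None, :] + 60 * np.cos(x)[:, None]
       + rng.normal(scale=2, size=(256, 256))).clip(0, 255).astype(np.uint8)

# DPCM transform: keep the first column, store row-wise differences elsewhere.
diff = img.astype(np.int16)
diff[:, 1:] = diff[:, 1:] - img[:, :-1].astype(np.int16)
dpcm = (diff & 0xFF).astype(np.uint8)          # wrap to one byte per pixel (lossless)

raw = zlib.compress(img.tobytes(), level=9)
dpcm_compressed = zlib.compress(dpcm.tobytes(), level=9)
print("plain :", len(raw), "bytes")
print("DPCM  :", len(dpcm_compressed), "bytes")

# Decoding check: undo the modulo-256 differences by a cumulative sum per row.
restored = np.cumsum(dpcm, axis=1, dtype=np.int64).astype(np.uint8)
assert np.array_equal(restored, img)
```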

  12. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    PubMed

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, a neural network, belonging to the larger field of soft computing, is used to model the equation in an unsupervised manner. The proposed approximate solutions of the higher order ordinary differential equation are calculated with neural network weights trained by a genetic algorithm, and by pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with standard solutions. The accuracy and convergence of the designed schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.
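
    The neural-network/genetic-algorithm scheme itself is not reproduced here; as context for the problem being solved, the sketch below computes a standard numerical reference solution of the Lane-Emden equation y'' + (2/x) y' + y^n = 0, y(0) = 1, y'(0) = 0, stepping off the x = 0 singularity with the series y(x) ≈ 1 - x²/6. The starting offset and index value are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden_reference(n_index, x_end=5.0):
    """Reference solution of the Lane-Emden equation of index n_index."""
    def rhs(x, u):
        y, dy = u
        return [dy, -2.0 * dy / x - np.sign(y) * np.abs(y) ** n_index]

    # Start slightly off the singular point x = 0 using the series expansion
    # y(x) ~ 1 - x^2/6, y'(x) ~ -x/3 near the centre.
    x0 = 1e-4
    u0 = [1.0 - x0**2 / 6.0, -x0 / 3.0]
    return solve_ivp(rhs, (x0, x_end), u0, dense_output=True, rtol=1e-9, atol=1e-12)

sol = lane_emden_reference(n_index=1.0)
x = np.array([0.5, 1.0, 2.0, 3.0])
print("numerical y:          ", sol.sol(x)[0])
print("exact sin(x)/x (n=1): ", np.sin(x) / x)
```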

  13. Multi-Level Sequential Pattern Mining Based on Prime Encoding

    NASA Astrophysics Data System (ADS)

    Lianglei, Sun; Yun, Li; Jiang, Yin

    Encoding not only expresses the hierarchical relationship but also facilitates identification of the relationships between different levels, which directly affects the efficiency of algorithms for mining multi-level sequential patterns. In this paper, we prove that a single division operation can determine the parent-child relationship between different levels when prime encoding is used, and we present the PMSM and CROSS-PMSM algorithms, both based on prime encoding, for mining multi-level and cross-level sequential patterns, respectively. Experimental results show that the algorithms can effectively extract multi-level and cross-level sequential patterns from the sequence database.
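
    A minimal sketch of the prime-encoding idea, under the stated assumption that each node's code is the product of the primes assigned along its path from the root; one division then decides ancestry. The toy hierarchy is invented.

```python
from math import prod

# Hypothetical item hierarchy: each edge gets a distinct prime, and a node's
# code is the product of the primes on the path from the root to that node.
codes = {
    "food":             prod([2]),
    "food/fruit":       prod([2, 3]),
    "food/fruit/apple": prod([2, 3, 5]),
    "food/dairy":       prod([2, 7]),
}

def is_ancestor(ancestor_code, node_code):
    """One division decides the parent/ancestor relationship between levels."""
    return node_code % ancestor_code == 0

print(is_ancestor(codes["food/fruit"], codes["food/fruit/apple"]))  # True
print(is_ancestor(codes["food/dairy"], codes["food/fruit/apple"]))  # False
```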

  14. The detection of T-wave variation linked to arrhythmic risk: an industry perspective.

    PubMed

    Xue, Joel; Rowlandson, Ian

    2013-01-01

    Although the scientific literature contains ample descriptions of peculiar patterns of repolarization linked to arrhythmic risk, the objective quantification and classification of these patterns continues to be a challenge that impacts their widespread adoption in clinical practice. To advance the science, computerized algorithms spawned in the academic environment have been essential in order to find, extract and measure these patterns. However, outside the strict control of a core lab, these algorithms are exposed to poor quality signals and need to be effective in the presence of different forms of noise that can either obscure or mimic the T-wave variation (TWV) of interest. To provide a practical solution that can be verified and validated for the market, important tradeoffs need to be made that are based on an intimate understanding of the end-user as well as the key characteristics of either the signal or the noise that can be used by the signal processing engineer to best differentiate them. To illustrate this, two contemporary medical devices used for quantifying T-wave variation are presented, including the modified moving average (MMA) for the detection of T-wave Alternans (TWA) and the quantification of T-wave shape as inputs to the Morphology Combination Score (MCS) for the trending of drug-induced repolarization abnormalities. © 2013 Elsevier Inc. All rights reserved.

  15. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. Firstly, a cloud computing task scheduling model is established and a fitness function is derived from it; the fitness function is then optimized with the improved differential evolution algorithm, which uses a generation-dependent dynamic selection strategy and a dynamic mutation strategy to balance global and local search ability. A performance test was carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving good optimal scheduling of cloud computing tasks.

  16. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state of the art algorithms. PMID:24933123
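
    The NLDFT node-linkage structure itself is not reproduced, but the pattern-growth/pseudo-projection idea it builds on can be shown compactly: each projected database is just a list of (sequence id, offset) pairs into the original sequences, so no copies are made. Single-item events and the toy dataset and support threshold are simplifying assumptions.

```python
def mine_sequential_patterns(db, minsup):
    """Pattern-growth mining with pseudo-projections (PrefixSpan-style sketch)."""
    patterns = []

    def grow(prefix, projections):
        # Count the support of each item occurring in the projected postfixes.
        counts = {}
        for sid, pos in projections:
            for item in set(db[sid][pos:]):
                counts[item] = counts.get(item, 0) + 1
        for item, support in sorted(counts.items()):
            if support < minsup:
                continue
            new_prefix = prefix + [item]
            patterns.append((new_prefix, support))
            # Pseudo-projection: only (sequence id, offset) pairs, no data copies.
            new_proj = []
            for sid, pos in projections:
                seq = db[sid]
                for i in range(pos, len(seq)):
                    if seq[i] == item:
                        new_proj.append((sid, i + 1))
                        break
            grow(new_prefix, new_proj)

    grow([], [(sid, 0) for sid in range(len(db))])
    return patterns

db = [["a", "b", "c", "b"], ["a", "c", "b"], ["b", "a", "c"]]
for pattern, support in mine_sequential_patterns(db, minsup=2):
    print(pattern, support)
```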

  17. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  18. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2018-01-29

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  19. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    NASA Astrophysics Data System (ADS)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, music collections have grown considerably, and there is a need to create playlists that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm to be used must be able to search the lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we run an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), with different combinations of parameter values and select the best-performing set on four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show that the Differential Evolution approach with the optimized parameter values gives better results.
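
    A bare-bones DE/rand/1/bin loop on one standard test function (the Rastrigin function), to make the parameters being tuned (population size NP, scale factor F, crossover rate CR) concrete; the specific values here are illustrative, not the paper's optimized set.

```python
import numpy as np

rng = np.random.default_rng(7)

def rastrigin(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def differential_evolution(obj, dim, bounds, NP=40, F=0.6, CR=0.9, generations=300):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(NP, dim))
    cost = np.array([obj(p) for p in pop])
    for _ in range(generations):
        for i in range(NP):
            # Mutation (DE/rand/1): combine three distinct random individuals.
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Binomial crossover with at least one component from the mutant.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection.
            c = obj(trial)
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    best = np.argmin(cost)
    return pop[best], cost[best]

x_best, f_best = differential_evolution(rastrigin, dim=5, bounds=(-5.12, 5.12))
print("best value found:", f_best)
```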

  20. Fringe pattern demodulation with a two-dimensional digital phase-locked loop algorithm.

    PubMed

    Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-09-10

    A novel technique called a two-dimensional digital phase-locked loop (DPLL) for fringe pattern demodulation is presented. This algorithm is more suitable for demodulation of fringe patterns with varying phase in two directions than the existing DPLL techniques that assume that the phase of the fringe patterns varies only in one direction. The two-dimensional DPLL technique assumes that the phase of a fringe pattern is continuous in both directions and takes advantage of the phase continuity; consequently, the algorithm has better noise performance than the existing DPLL schemes. The two-dimensional DPLL algorithm is also suitable for demodulation of fringe patterns with low sampling rates, and it outperforms the Fourier fringe analysis technique in this aspect.

  1. Context-Sensitive Grammar Transform: Compression and Pattern Matching

    NASA Astrophysics Data System (ADS)

    Maruyama, Shirou; Tanaka, Youhei; Sakamoto, Hiroshi; Takeda, Masayuki

    A framework of context-sensitive grammar transform for speeding-up compressed pattern matching (CPM) is proposed. A greedy compression algorithm with the transform model is presented as well as a Knuth-Morris-Pratt (KMP)-type compressed pattern matching algorithm. The compression ratio is a match for gzip and Re-Pair, and the search speed of our CPM algorithm is almost twice faster than the KMP-type CPM algorithm on Byte-Pair-Encoding by Shibata et al.[18], and in the case of short patterns, faster than the Boyer-Moore-Horspool algorithm with the stopper encoding by Rautio et al.[14], which is regarded as one of the best combinations that allows a practically fast search.
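
    For reference, the failure-function idea behind KMP-type matching, shown here in its plain, uncompressed form in Python; the paper's algorithm operates on grammar symbols of the compressed text rather than raw characters, so this is background only, not the proposed CPM algorithm.

```python
def kmp_search(text: str, pattern: str):
    """Classic Knuth-Morris-Pratt search: precompute the failure function, then scan the text once."""
    if not pattern:
        return []
    # failure[i] = length of the longest proper prefix of pattern[:i+1] that is also a suffix
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = failure[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = failure[k - 1]
    return hits

print(kmp_search("abracadabra", "abra"))  # [0, 7]
```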

  2. Labeling Residential Community Characteristics from Collective Activity Patterns Using Taxi Trip Data

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Fang, Z.

    2017-09-01

    There exists significant social and spatial differentiation among the residential communities of a city. People living in different places have different socioeconomic backgrounds, resulting in geographically varied activity patterns. This paper aims to label the characteristics of residential communities in a city using collective activity patterns derived from taxi trip data. Specifically, we first present a method to allocate the O/D (Origin/Destination) points of taxi trips to the land use parcels where the activities took place. Then several indices are employed to describe the collective activity patterns, including activity intensity, travel distance, travel time, and the activity space of residents, taking into account the geographical distribution of all O/Ds of the taxi trips related to each residential community. Following that, an agglomerative hierarchical clustering algorithm is introduced to cluster the residential communities with similar activity patterns. In the case study of Wuhan, the residential communities are clearly divided into eight clusters, which can be labelled as ordinary communities, privileged communities, old isolated communities, suburban communities, and so on. In this paper, we provide a new perspective for labelling land use of the same type from people's mobility patterns with the support of big trajectory data.
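
    The clustering step can be illustrated with a minimal sketch using SciPy's agglomerative (Ward) hierarchical clustering on standardized activity indices; the feature values below are hypothetical stand-ins for the taxi-derived indices, not data from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical per-community activity indices: [trip intensity, mean travel distance (km),
# mean travel time (min), activity-space radius (km)]; in the paper these come from taxi O/D data.
features = np.array([
    [120,  4.2, 18, 3.1],
    [310,  7.8, 29, 6.5],
    [ 95,  3.5, 15, 2.4],
    [280,  8.1, 31, 6.9],
    [ 60, 12.3, 45, 9.8],
])

Z = linkage(zscore(features, axis=0), method="ward")   # Ward agglomerative clustering
labels = fcluster(Z, t=3, criterion="maxclust")        # cut the dendrogram into 3 clusters
print(labels)
```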

  3. PSEMA: An Algorithm for Pattern Stimulated Evolution of Music

    NASA Astrophysics Data System (ADS)

    Mavrogianni, A. N.; Vlachos, D. S.; Harvalias, G.

    2008-11-01

    An algorithm for pattern-stimulated evolution of music (PSEMA) is presented in this work. The system combines a pattern with a genetic algorithm for automatic music composition in order to create a musical phrase uniquely characterizing the pattern. As an example, a musical portrait is presented. The initialization of the musical phrases is done with a Markov chain process. The evolution is dominated by an arbitrary correspondence between the pattern (feature extraction of the pattern may be used in this step) and the aesthetic result of the musical phrase.

  4. Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel

    2011-09-01

    The problem of tuning the parameters of nonlinear dynamical systems such that the attained results are considered good ones is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable quadruped robot crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GA). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.

  5. A Self Adaptive Differential Evolution Algorithm for Global Optimization

    NASA Astrophysics Data System (ADS)

    Kumar, Pravesh; Pant, Millie

    This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First we propose a self-adaptive DE, named ADE, in which the control parameters F and Cr are not fixed at constant values but are adapted iteratively. The proposed algorithm is further modified by applying trigonometric mutation, and the resulting algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results show the competence of the proposed algorithm.

  6. New algorithms for solving high even-order differential equations using third and fourth Chebyshev-Galerkin methods

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.; Bassuony, M. A.

    2013-03-01

    This paper is concerned with spectral Galerkin algorithms for solving high even-order two point boundary value problems in one dimension subject to homogeneous and nonhomogeneous boundary conditions. The proposed algorithms are extended to solve two-dimensional high even-order differential equations. The key to the efficiency of these algorithms is to construct compact combinations of Chebyshev polynomials of the third and fourth kinds as basis functions. The algorithms lead to linear systems with specially structured matrices that can be efficiently inverted. Numerical examples are included to demonstrate the validity and applicability of the proposed algorithms, and some comparisons with some other methods are made.

  7. Driving style recognition method using braking characteristics based on hidden Markov model

    PubMed Central

    Wu, Chaozhong; Lyu, Nengchao; Huang, Zhen

    2017-01-01

    Exploiting the advantage of the hidden Markov model in dealing with time-series data, three driving styles (aggressive, moderate and mild) are modeled through hidden Markov models based on driver braking characteristics in order to identify driving style efficiently. Firstly, braking impulse and the maximum braking unit area of the vacuum booster within a certain time window are collected from braking operations, and general braking and emergency braking characteristics are extracted to code the braking behavior. Secondly, the braking behavior observation sequences are used to set the initial parameters of the hidden Markov models, and a model for each driving style is trained from its observation sequences. Thirdly, the maximum log-likelihood of an observation sequence under each model is computed from the observable parameters. The recognition accuracy of the algorithm is verified through experiments and compared against two common pattern recognition algorithms. The results show that driving style discrimination based on the hidden Markov model algorithm can effectively discriminate driving styles. PMID:28837580
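
    A minimal Python sketch of the scoring step (computing the log-likelihood of a coded braking sequence under each style's HMM with the forward algorithm and picking the most likely style); the two-state models and all probabilities below are toy assumptions, not the trained parameters from the study.

```python
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm in log space: log P(observation sequence | HMM)."""
    logalpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        logalpha = np.logaddexp.reduce(logalpha[:, None] + np.log(trans), axis=0) + np.log(emit[:, o])
    return np.logaddexp.reduce(logalpha)

# Toy 2-state models over a coded braking-symbol alphabet of size 3
# (symbols might encode e.g. gentle / firm / emergency braking); all numbers are illustrative.
styles = {
    "mild":       (np.array([0.7, 0.3]), np.array([[0.8, 0.2], [0.3, 0.7]]),
                   np.array([[0.7, 0.25, 0.05], [0.5, 0.4, 0.1]])),
    "moderate":   (np.array([0.5, 0.5]), np.array([[0.6, 0.4], [0.4, 0.6]]),
                   np.array([[0.4, 0.5, 0.1], [0.3, 0.5, 0.2]])),
    "aggressive": (np.array([0.3, 0.7]), np.array([[0.5, 0.5], [0.2, 0.8]]),
                   np.array([[0.2, 0.4, 0.4], [0.1, 0.4, 0.5]])),
}

observed = [1, 2, 2, 1, 0, 2]  # a coded braking sequence
best = max(styles, key=lambda s: log_likelihood(observed, *styles[s]))
print(best)
```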

  8. Fringe pattern demodulation with a two-frame digital phase-locked loop algorithm.

    PubMed

    Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-09-10

    A novel technique called a two-frame digital phase-locked loop for fringe pattern demodulation is presented. In this scheme, two fringe patterns with different spatial carrier frequencies are grabbed for an object. A digital phase-locked loop algorithm tracks and demodulates the phase difference between the fringe patterns by employing the wrapped phase components of one of the fringe patterns as a reference to demodulate the second. The desired phase information can be extracted from the demodulated phase difference. We tested the algorithm experimentally using real fringe patterns. The technique is shown to be suitable for noncontact measurement of objects with rapid surface variations, and it outperforms the Fourier fringe analysis technique in this respect. Phase maps produced with this algorithm are noisy in comparison with phase maps generated with the Fourier fringe analysis technique.

  9. Three-dimensional volume containing multiple two-dimensional information patterns

    NASA Astrophysics Data System (ADS)

    Nakayama, Hirotaka; Shiraki, Atsushi; Hirayama, Ryuji; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2013-06-01

    We have developed an algorithm for recording multiple gradated two-dimensional projection patterns in a single three-dimensional object. When a single pattern is observed, information from the other patterns can be treated as background noise. The proposed algorithm has two important features: the number of patterns that can be recorded is theoretically infinite and no meaningful information can be seen outside of the projection directions. We confirmed the effectiveness of the proposed algorithm by performing numerical simulations of two laser crystals: an octagonal prism that contained four patterns in four projection directions and a dodecahedron that contained six patterns in six directions. We also fabricated and demonstrated an actual prototype laser crystal from a glass cube engraved by a laser beam. This algorithm has applications in various fields, including media art, digital signage, and encryption technology.

  10. Design Document for Differential GPS Ground Reference Station Pseudorange Correction Generation Algorithm

    DOT National Transportation Integrated Search

    1986-12-01

    The algorithms described in this report determine the differential corrections to be broadcast to users of the Global Positioning System (GPS) who require higher accuracy navigation or position information than the 30 to 100 meters that GPS normally ...

  11. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.

  12. A differential operator realisation approach for constructing Casimir operators of non-semisimple Lie algebras

    NASA Astrophysics Data System (ADS)

    Alshammari, Fahad; Isaac, Phillip S.; Marquette, Ian

    2018-02-01

    We introduce a search algorithm that utilises differential operator realisations to find polynomial Casimir operators of Lie algebras. To demonstrate the algorithm, we look at two classes of examples: (1) the model filiform Lie algebras and (2) the Schrödinger Lie algebras. We find that an abstract form of dimensional analysis assists us in our algorithm, and greatly reduces the complexity of the problem.

  13. Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning

    PubMed Central

    Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li

    2016-01-01

    In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We found that, in differentially private FSM, the amount of required noise is proportional to the number of candidate sequences. If we can effectively reduce the number of unpromising candidate sequences, the utility and privacy tradeoff can be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of such private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS2 algorithm can privately find frequent sequences with high accuracy. PMID:26973430
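
    The basic primitive underlying such algorithms, adding Laplace noise to a support count and comparing it with a (possibly relaxed) threshold, can be sketched as follows in Python; the candidate supports, epsilon and threshold are illustrative assumptions, and this is not the full PFS2 procedure.

```python
import numpy as np

def noisy_support(true_support: int, epsilon: float, sensitivity: int = 1, rng=None) -> float:
    """Laplace mechanism: add Laplace(sensitivity/epsilon) noise to a support count."""
    rng = rng or np.random.default_rng()
    return true_support + rng.laplace(scale=sensitivity / epsilon)

# Keep only candidate sequences whose noisy support clears the (relaxed) threshold.
rng = np.random.default_rng(1)
candidates = {"AB": 140, "ABC": 95, "BD": 30}   # hypothetical true supports in a sample database
threshold, epsilon = 80, 0.5
frequent = [s for s, sup in candidates.items() if noisy_support(sup, epsilon, rng=rng) >= threshold]
print(frequent)
```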

  14. Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time

    NASA Astrophysics Data System (ADS)

    Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.

    2018-03-01

    A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of machining, assembly and differentiation stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into an assembly product. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.

  15. A Comparative Study of Frequent and Maximal Periodic Pattern Mining Algorithms in Spatiotemporal Databases

    NASA Astrophysics Data System (ADS)

    Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.

    2017-08-01

    Detecting regular and efficient cyclic models is a demanding activity for data analysts because of the unstructured, dynamic and enormous raw information produced from the web. Many existing approaches generate large numbers of candidate patterns in the presence of huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed with respect to scalability and performance. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), is used to find frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA grows patterns from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because fewer levels of database projection are needed compared with existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns using symbolic notations. Using this algorithm, cyclic models can be mined in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically in terms of character, series and section approaches, respectively. Determining the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and real datasets remains an open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining and earthquake prediction applications. Extensive experimental results illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in terms of efficiency and scalability.

  16. Differentiation Between Organic and Non-Organic Apples Using Diffraction Grating and Image Processing-A Cost-Effective Approach.

    PubMed

    Jiang, Nanfeng; Song, Weiran; Wang, Hui; Guo, Gongde; Liu, Yuanyuan

    2018-05-23

    As the expectation for a higher quality of life increases, consumers have higher demands for quality food. Food authentication is the technical means of ensuring that food is what it says it is. A popular approach to food authentication is based on spectroscopy, which has been widely used for identifying and quantifying the chemical components of an object. This approach is non-destructive and effective but expensive. This paper presents a computer vision-based sensor system for food authentication, i.e., differentiating organic from non-organic apples. The sensor system consists of low-cost hardware and pattern recognition software. We use a flashlight to illuminate apples and capture their images through a diffraction grating. These diffraction images are then converted into a data matrix for classification by pattern recognition algorithms, including k-nearest neighbors (k-NN), support vector machine (SVM) and three partial least squares discriminant analysis (PLS-DA)-based methods. We carry out experiments on a reasonable collection of apple samples and employ suitable pre-processing, resulting in a highest classification accuracy of 94%. Our studies conclude that this sensor system has the potential to provide a viable solution to empower consumers in food authentication.
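
    A minimal sketch of the pattern-recognition stage (k-NN and SVM classifiers applied to a data matrix derived from diffraction images), in Python with scikit-learn; the synthetic matrix below merely stands in for the real diffraction data, and standardization is shown as one plausible pre-processing choice.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the diffraction-image data matrix: rows are apple samples,
# columns are image-derived features; labels 0 = non-organic, 1 = organic.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (60, 50)), rng.normal(0.6, 1.0, (60, 50))])
y = np.array([0] * 60 + [1] * 60)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)), ("SVM", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(StandardScaler(), clf)      # scaling as a common pre-processing step
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```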

  17. Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games

    DTIC Science & Technology

    2016-05-01

    An algorithm is presented for overcoming the curse of dimensionality for certain non-convex Hamilton-Jacobi equations, projections and differential games by decomposing the problem into subproblems. The approach is expected to have wide applications in continuous dynamic games, control theory problems, differential dynamic games, and dynamical systems coming from the physical world, e.g. [11].

  18. Single-step methods for predicting orbital motion considering its periodic components

    NASA Astrophysics Data System (ADS)

    Lavrov, K. N.

    1989-01-01

    Modern numerical methods for the integration of ordinary differential equations can provide accurate and universal solutions to celestial mechanics problems. The implicit single-sequence algorithms of Everhart and multistep computational schemes that use a priori information on periodic components can be combined to construct implicit single-sequence algorithms that share the advantages of both. The construction and analysis of the properties of such algorithms are studied, utilizing trigonometric approximation of the solutions of differential equations containing periodic components. The algorithms require 10 percent more machine memory than the Everhart algorithms but are twice as fast, and they yield short-term predictions valid for five to ten orbits with good accuracy, five to six times faster than algorithms based on other methods.

  19. A plant cell division algorithm based on cell biomechanics and ellipse-fitting

    PubMed Central

    Abera, Metadel K.; Verboven, Pieter; Defraeye, Thijs; Fanta, Solomon Workneh; Hertog, Maarten L. A. T. M.; Carmeliet, Jan; Nicolai, Bart M.

    2014-01-01

    Background and Aims The importance of cell division models in cellular pattern studies has been acknowledged since the 19th century. Most of the available models developed to date are limited to symmetric cell division with isotropic growth. Often, the actual growth of the cell wall is either not considered or is updated intermittently on a separate time scale to the mechanics. This study presents a generic algorithm that accounts for both symmetrically and asymmetrically dividing cells with isotropic and anisotropic growth. Actual growth of the cell wall is simulated simultaneously with the mechanics. Methods The cell is considered as a closed, thin-walled structure, maintained in tension by turgor pressure. The cell walls are represented as linear elastic elements that obey Hooke's law. Cell expansion is induced by turgor pressure acting on the yielding cell-wall material. A system of differential equations for the positions and velocities of the cell vertices as well as for the actual growth of the cell wall is established. Readiness to divide is determined based on cell size. An ellipse-fitting algorithm is used to determine the position and orientation of the dividing wall. The cell vertices, walls and cell connectivity are then updated and cell expansion resumes. Comparisons are made with experimental data from the literature. Key Results The generic plant cell division algorithm has been implemented successfully. It can handle both symmetrically and asymmetrically dividing cells coupled with isotropic and anisotropic growth modes. Development of the algorithm highlighted the importance of ellipse-fitting to produce randomness (biological variability) even in symmetrically dividing cells. Unlike previous models, a differential equation is formulated for the resting length of the cell wall to simulate actual biological growth and is solved simultaneously with the position and velocity of the vertices. Conclusions The algorithm presented can produce different tissues varying in topological and geometrical properties. This flexibility to produce different tissue types gives the model great potential for use in investigations of plant cell division and growth in silico. PMID:24863687
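
    The position and orientation of the dividing wall are obtained by ellipse-fitting, as described above. The following Python sketch shows one common way to derive an ellipse's axes from a cell's vertex coordinates via their covariance; it is an illustration under that assumption, not the authors' fitting routine, and the cell outline is hypothetical.

```python
import numpy as np

def ellipse_axes(vertices: np.ndarray):
    """Fit an ellipse to cell vertices via the covariance of their coordinates; a dividing
    wall is commonly placed through the centroid, perpendicular to the major axis."""
    centroid = vertices.mean(axis=0)
    cov = np.cov((vertices - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    major, minor = eigvecs[:, 1], eigvecs[:, 0]       # major axis = largest eigenvalue's eigenvector
    return centroid, major, minor

# Hypothetical polygonal cell outline (x, y vertex coordinates)
cell = np.array([[0, 0], [4, 0.5], [5, 2], [4.5, 3.5], [1, 3], [-0.5, 1.5]], dtype=float)
c, major, minor = ellipse_axes(cell)
print("centroid", c, "division-wall direction", minor)
```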

  20. Computer-aided US diagnosis of breast lesions by using cell-based contour grouping.

    PubMed

    Cheng, Jie-Zhi; Chou, Yi-Hong; Huang, Chiun-Sheng; Chang, Yeun-Chung; Tiu, Chui-Mei; Chen, Kuei-Wu; Chen, Chung-Ming

    2010-06-01

    To develop a computer-aided diagnostic algorithm with automatic boundary delineation for differential diagnosis of benign and malignant breast lesions at ultrasonography (US) and investigate the effect of boundary quality on the performance of a computer-aided diagnostic algorithm. This was an institutional review board-approved retrospective study with waiver of informed consent. A cell-based contour grouping (CBCG) segmentation algorithm was used to delineate the lesion boundaries automatically. Seven morphologic features were extracted. The classifier was a logistic regression function. Five hundred twenty breast US scans were obtained from 520 subjects (age range, 15-89 years), including 275 benign (mean size, 15 mm; range, 5-35 mm) and 245 malignant (mean size, 18 mm; range, 8-29 mm) lesions. The newly developed computer-aided diagnostic algorithm was evaluated on the basis of boundary quality and differentiation performance. The segmentation algorithms and features in two conventional computer-aided diagnostic algorithms were used for comparative study. The CBCG-generated boundaries were shown to be comparable with the manually delineated boundaries. The area under the receiver operating characteristic curve (AUC) and differentiation accuracy were 0.968 +/- 0.010 and 93.1% +/- 0.7, respectively, for all 520 breast lesions. At the 5% significance level, the newly developed algorithm was shown to be superior to the use of the boundaries and features of the two conventional computer-aided diagnostic algorithms in terms of AUC (0.974 +/- 0.007 versus 0.890 +/- 0.008 and 0.788 +/- 0.024, respectively). The newly developed computer-aided diagnostic algorithm that used a CBCG segmentation method to measure boundaries achieved a high differentiation performance. Copyright RSNA, 2010
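
    The classification stage described above (a logistic regression over morphologic features, evaluated by ROC AUC) can be sketched in Python as follows; the synthetic feature matrix only stands in for the seven CBCG-derived morphologic features and is not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for seven morphologic features per lesion; label 1 = malignant, 0 = benign.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (275, 7)), rng.normal(0.8, 1.0, (245, 7))])
y = np.array([0] * 275 + [1] * 245)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```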

  1. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases; it has extended to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data for sequential pattern mining are characterized by massive volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. Based on the traits mentioned above and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying frequency concepts and search-space partition theory, and the second task is to build frequent sequences using a depth-first search method at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.

  2. Enlightening discriminative network functional modules behind Principal Component Analysis separation in differential-omic science studies

    PubMed Central

    Ciucci, Sara; Ge, Yan; Durán, Claudio; Palladini, Alessandra; Jiménez-Jiménez, Víctor; Martínez-Sánchez, Luisa María; Wang, Yuting; Sales, Susanne; Shevchenko, Andrej; Poser, Steven W.; Herbig, Maik; Otto, Oliver; Androutsellis-Theotokis, Andreas; Guck, Jochen; Gerl, Mathias J.; Cannistraci, Carlo Vittorio

    2017-01-01

    Omic science is rapidly growing and one of the most employed techniques to explore differential patterns in omic datasets is principal component analysis (PCA). However, a method to enlighten the network of omic features that mostly contribute to the sample separation obtained by PCA is missing. An alternative is to build correlation networks between univariately-selected significant omic features, but this neglects the multivariate unsupervised feature compression responsible for the PCA sample segregation. Biologists and medical researchers often prefer effective methods that offer an immediate interpretation to complicated algorithms that in principle promise an improvement but in practice are difficult to be applied and interpreted. Here we present PC-corr: a simple algorithm that associates to any PCA segregation a discriminative network of features. Such network can be inspected in search of functional modules useful in the definition of combinatorial and multiscale biomarkers from multifaceted omic data in systems and precision biomedicine. We offer proofs of PC-corr efficacy on lipidomic, metagenomic, developmental genomic, population genetic, cancer promoteromic and cancer stem-cell mechanomic data. Finally, PC-corr is a general functional network inference approach that can be easily adopted for big data exploration in computer science and analysis of complex systems in physics. PMID:28287094
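
    As a loose illustration of the general idea (associating a PCA segregation with a network of the features that drive it), the following Python sketch ranks features by their absolute loadings on a chosen principal component and builds a correlation network among the top-ranked ones. The synthetic data, the loading-based ranking and the 0.3 edge threshold are assumptions made for illustration; this is not the published PC-corr formula.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 100))                      # samples x omic features (synthetic)
pca = PCA(n_components=2).fit(X)
pc = 0                                              # the component that separates the groups
loadings = np.abs(pca.components_[pc])
top = np.argsort(loadings)[-10:]                    # ten features contributing most to the separation

corr = np.corrcoef(X[:, top], rowvar=False)         # correlation network among the top features
edges = [(int(top[i]), int(top[j]), round(float(corr[i, j]), 2))
         for i in range(len(top)) for j in range(i + 1, len(top)) if abs(corr[i, j]) > 0.3]
print(edges[:5])
```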

  3. Algorithms for Hidden Markov Models Restricted to Occurrences of Regular Expressions

    PubMed Central

    Tataru, Paula; Sand, Andreas; Hobolth, Asger; Mailund, Thomas; Pedersen, Christian N. S.

    2013-01-01

    Hidden Markov Models (HMMs) are widely used probabilistic models, particularly for annotating sequential data with an underlying hidden structure. Patterns in the annotation are often more relevant to study than the hidden structure itself. A typical HMM analysis consists of annotating the observed data using a decoding algorithm and analyzing the annotation to study patterns of interest. For example, given an HMM modeling genes in DNA sequences, the focus is on occurrences of genes in the annotation. In this paper, we define a pattern through a regular expression and present a restriction of three classical algorithms to take the number of occurrences of the pattern in the hidden sequence into account. We present a new algorithm to compute the distribution of the number of pattern occurrences, and we extend the two most widely used existing decoding algorithms to employ information from this distribution. We show experimentally that the expectation of the distribution of the number of pattern occurrences gives a highly accurate estimate, while the typical procedure can be biased in the sense that the identified number of pattern occurrences does not correspond to the true number. We furthermore show that using this distribution in the decoding algorithms improves the predictive power of the model. PMID:24833225

  4. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three dimensional reconstruction. We require such an imaging system have imaging and illumination paths coincident. Furthermore, we require that any three-dimensional modification to a camera also permits full resolution 2D image capture. Here we present a method of extracting depth information with a single camera and aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed to exploit the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.

  5. A feature-preserving hair removal algorithm for dermoscopy images.

    PubMed

    Abbas, Qaisar; Garcia, Irene Fondón; Emre Celebi, M; Ahmad, Waqar

    2013-02-01

    Accurate segmentation and repair of hair-occluded information in dermoscopy images are challenging tasks for computer-aided detection (CAD) of melanoma. Many hair-restoration algorithms have been developed, but most of them fail to identify hairs accurately, and their removal techniques are slow and disturb the lesion's pattern. In this article, a novel hair-restoration algorithm is presented that is capable of preserving skin lesion features such as color and texture and is able to segment both dark and light hairs. Our algorithm is based on three major steps: rough hairs are segmented using matched filtering with the first derivative of Gaussian (MF-FDOG) with thresholding, which generates strong responses for both dark and light hairs; the hairs are refined by morphological edge-based techniques; and they are then repaired through a fast marching inpainting method. Diagnostic accuracy (DA) and texture-quality measure (TQM) metrics, based on dermatologist-drawn manual hair masks used as ground truth, are utilized to evaluate the performance of the system. The hair-restoration algorithm is tested on 100 dermoscopy images. Comparisons have been made among (i) linear interpolation, (ii) inpainting by non-linear partial differential equation (PDE), and (iii) exemplar-based repairing techniques. Among the different hair detection and removal techniques, our proposed algorithm obtained the highest values of DA: 93.3% and TQM: 90%. The experimental results indicate that the proposed algorithm is highly accurate, robust and able to restore hair pixels without damaging the lesion texture. This method is fully automatic and can be easily integrated into a CAD system. © 2011 John Wiley & Sons A/S.

  6. An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System

    PubMed Central

    Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin

    2016-01-01

    With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053
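
    The final decoding step described above (running the Viterbi algorithm on the trained model to obtain hidden location states) can be illustrated with a minimal Python sketch; the three-location model, the transition/emission probabilities and the fuzzified RSSI symbols are toy assumptions, not the paper's trained values.

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Viterbi decoding: most probable sequence of hidden location states given the observations."""
    delta = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + np.log(trans)     # scores[i, j]: best path ending in i then moving to j
        back.append(np.argmax(scores, axis=0))
        delta = np.max(scores, axis=0) + np.log(emit[:, o])
    path = [int(np.argmax(delta))]
    for bp in reversed(back):                       # backtrack through the stored argmax pointers
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy example: 3 candidate locations, observations are fuzzified RSSI-trend symbols (0/1/2)
start = np.array([0.5, 0.3, 0.2])
trans = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]])
emit  = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]])
print(viterbi([0, 0, 1, 2, 2], start, trans, emit))
```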

  7. Apriori Versions Based on MapReduce for Mining Frequent Patterns on Big Data.

    PubMed

    Luna, Jose Maria; Padillo, Francisco; Pechenizkiy, Mykola; Ventura, Sebastian

    2017-09-27

    Pattern mining is one of the most important tasks to extract meaningful and useful information from raw data. This task aims to extract item-sets that represent any type of homogeneity and regularity in data. Although many efficient algorithms have been developed in this regard, the growing interest in data has caused the performance of existing pattern mining techniques to drop. The goal of this paper is to propose new efficient pattern mining algorithms to work on big data. To this aim, a series of algorithms based on the MapReduce framework and the Hadoop open-source implementation have been proposed. The proposed algorithms can be divided into three main groups. First, two algorithms [Apriori MapReduce (AprioriMR) and iterative AprioriMR] with no pruning strategy are proposed, which extract any existing item-set in data. Second, two algorithms (space pruning AprioriMR and top AprioriMR) that prune the search space by means of the well-known anti-monotone property are proposed. Finally, a last algorithm (maximal AprioriMR) is also proposed for mining condensed representations of frequent patterns. To test the performance of the proposed algorithms, a varied collection of big data datasets has been considered, comprising up to 3·10¹⁸ transactions and more than 5 million distinct single items. The experimental stage includes comparisons against highly efficient and well-known pattern mining algorithms. Results reveal the interest of applying MapReduce versions when complex problems are considered, and also the unsuitability of this paradigm when dealing with small data.
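
    As a toy illustration of the map/reduce counting pass that AprioriMR-style algorithms build on, the following in-memory Python sketch counts 1-itemsets and filters them by minimum support; a real deployment would run the map and reduce phases on a Hadoop cluster rather than in a single process, and the transactions are hypothetical.

```python
from collections import Counter
from itertools import chain

# Mappers emit (item, 1) pairs per transaction, the reducer sums them,
# and a final filter keeps items meeting the minimum support.
transactions = [
    {"bread", "milk"},
    {"bread", "beer", "eggs"},
    {"milk", "beer", "bread"},
    {"milk", "eggs"},
]
min_support = 2

def mapper(transaction):
    return [(item, 1) for item in transaction]

mapped = chain.from_iterable(mapper(t) for t in transactions)   # "map" phase
counts = Counter()
for item, one in mapped:                                        # "reduce" phase (grouped sum)
    counts[item] += one
frequent_items = {item for item, c in counts.items() if c >= min_support}
print(frequent_items)
```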

  8. Real-time polarization imaging algorithm for camera-based polarization navigation sensors.

    PubMed

    Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli

    2017-04-10

    Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes can benefit from their high spatial resolution but incur a heavy computation load. The pattern recognition algorithm in most polarization imaging algorithms involves several nonlinear calculations that impose a significant computation burden. In this paper, the polarization imaging and pattern recognition algorithms are optimized through reduction to several linear calculations by exploiting the orthogonality of the Stokes parameters without affecting precision according to the features of the solar meridian and the patterns of the polarized skylight. The algorithm contains a pattern recognition algorithm with a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity. The test showed that the running time decreased to several tens of milliseconds from several thousand milliseconds. Through simulations and experiments, it was found that the algorithm can measure orientation without reducing precision. It can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.

  9. Performance of the 2015 International Task Force Consensus Statement Risk Stratification Algorithm for Implantable Cardioverter-Defibrillator Placement in Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy.

    PubMed

    Orgeron, Gabriela M; Te Riele, Anneline; Tichnell, Crystal; Wang, Weijia; Murray, Brittney; Bhonsale, Aditya; Judge, Daniel P; Kamel, Ihab R; Zimmerman, Stephan L; Tandri, Harikrishna; Calkins, Hugh; James, Cynthia A

    2018-02-01

    Ventricular arrhythmias are a feared complication of arrhythmogenic right ventricular dysplasia/cardiomyopathy. In 2015, an International Task Force Consensus Statement proposed a risk stratification algorithm for implantable cardioverter-defibrillator placement in arrhythmogenic right ventricular dysplasia/cardiomyopathy. To evaluate performance of the algorithm, 365 arrhythmogenic right ventricular dysplasia/cardiomyopathy patients were classified as having a Class I, IIa, IIb, or III indication per the algorithm at baseline. Survival free from sustained ventricular arrhythmia (VT/VF) in follow-up was the primary outcome. Incidence of ventricular fibrillation/flutter cycle length <240 ms was also assessed. Two hundred twenty-four (61%) patients had a Class I implantable cardioverter-defibrillator indication; 80 (22%), Class IIa; 54 (15%), Class IIb; and 7 (2%), Class III. During a median 4.2 (interquartile range, 1.7-8.4)-year follow-up, 190 (52%) patients had VT/VF and 60 (16%) had ventricular fibrillation/flutter. Although the algorithm appropriately differentiated risk of VT/VF, incidence of VT/VF was underestimated (observed versus expected: 29.6 [95% confidence interval, 25.2-34.0] versus >10%/year Class I; 15.5 [confidence interval 11.1-21.6] versus 1% to 10%/year Class IIa). In addition, the algorithm did not differentiate survival free from ventricular fibrillation/flutter between Class I and IIa patients (P=0.97) or for VT/VF in Class I and IIa primary prevention patients (P=0.22). Adding Holter results (<1000 premature ventricular contractions/24 hours) to International Task Force Consensus classification differentiated risks. While the algorithm differentiates arrhythmic risk well overall, it did not distinguish ventricular fibrillation/flutter risks of patients with Class I and IIa implantable cardioverter-defibrillator indications. Limited differentiation was seen for primary prevention cases. As these are vital uncertainties in clinical decision-making, refinements to the algorithm are suggested prior to implementation. © 2018 American Heart Association, Inc.

  10. Use of a machine learning algorithm to classify expertise: analysis of hand motion patterns during a simulated surgical task.

    PubMed

    Watson, Robert A

    2014-08-01

    To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, SB; Cady, ST; Dominguez-Garcia, AD

    This paper presents the theory and implementation of a distributed algorithm for controlling differential power processing converters in photovoltaic (PV) applications. This distributed algorithm achieves true maximum power point tracking of series-connected PV submodules by relying only on local voltage measurements and neighbor-to-neighbor communication between the differential power converters. Compared to previous solutions, the proposed algorithm achieves a reduced number of perturbations at each step and potentially faster tracking without adding extra hardware; all these features make this algorithm well-suited for long submodule strings. The formulation of the algorithm, discussion of its properties, as well as three case studies are presented. The performance of the distributed tracking algorithm has been verified via experiments, which yielded quantifiable improvements over other techniques that have been implemented in practice. Both simulations and hardware experiments have confirmed the effectiveness of the proposed distributed algorithm.

  12. A Cognitive Machine Learning Algorithm for Cardiac Imaging: A Pilot Study for Differentiating Constrictive Pericarditis from Restrictive Cardiomyopathy

    PubMed Central

    Sengupta, Partho P.; Huang, Yen-Min; Bansal, Manish; Ashrafi, Ali; Fisher, Matt; Shameer, Khader; Gall, Walt; Dudley, Joel T

    2016-01-01

    Background Associating a patient’s profile with the memories of prototypical patients built through previous repeat clinical experience is a key process in clinical judgment. We hypothesized that a similar process using a cognitive computing tool would be well suited for learning and recalling multidimensional attributes of speckle tracking echocardiography (STE) data sets derived from patients with known constrictive pericarditis (CP) and restrictive cardiomyopathy (RCM). Methods and Results Clinical and echocardiographic data of 50 patients with CP and 44 with RCM were used for developing an associative memory classifier (AMC) based machine learning algorithm. The STE data was normalized in reference to 47 controls with no structural heart disease, and the diagnostic area under the receiver operating characteristic curve (AUC) of the AMC was evaluated for differentiating CP from RCM. Using only STE variables, AMC achieved a diagnostic AUC of 89.2%, which improved to 96.2% with addition of 4 echocardiographic variables. In comparison, the AUC of early diastolic mitral annular velocity and left ventricular longitudinal strain were 82.1% and 63.7%, respectively. Furthermore, AMC demonstrated greater accuracy and shorter learning curves than other machine learning approaches, with accuracy asymptotically approaching 90% after a training fraction of 0.3 and remaining flat at higher training fractions. Conclusions This study demonstrates feasibility of a cognitive machine learning approach for learning and recalling patterns observed during echocardiographic evaluations. Incorporation of machine learning algorithms in cardiac imaging may aid standardized assessments and support the quality of interpretations, particularly for novice readers with limited experience. PMID:27266599

  13. Differential Diagnosis of Erythmato-Squamous Diseases Using Classification and Regression Tree.

    PubMed

    Maghooli, Keivan; Langarizadeh, Mostafa; Shahmoradi, Leila; Habibi-Koolaee, Mahdi; Jebraeily, Mohamad; Bouraghi, Hamid

    2016-10-01

    Differential diagnosis of Erythmato-Squamous Diseases (ESD) is a major challenge in the field of dermatology. ESD is divided into six different classes. Data mining is the process of detecting hidden patterns; in the case of ESD, data mining helps us to predict the diseases. Different algorithms have been developed for this purpose. We aimed to use the Classification and Regression Tree (CART) to predict the differential diagnosis of ESD. We used the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology. For this purpose, the dermatology dataset from the UCI machine learning repository was obtained. The Clementine 12.0 software from IBM was used for modelling. In order to evaluate the model, we calculated its accuracy, sensitivity and specificity. The proposed model had an accuracy of 94.84% (. 24.42) for correct prediction of ESD. The results indicated that using this classifier could be useful, and it is strongly recommended that combinations of machine learning methods could be even more useful for prediction of ESD.
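
    A minimal sketch of a CART-style workflow in Python, using scikit-learn's DecisionTreeClassifier and cross-validation in place of Clementine; the synthetic attribute matrix only stands in for the UCI dermatology data, so the printed accuracy is meaningful only as a usage example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the dermatology data: rows are patients, columns are coded clinical and
# histopathological attributes (0-3), labels are the six ESD classes (0-5).
rng = np.random.default_rng(2)
X = rng.integers(0, 4, size=(200, 12))
y = rng.integers(0, 6, size=200)

cart = DecisionTreeClassifier(criterion="gini", max_depth=6, random_state=0)   # CART-style tree
scores = cross_val_score(cart, X, y, cv=5)                                     # 5-fold cross-validation
print("mean CV accuracy:", scores.mean())
```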

  14. An efficient method to identify differentially expressed genes in microarray experiments

    PubMed Central

    Qin, Huaizhen; Feng, Tao; Harding, Scott A.; Tsai, Chung-Jui; Zhang, Shuanglin

    2013-01-01

    Motivation Microarray experiments typically analyze thousands to tens of thousands of genes from small numbers of biological replicates. The fact that genes are normally expressed in functionally relevant patterns suggests that gene-expression data can be stratified and clustered into relatively homogenous groups. Cluster-wise dimensionality reduction should make it feasible to improve screening power while minimizing information loss. Results We propose a powerful and computationally simple method for finding differentially expressed genes in small microarray experiments. The method incorporates a novel stratification-based tight clustering algorithm, principal component analysis and information pooling. Comprehensive simulations show that our method is substantially more powerful than the popular SAM and eBayes approaches. We applied the method to three real microarray datasets: one from a Populus nitrogen stress experiment with 3 biological replicates; and two from public microarray datasets of human cancers with 10 to 40 biological replicates. In all three analyses, our method proved more robust than the popular alternatives for identification of differentially expressed genes. Availability The C++ code to implement the proposed method is available upon request for academic use. PMID:18453554

  16. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869

  17. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  18. Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E

    2009-01-01

    Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to further refine the algorithms with higher validity of truth. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.

  19. Existence and discrete approximation for optimization problems governed by fractional differential equations

    NASA Astrophysics Data System (ADS)

    Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng

    2018-06-01

    We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved by using the Weierstrass existence theorem, a fixed point theorem and the Filippov implicit function lemma. Then a numerical approximation algorithm is introduced, and a convergence theorem is established. Finally, a nonlinear programming problem constrained by the fractional differential equation is illustrated, and the results verify the validity of the algorithm.

  20. [Algorithm for the differential diagnosis of precancerous and regenerative changes in the cervix uteri].

    PubMed

    Sazonova, V Iu; Fedorova, V E; Danilova, N V

    2013-01-01

    Pretumoral changes in the epithelium of the cervix uteri include cervical intraepithelial neoplasia (CIN). CIN III should be differentiated from regenerative changes during epidermization of endocervicoses. Epidermization is the proliferation of undifferentiated reserve cells that differentiate towards the squamous epithelium, superseding the ectopic endocervical glandular epithelium; this process is called immature squamous metaplasia (ISM). The objective of the investigation was to define the significance of different morphological signs in the differential diagnosis of CIN III and ISM. One hundred and twelve cervical biopsies with CIN III or immature squamous metaplasia were selected for examination. The selected cervical specimens were divided into 2 groups according to the presence or absence of p16 and CK17 expression. The p16+, CK17- cases were taken as true CIN III and the p16-, CK17+ cases as a regenerative process. The basis for this investigation is the set of signs included by O.K. Khmelnitsky in an algorithm for the differential diagnosis of epidermizing pseudoerosion and intraepithelial cancer of the cervix uteri. The algorithm was reconsidered for objectification. The investigation established large differences in the number of significant mitoses between the study groups. A clear trend was found for differences in the number of acanthotic strands. A new differential diagnostic algorithm for CIN III and ISM, which includes the number of significant mitoses and acanthotic strands and p16 and CK17 expression, is proposed.

  1. Mining Co-Location Patterns with Clustering Items from Spatial Data Sets

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Li, Q.; Deng, G.; Yue, T.; Zhou, X.

    2018-05-01

    The explosive growth of spatial data and the widespread use of spatial databases emphasize the need for spatial data mining. Co-location pattern discovery is an important branch of spatial data mining. Spatial co-locations represent subsets of features that are frequently located together in geographic space. However, the appearance of a spatial feature C is often not determined by a single spatial feature A or B but by the two spatial features A and B together; that is to say, where A and B appear together, C often appears. We note that this co-location pattern is different from the traditional co-location pattern. This paper therefore presents a new concept called clustering items, and this kind of co-location pattern is called a co-location pattern with clustering items. Because traditional algorithms cannot mine this co-location pattern, we introduce the related concepts in detail and propose a novel algorithm, which extends the join-based approach proposed by Huang. Finally, we evaluate the performance of this algorithm.
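
    A minimal sketch of the underlying idea, where A and B co-locations are found by a distance threshold and the frequency of a nearby C is measured (the distance parameter, confidence measure, and synthetic points are assumptions, and this is not the join-based mining algorithm itself):

      from math import hypot

      def near(p, q, d=1.0):
          return hypot(p[0] - q[0], p[1] - q[1]) <= d

      def clustering_item_confidence(points_a, points_b, points_c, d=1.0):
          """Among A locations that co-occur with B within distance d, estimate
          how often feature C also appears nearby."""
          co_ab = [a for a in points_a if any(near(a, b, d) for b in points_b)]
          if not co_ab:
              return 0.0
          with_c = [a for a in co_ab if any(near(a, c, d) for c in points_c)]
          return len(with_c) / len(co_ab)

      # Synthetic example: C tends to appear only where A and B meet.
      A = [(0, 0), (5, 5), (10, 0)]
      B = [(0.5, 0), (5.5, 5), (20, 20)]
      C = [(0.2, 0.3), (5.2, 4.8)]
      print(clustering_item_confidence(A, B, C))   # 1.0 for the two A-B co-locations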

  2. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  4. A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz

    2012-09-10

    Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three kinds of scanning geometries, described by parallel-beam, fan-beam and cone-beam. Due to the potential for compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Due to the differential nature of phase-contrast projections, the algorithm refrains from differentiating the projection data prior to back-projection, unlike BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulation and experimental results demonstrate that the proposed method can deal with several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.

  5. Differentially Private Frequent Subgraph Mining

    PubMed Central

    Xu, Shengzhi; Xiong, Li; Cheng, Xiang; Xiao, Ke

    2016-01-01

    Mining frequent subgraphs from a collection of input graphs is an important topic in data mining research. However, if the input graphs contain sensitive information, releasing frequent subgraphs may pose considerable threats to individuals' privacy. In this paper, we study the problem of frequent subgraph mining (FGM) under the rigorous differential privacy model. We introduce a novel differentially private FGM algorithm, which is referred to as DFG. In this algorithm, we first privately identify frequent subgraphs from input graphs, and then compute the noisy support of each identified frequent subgraph. In particular, to privately identify frequent subgraphs, we present a frequent subgraph identification approach which can improve the utility of frequent subgraph identification through candidate pruning. Moreover, to compute the noisy support of each identified frequent subgraph, we devise a lattice-based noisy support derivation approach, in which a series of methods is proposed to improve the accuracy of the noisy supports. Through formal privacy analysis, we prove that our DFG algorithm satisfies ε-differential privacy. Extensive experimental results on real datasets show that the DFG algorithm can privately find frequent subgraphs with high data utility. PMID:27616876
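
    The noisy-support step can be illustrated with the standard Laplace mechanism (a simplified sketch with an even privacy-budget split and unit sensitivity; the paper's lattice-based derivation is more sophisticated):

      import numpy as np

      def noisy_supports(true_supports, epsilon, sensitivity=1.0, seed=0):
          """Laplace mechanism: adding or removing one input graph changes each
          support by at most `sensitivity`, so Laplace noise with scale
          sensitivity*k/epsilon (even split over k releases) keeps the whole
          release epsilon-differentially private."""
          k = len(true_supports)
          scale = sensitivity * k / epsilon
          rng = np.random.default_rng(seed)
          return {g: s + rng.laplace(0.0, scale) for g, s in true_supports.items()}

      supports = {"subgraph_1": 120, "subgraph_2": 87, "subgraph_3": 54}
      print(noisy_supports(supports, epsilon=1.0))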

  6. Simulation of quantum dynamics based on the quantum stochastic differential equation.

    PubMed

    Li, Ming

    2013-01-01

    The quantum stochastic differential equation derived from the Lindblad form of the quantum master equation is investigated. The general formulation in terms of environment operators representing quantum state diffusion is given. A numerical simulation algorithm for the stochastic process of direct photodetection of a driven two-level system is proposed for predicting its dynamical behavior. The effectiveness and superiority of the algorithm are verified by analysis of its accuracy and computational cost in comparison with the classical Runge-Kutta algorithm.
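
    For context, a generic Euler-Maruyama integrator for an Itô stochastic differential equation is sketched below; the drift and diffusion functions are placeholders (an Ornstein-Uhlenbeck process), not the photodetection unraveling studied in the paper:

      import numpy as np

      def euler_maruyama(drift, diffusion, x0, t_max, n_steps, seed=0):
          """Simulate dX = drift(X) dt + diffusion(X) dW with Euler-Maruyama."""
          rng = np.random.default_rng(seed)
          dt = t_max / n_steps
          xs = np.empty(n_steps + 1)
          xs[0] = x0
          for i in range(n_steps):
              dw = rng.normal(0.0, np.sqrt(dt))     # Wiener increment
              xs[i + 1] = xs[i] + drift(xs[i]) * dt + diffusion(xs[i]) * dw
          return xs

      # Placeholder process: Ornstein-Uhlenbeck relaxation with additive noise.
      traj = euler_maruyama(drift=lambda x: -2.0 * x, diffusion=lambda x: 0.3,
                            x0=1.0, t_max=5.0, n_steps=1000)
      print(traj[-1])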

  7. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    PubMed

    Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
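
    The general flavor of fixed-start multi-pattern matching with hash lookups can be sketched as follows (a simplified illustration only; the MH algorithm's binary tables and numeric encoding are not reproduced here):

      def build_index(patterns):
          """Group patterns by length so a candidate prefix can be probed by hash."""
          index = {}
          for p in patterns:
              index.setdefault(len(p), set()).add(p)
          return index

      def match_from_start(text, index):
          """Return all patterns that match `text` starting at position 0."""
          return [text[:length] for length, bucket in index.items()
                  if text[:length] in bucket]

      idx = build_index(["/api/v1/", "/api/v1/users", "/static/"])
      print(match_from_start("/api/v1/users?id=7", idx))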

  8. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol

    PubMed Central

    Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on—all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications. PMID:28399157

  9. Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    PubMed Central

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
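
    A greatly simplified sketch of the grid-snapping idea, where preprocessed continuous coordinates are greedily assigned to the nearest free square grid points (the greedy assignment is an assumption standing in for the approximate pattern matching step of the paper):

      import itertools

      def snap_to_grid(positions, grid_size):
          """Greedily assign each node to the nearest unoccupied grid point,
          roughly preserving the topology of the preprocessed layout."""
          grid_points = list(itertools.product(range(grid_size), repeat=2))
          taken, assignment = set(), {}
          for node, pos in positions.items():
              free = [g for g in grid_points if g not in taken]
              nearest = min(free, key=lambda g: (g[0] - pos[0]) ** 2 + (g[1] - pos[1]) ** 2)
              taken.add(nearest)
              assignment[node] = nearest
          return assignment

      layout = {"A": (0.2, 0.1), "B": (0.9, 1.8), "C": (2.4, 2.6)}
      print(snap_to_grid(layout, grid_size=4))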

  10. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    PubMed

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance the understanding or traceability of them. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often require a high computational cost because they have to consider complicated positional constraints through the entire optimization process. We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the resultant preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of the calculation time, numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performances compared with other existing grid layouts. Use of an approximate pattern matching algorithm quickly redistributes the laid-out nodes by fast, non-grid algorithms on the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of the pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.

  11. Traction patterns of tumor cells.

    PubMed

    Ambrosi, D; Duperray, A; Peschetola, V; Verdier, C

    2009-01-01

    The traction exerted by a cell on a planar deformable substrate can be indirectly obtained on the basis of the displacement field of the underlying layer. The usual methodology used to address this inverse problem is based on the exploitation of the Green tensor of the linear elasticity problem in a half space (Boussinesq problem), coupled with a minimization algorithm under force penalization. A possible alternative strategy is to exploit an adjoint equation, obtained on the basis of a suitable minimization requirement. The resulting system of coupled elliptic partial differential equations is applied here to determine the force field per unit surface generated by T24 tumor cells on a polyacrylamide substrate. The shear stress obtained by numerical integration provides quantitative insight of the traction field and is a promising tool to investigate the spatial pattern of force per unit surface generated in cell motion, particularly in the case of such cancer cells.

  12. Modelling Evolutionary Algorithms with Stochastic Differential Equations.

    PubMed

    Heredia, Jorge Pérez

    2017-11-20

    There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.

  13. Improved Differentiation of Streptococcus pneumoniae and Other S. mitis Group Streptococci by MALDI Biotyper Using an Improved MALDI Biotyper Database Content and a Novel Result Interpretation Algorithm.

    PubMed

    Harju, Inka; Lange, Christoph; Kostrzewa, Markus; Maier, Thomas; Rantakokko-Jalava, Kaisu; Haanperä, Marjo

    2017-03-01

    Reliable distinction of Streptococcus pneumoniae and viridans group streptococci is important because of the different pathogenic properties of these organisms. Differentiation between S. pneumoniae and closely related Streptococcus mitis species group streptococci has always been challenging, even when using such modern methods as 16S rRNA gene sequencing or matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. In this study, a novel algorithm combined with an enhanced database was evaluated for differentiation between S. pneumoniae and S. mitis species group streptococci. One hundred one clinical S. mitis species group streptococcal strains and 188 clinical S. pneumoniae strains were identified by both the standard MALDI Biotyper database alone and that combined with a novel algorithm. The database update from 4,613 strains to 5,627 strains drastically improved the differentiation of S. pneumoniae and S. mitis species group streptococci: when the new database version containing 5,627 strains was used, only one of the 101 S. mitis species group isolates was misidentified as S. pneumoniae, whereas 66 of them were misidentified as S. pneumoniae when the earlier 4,613-strain MALDI Biotyper database version was used. The updated MALDI Biotyper database combined with the novel algorithm showed even better performance, producing no misidentifications of the S. mitis species group strains as S. pneumoniae. All S. pneumoniae strains were correctly identified as S. pneumoniae with both the standard MALDI Biotyper database and the standard MALDI Biotyper database combined with the novel algorithm. This new algorithm thus enables reliable differentiation between pneumococci and other S. mitis species group streptococci with the MALDI Biotyper. Copyright © 2017 American Society for Microbiology.

  14. Hybrid intelligent optimization methods for engineering problems

    NASA Astrophysics Data System (ADS)

    Pehlivanoglu, Yasin Volkan

    The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct general patterns. Moreover, the mathematical modeling of natural phenomena is largely based on differential equations, which are constructed from relative increments among the factors related to the yield; the gradients of these increments are therefore essential for searching the yield space. However, the yield landscape is not a simple one and is mostly multi-modal. Another issue is differentiability: engineering design problems are usually nonlinear and sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) are popular non-gradient-based algorithms. Both are population-based search algorithms and have multiple points of initiation. A significant difference from a gradient-based method is the nature of the search methodology; for example, randomness is essential to the search in GA or PSO. Hence, they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, limited accuracy, or large computational time. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators that provide the necessary variety within a population. After a close scrutiny of the diversity concept based on qualification and quantification studies, we developed new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. These approaches were applied to different aeronautical engineering problems in order to study their efficiency. The implementations were: applications to selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, 3D radar cross section minimization for a 3D air vehicle, and active flow control over a 2D airfoil. As demonstrated by these test cases, we observed that the new algorithms outperform the current popular algorithms. The principal role of the multi-frequency approach was to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. When combined with a mutation strategy and an artificial intelligence method, such as neural networks or fuzzy logic, the new mutation operators provided local and global diversity during the reproduction phases of the generations. Additionally, the new approach also introduced random and controlled diversity. Because they are still population-based techniques, these methods were as robust as the plain GA or PSO algorithms.
Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO were efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.

  15. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
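
    As an example of one of the named patterns, a serial emulation of the Hillis-Steele inclusive prefix scan is shown below; on a parallel machine each stride step would execute concurrently across elements (this is an illustrative sketch, not taken from the presentation):

      def inclusive_scan(values):
          """Hillis-Steele inclusive prefix scan. Each outer step doubles the
          stride; on a parallel machine the inner loop runs concurrently."""
          data = list(values)
          stride = 1
          while stride < len(data):
              prev = data[:]                # all reads see the previous step's values
              for i in range(stride, len(data)):
                  data[i] = prev[i] + prev[i - stride]
              stride *= 2
          return data

      print(inclusive_scan([3, 1, 4, 1, 5, 9, 2, 6]))   # [3, 4, 8, 9, 14, 23, 25, 31]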

  16. Best practices recommendations in the application of immunohistochemistry in testicular tumors: report from the International Society of Urological Pathology consensus conference.

    PubMed

    Ulbright, Thomas M; Tickoo, Satish K; Berney, Daniel M; Srigley, John R

    2014-08-01

    The judicious use of immunostains can be of significant diagnostic assistance in the interpretation of testicular neoplasms when the light microscopic features are ambiguous. A limited differential diagnosis by traditional morphology is required for the effective use of immunohistochemistry (IHC); otherwise, the inevitable occurrence of exceptions to anticipated patterns will lead to "immunoconfusion." The diagnosis of tumors in the germ cell lineage, the great majority of primary tumors of the testis, has been considerably facilitated over the past decade by IHC directed at developmentally important nuclear transcription factors, including OCT4, SALL4, SOX2, and SOX17, that are mostly restricted to certain tumor histotypes. In conjunction with other markers, a specific diagnosis can be achieved in most instances through a panel of 3 or 4 immunostains and often fewer. IHC among tumors in the sex cord-stromal group may produce a significant proportion of false-negative cases until more sensitive and equally specific markers are validated. The negativity of these tumors for the IHC stains used for germ cell tumors is key in the important distinction of neoplasms in these 2 general categories. In this review, the International Society of Urological Pathologists (ISUP) provides diagnostic guidelines in the form of algorithms to assist practicing pathologists confronting a differential diagnostic question concerning a testicular neoplasm. The goal of ISUP is to anticipate commonly encountered differential diagnoses and recommend an efficient and limited pattern of IHC stains to resolve the question.

  17. SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps

    NASA Astrophysics Data System (ADS)

    Xu, Xiwei; Zhang, Changhai

    2013-12-01

    Some existing sequential pattern mining algorithms generate too many candidate sequences and thereby increase the processing cost of support counting. We therefore present an effective and scalable algorithm called SPMBR (Sequential Patterns Mining based on Bitmap Representation) to solve the problem of mining sequential patterns from large databases. Our method differs from previous work on mining sequential patterns: the main difference is that the database of sequential patterns is represented by bitmaps, and a simplified bitmap structure is presented first. The algorithm first generates candidate sequences by sequence extension (SE) and item extension (IE), and then obtains all frequent sequences by comparing the original bitmap with the extended item bitmap. This method simplifies the problem of mining sequential patterns and avoids the high processing cost of support counting. Both theory and experiments indicate that the performance of SPMBR is superior for large transaction databases, that much less memory is required for storing temporary data during the mining process, and that all sequential patterns can be mined feasibly.
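
    The bitmap idea can be illustrated with a tiny support-counting sketch using bitwise shift and AND operations (a generic illustration with an assumed encoding, not the SPMBR data structures themselves):

      def seq_bitmap(sequence, item):
          """Bitmap of positions (LSB = first event) at which `item` occurs."""
          bm = 0
          for pos, itemset in enumerate(sequence):
              if item in itemset:
                  bm |= 1 << pos
          return bm

      def follows(bm_a, bm_b):
          """True if some occurrence of b strictly follows an occurrence of a.
          The mask sets every bit after the first set bit of bm_a (sequence
          extension), then a bitwise AND probes bm_b."""
          if bm_a == 0:
              return False
          first = bm_a & -bm_a                 # lowest set bit
          mask = ~((first << 1) - 1)           # positions strictly after it
          return (mask & bm_b) != 0

      db = [[{"a"}, {"b"}, {"c"}], [{"b"}, {"a"}], [{"a"}, {"c"}, {"b"}]]
      support = sum(follows(seq_bitmap(s, "a"), seq_bitmap(s, "b")) for s in db)
      print(support)   # the sequence <a, b> is contained in 2 of the 3 sequences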

  18. Early Obstacle Detection and Avoidance for All to All Traffic Pattern in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Huc, Florian; Jarry, Aubin; Leone, Pierre; Moraru, Luminita; Nikoletseas, Sotiris; Rolim, Jose

    This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performance in all to many and all to all traffic patterns. In a third part we prove that this algorithm produces paths that are optimal up to a constant factor of 2π + 1. In a fourth part we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally we give experimental results to illustrate the efficiency of our algorithms in different scenarios.

  19. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
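
    For reference, a basic single-objective DE/rand/1/bin loop is sketched below on a cheap stand-in objective; the Pareto ranking and neural-network response surface metamodels described in the paper are not reproduced:

      import numpy as np

      def differential_evolution(func, bounds, pop_size=20, F=0.8, CR=0.9,
                                 generations=100, seed=0):
          """Basic DE/rand/1/bin; `bounds` is a list of (low, high) per dimension."""
          rng = np.random.default_rng(seed)
          dim = len(bounds)
          low, high = np.array(bounds, dtype=float).T
          pop = rng.uniform(low, high, size=(pop_size, dim))
          fit = np.array([func(x) for x in pop])
          for _ in range(generations):
              for i in range(pop_size):
                  others = [j for j in range(pop_size) if j != i]
                  a, b, c = pop[rng.choice(others, size=3, replace=False)]
                  mutant = np.clip(a + F * (b - c), low, high)
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True        # keep at least one gene
                  trial = np.where(cross, mutant, pop[i])
                  f_trial = func(trial)
                  if f_trial < fit[i]:                   # greedy selection
                      pop[i], fit[i] = trial, f_trial
          return pop[fit.argmin()], fit.min()

      # Cheap stand-in for an expensive aerodynamic objective: the sphere function.
      best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                              bounds=[(-5.0, 5.0)] * 3)
      print(best_x, best_f)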

  20. The explicit computation of integration algorithms and first integrals for ordinary differential equations with polynomials coefficients using trees

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Grossman, Robert

    1992-01-01

    This note is concerned with the explicit symbolic computation of expressions involving differential operators and their actions on functions. The derivation of specialized numerical algorithms, the explicit symbolic computation of integrals of motion, and the explicit computation of normal forms for nonlinear systems all require such computations. More precisely, if R = k(x(sub 1),...,x(sub N)), where k = R or C, F denotes a differential operator with coefficients from R, and g is a member of R, we describe data structures and algorithms for efficiently computing F(g). The basic idea is to impose a multiplicative structure on the vector space whose basis is the set of finite rooted trees and whose nodes are labeled with the coefficients of the differential operators. Cancellation of two trees with r + 1 nodes translates into cancellation of O(N(exp r)) expressions involving the coefficient functions and their derivatives.

  1. Using trees to compute approximate solutions to ordinary differential equations exactly

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.

  2. Mesh-free based variational level set evolution for breast region segmentation and abnormality detection using mammograms.

    PubMed

    Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George

    2018-01-01

    Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm using mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The evolution of the level set method is done by applying a mesh-free radial basis function (RBF) approach, which removes the limitation of mesh-based approaches. The evolution of the variational level set function is also done by a mesh-based finite difference method for comparison purposes. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and the dominant rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms of the mammographic image analysis society (MIAS) and 500 mammograms from the digital database for screening mammography (DDSM) datasets. Proficiency of the algorithm is quantified by using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, whereas 92.31% sensitivity, 98.45% specificity, and 96.21% accuracy are achieved on the DDSM dataset using the DRLBP feature with the RBF kernel function. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Design and analysis of tilt integral derivative controller with filter for load frequency control of multi-area interconnected power systems.

    PubMed

    Kumar Sahu, Rabindra; Panda, Sidhartha; Biswal, Ashutosh; Chandra Sekhar, G T

    2016-03-01

    In this paper, a novel Tilt Integral Derivative controller with Filter (TIDF) is proposed for Load Frequency Control (LFC) of multi-area power systems. Initially, a two-area power system is considered and the parameters of the TIDF controller are optimized using the Differential Evolution (DE) algorithm employing an Integral of Time multiplied Absolute Error (ITAE) criterion. The superiority of the proposed approach is demonstrated by comparing the results with some recently published heuristic approaches such as Firefly Algorithm (FA), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) optimized PID controllers for the same interconnected power system. Investigations reveal that the proposed TIDF controllers provide better dynamic response compared to PID controllers in terms of minimum undershoots and settling times of frequency as well as tie-line power deviations following a disturbance. The proposed approach is also extended to two widely used three-area test systems considering nonlinearities such as Generation Rate Constraint (GRC) and Governor Dead Band (GDB). To improve the performance of the system, a Thyristor Controlled Series Compensator (TCSC) is also considered and the performance of the TIDF controller in the presence of TCSC is investigated. It is observed that system performance improves with the inclusion of TCSC. Finally, sensitivity analysis is carried out to test the robustness of the proposed controller by varying the system parameters, operating condition and load pattern. It is observed that the proposed controllers are robust and perform satisfactorily with variations in operating condition, system parameters and load pattern. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. A street rubbish detection algorithm based on Sift and RCNN

    NASA Astrophysics Data System (ADS)

    Yu, XiPeng; Chen, Zhong; Zhang, Shuo; Zhang, Ting

    2018-02-01

    This paper presents a street rubbish detection algorithm based on image registration with SIFT features and an RCNN. First, rubbish region proposals are obtained on the real-time street image, and a convolutional neural network (CNN) is trained on a sample set consisting of rubbish and non-rubbish images. Second, for every clean street image, SIFT features are extracted and image registration is performed against the real-time street image to obtain a differential image; the differential image filters out much of the background information, and the region proposals where rubbish may appear are obtained on the differential image by the selective search algorithm. Then, the CNN model is used to classify the image pixel data in each region proposal on the real-time street image. According to the output vector of the CNN, it is judged whether rubbish is present in the region proposal or not; if it is rubbish, the region proposal is marked on the real-time street image. This algorithm avoids the large number of false detections caused by detection on the whole image, because the CNN is used to identify the image only in the region proposals on the real-time street image where rubbish may appear. Unlike traditional object detection algorithms based on region proposals, the region proposals are obtained on the differential image rather than on the whole real-time street image, and the number of invalid region proposals is greatly reduced. The algorithm achieves a high mean average precision (mAP).

  5. PASSion: a pattern growth algorithm-based pipeline for splice junction detection in paired-end RNA-Seq data

    PubMed Central

    Zhang, Yanju; Lameijer, Eric-Wubbo; 't Hoen, Peter A. C.; Ning, Zemin; Slagboom, P. Eline; Ye, Kai

    2012-01-01

    Motivation: RNA-Seq is a powerful technology for the study of transcriptome profiles that uses deep-sequencing technologies. Moreover, it may be used for cellular phenotyping and help establish the etiology of diseases characterized by abnormal splicing patterns. In RNA-Seq, the exact nature of splicing events is buried in the reads that span exon–exon boundaries. The accurate and efficient mapping of these reads to the reference genome is a major challenge. Results: We developed PASSion, a pattern growth algorithm-based pipeline for splice site detection in paired-end RNA-Seq reads. Comparing the performance of PASSion to three existing RNA-Seq analysis pipelines, TopHat, MapSplice and HMMSplicer, revealed that PASSion is competitive with these packages. Moreover, the performance of PASSion is not affected by read length and coverage. It performs better than the other three approaches when detecting junctions in highly abundant transcripts. PASSion has the ability to detect junctions that do not have known splicing motifs, which cannot be found by the other tools. In the two public RNA-Seq datasets, PASSion predicted ∼137 000 and 173 000 splicing events, of which, on average, 82% are known junctions annotated in the Ensembl transcript database and 18% are novel. In addition, our package can discover differential and shared splicing patterns among multiple samples. Availability: The code and utilities can be freely downloaded from https://trac.nbic.nl/passion and ftp://ftp.sanger.ac.uk/pub/zn1/passion Contact: y.zhang@lumc.nl; k.ye@lumc.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22219203

  6. An algorithm for the numerical solution of linear differential games

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polovinkin, E S; Ivanov, G E; Balashov, M V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, and estimates of the errors resulting from the approximation of the game sets by polyhedra are given.

  7. Evaluation and analysis of Seasat-A scanning multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR final Antenna Pattern Correction (APC) algorithm are discussed. The evaluation consisted of: (1) a direct comparison of the outputs of the final and interim APC algorithms; and (2) an analysis of a possible relationship between observed cross-track gradients in the interim brightness temperatures and the asymmetry in the antenna temperature data. Results indicate a bias between the brightness temperatures produced by the final and interim APC algorithms.

  8. Network Sampling and Classification: An Investigation of Network Model Representations

    PubMed Central

    Airoldi, Edoardo M.; Bai, Xue; Carley, Kathleen M.

    2011-01-01

    Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. Algorithm-based approaches to sampling networks have received a great deal of attention in recent literature. Most of these algorithms are based on simple intuitions that associate the full features of connectivity patterns with specific values of only one or two network metrics. Substantive conclusions are crucially dependent on this association holding true. However, the extent to which this simple intuition holds true is not yet known. In this paper, we examine the association between the connectivity patterns that a network sampling algorithm aims to generate and the connectivity patterns of the generated networks, measured by an existing set of popular network metrics. We find that different network sampling algorithms can yield networks with similar connectivity patterns. We also find that the alternative algorithms for the same connectivity pattern can yield networks with different connectivity patterns. We argue that conclusions based on simulated network studies must focus on the full features of the connectivity patterns of a network instead of on the limited set of network metrics for a specific network type. This fact has important implications for network data analysis: for instance, implications related to the way significance is currently assessed. PMID:21666773

  9. Asymptotic integration algorithms for nonhomogeneous, nonlinear, first order, ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Walker, K. P.; Freed, A. D.

    1991-01-01

    New methods for integrating systems of stiff, nonlinear, first order, ordinary differential equations are developed by casting the differential equations into integral form. Nonlinear recursive relations are obtained that allow the solution to a system of equations at time t plus delta t to be obtained in terms of the solution at time t in explicit and implicit forms. Examples of accuracy obtained with the new technique are given by considering systems of nonlinear, first order equations which arise in the study of unified models of viscoplastic behaviors, the spread of the AIDS virus, and predator-prey populations. In general, the new implicit algorithm is unconditionally stable, and has a Jacobian of smaller dimension than that which is acquired by current implicit methods, such as the Euler backward difference algorithm; yet, it gives superior accuracy. The asymptotic explicit and implicit algorithms are suitable for solutions that are of the growing and decaying exponential kinds, respectively, whilst the implicit Euler-Maclaurin algorithm is superior when the solution oscillates, i.e., when there are regions in which both growing and decaying exponential solutions exist.

  10. Algorithm refinement for stochastic partial differential equations: II. Correlated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.

    2005-08-10

    We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long-ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.

  11. Ripple/Carcinoid pattern sebaceoma with apocrine differentiation.

    PubMed

    Misago, Noriyuki; Narisawa, Yutaka

    2011-02-01

    Sebaceoma is a benign sebaceous neoplasm that has been reported to show characteristic growth patterns, such as ripple, labyrinthine/sinusoidal, and carcinoid-like patterns. Another recent finding regarding sebaceoma is the observation of apocrine differentiation within the sebaceoma lesion. This report describes a case of carcinoid (partially ripple and labyrinthine) pattern sebaceoma with apocrine differentiation, together with a literature review and immunohistochemical studies. The various characteristic growth patterns in sebaceoma are suggested to be simply variations of the same growth pattern arranged in cords, for which the unified term "ripple/carcinoid pattern" is proposed. The primitive sebaceous germinative cells in sebaceoma may still have the ability to undergo apocrine differentiation. Most reports so far on sebaceoma with apocrine differentiation, including the present case, describe a ripple/carcinoid pattern, thus suggesting that ripple/carcinoid pattern sebaceoma is composed of more primitive sebaceous germinative cells than conventional sebaceoma.

  12. Quality detection system and method of micro-accessory based on microscopic vision

    NASA Astrophysics Data System (ADS)

    Li, Dongjie; Wang, Shiwei; Fu, Yu

    2017-10-01

    Because traditional manual inspection of micro-accessories suffers from heavy workload, low efficiency, and large human error, a quality inspection system for micro-accessories has been designed. Microscopic vision technology is used for quality inspection, which optimizes the structure of the detection system. A stepper motor drives the rotating micro-platform to transfer the device under inspection, and the microscopic vision system acquires image information of the micro-accessory. Image processing and pattern matching methods, a variable-scale Sobel differential edge detection algorithm, and an improved Zernike-moments sub-pixel edge detection algorithm are combined in the system in order to achieve more detailed and accurate edge detection of defects. The proposed system can accurately extract edges from complex signals and then distinguish qualified from unqualified products with high recognition precision.
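
    A much simpler stand-in for the edge detection stage (a plain Sobel gradient-magnitude edge map with a relative threshold; the variable-scale Sobel and Zernike-moment sub-pixel refinement are not reproduced) can be sketched as:

      import numpy as np
      from scipy import ndimage

      def sobel_edge_map(image, threshold=0.25):
          """Gradient-magnitude edge map; pixels above `threshold` (relative to
          the maximum gradient) are marked as edges."""
          img = image.astype(float)
          gx = ndimage.sobel(img, axis=1)
          gy = ndimage.sobel(img, axis=0)
          magnitude = np.hypot(gx, gy)
          return magnitude > threshold * magnitude.max()

      # Synthetic test image: a bright square on a dark background.
      img = np.zeros((32, 32))
      img[8:24, 8:24] = 1.0
      print(int(sobel_edge_map(img).sum()), "edge pixels found")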

  13. FAST SIMULATION OF SOLID TUMORS THERMAL ABLATION TREATMENTS WITH A 3D REACTION DIFFUSION MODEL *

    PubMed Central

    BERTACCINI, DANIELE; CALVETTI, DANIELA

    2007-01-01

    An efficient computational method for near real-time simulation of thermal ablation of tumors via radio frequencies is proposed. Model simulations of the temperature field in a 3D portion of tissue containing the tumoral mass for different patterns of source heating can be used to design the ablation procedure. The availability of a very efficient computational scheme makes it possible to update the predicted outcome of the procedure in real time. In the algorithms proposed here, a discretization in space of the governing equations is followed by an adaptive time integration based on implicit multistep formulas. A modification of the MATLAB ode15s function, which uses Krylov-subspace iterative methods for the solution of the linear systems arising at each integration step, makes it possible to perform the simulations on a standard desktop for much finer grids than with the built-in ode15s. The proposed algorithm can be applied to a wide class of nonlinear parabolic differential equations. PMID:17173888

  14. Pattern Discovery and Change Detection of Online Music Query Streams

    NASA Astrophysics Data System (ADS)

    Li, Hua-Fu

    In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of the proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. Based on this bit-sequence representation, the FTP-stream algorithm can evaluate pattern supports against the support threshold in only a single pass, taking advantage of the bitwise left-shift and AND operations on the representation. Experiments show that the proposed algorithm scans the music query stream only once, and runs significantly faster and consumes less memory than existing algorithms, such as SWFI-stream and Moment.

  15. Analysis of L-band Multi-Channel Sea Clutter

    DTIC Science & Technology

    2010-08-01

    Some researchers found that the use of a hybrid algorithm of PS and GA could accelerate the convergence for array beamforming designs (Yeo and Lu...). To be shown is array failure correction using the PS algorithm. Assume element 5 of a 32 half-wavelength spacing linear array is in failure. The goal... algorithm. The blue one is the 20 dB Chebyshev pattern and the template in red is the goal pattern to achieve. Two corrected beam patterns are

  16. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates of the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.
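
    A hedged sketch of the decomposition step, projecting training patterns onto a few leading principal components obtained by SVD so that only a small coefficient vector remains to be optimized (the data and component count are illustrative, not the authors' setup):

      import numpy as np

      def pca_basis(patterns, n_components):
          """Mean pattern and leading principal directions (rows = patterns)."""
          mean = patterns.mean(axis=0)
          _, _, vt = np.linalg.svd(patterns - mean, full_matrices=False)
          return mean, vt[:n_components]

      def project(pattern, mean, components):
          """Low-dimensional coefficients of one pattern in the PCA basis."""
          return components @ (pattern - mean)

      def reconstruct(coeffs, mean, components):
          return mean + coeffs @ components

      rng = np.random.default_rng(0)
      train = rng.normal(size=(50, 40))           # 50 training patterns of length 40
      mean, comps = pca_basis(train, n_components=5)
      coeffs = project(train[0], mean, comps)     # only 5 numbers left to optimize
      print(coeffs.shape,
            float(np.linalg.norm(train[0] - reconstruct(coeffs, mean, comps))))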

  17. Algorithmic transformation of multi-loop master integrals to a canonical basis with CANONICA

    NASA Astrophysics Data System (ADS)

    Meyer, Christoph

    2018-01-01

    The integration of differential equations of Feynman integrals can be greatly facilitated by using a canonical basis. This paper presents the Mathematica package CANONICA, which implements a recently developed algorithm to automatize the transformation to a canonical basis. This represents the first publicly available implementation suitable for differential equations depending on multiple scales. In addition to the presentation of the package, this paper extends the description of some aspects of the algorithm, including a proof of the uniqueness of canonical forms up to constant transformations.

  18. Differentially private distributed logistic regression using private and public data.

    PubMed

    Ji, Zhanglong; Jiang, Xiaoqian; Wang, Shuang; Xiong, Li; Ohno-Machado, Lucila

    2014-01-01

    Privacy protection is an important issue in medical informatics, and differential privacy is a state-of-the-art framework for data privacy research. Differential privacy offers provable privacy against attackers who have auxiliary information, and can be applied to data mining models (for example, logistic regression). However, differentially private methods sometimes introduce too much noise and make outputs less useful. Given available public data in medical research (e.g. from patients who sign open-consent agreements), we can design algorithms that use both public and private data sets to decrease the amount of noise that is introduced. In this paper, we modify the update step in the Newton-Raphson method to propose a differentially private distributed logistic regression model based on both public and private data. We try our algorithm on three different data sets, and show its advantage over: (1) a logistic regression model based solely on public data, and (2) a differentially private distributed logistic regression model based on private data under various scenarios. Logistic regression models built with our new algorithm based on both private and public datasets demonstrate better utility than models trained on private or public datasets alone, without sacrificing the rigorous privacy guarantee.

  19. Identification of autism spectrum disorder using deep learning and the ABIDE dataset.

    PubMed

    Heinsfeld, Anibal Sólon; Franco, Alexandre Rosa; Craddock, R Cameron; Buchweitz, Augusto; Meneguzzi, Felipe

    2018-01-01

    The goal of the present study was to apply deep learning algorithms to identify autism spectrum disorder (ASD) patients from a large brain imaging dataset, based solely on the patients' brain activation patterns. We investigated ASD patients' brain imaging data from a world-wide multi-site database known as ABIDE (Autism Brain Imaging Data Exchange). ASD is a brain-based disorder characterized by social deficits and repetitive behaviors. According to recent Centers for Disease Control data, ASD affects one in 68 children in the United States. We investigated patterns of functional connectivity that objectively identify ASD participants from functional brain imaging data, and attempted to unveil the neural patterns that emerged from the classification. The results improved the state of the art by achieving 70% accuracy in identification of ASD versus control patients in the dataset. The patterns that emerged from the classification show an anticorrelation of brain function between anterior and posterior areas of the brain; the anticorrelation corroborates current empirical evidence of anterior-posterior disruption in brain connectivity in ASD. We present the results and identify the areas of the brain that contributed most to differentiating ASD from typically developing controls as per our deep learning model.

  20. Two-dimensional wavelet analysis based classification of gas chromatogram differential mobility spectrometry signals.

    PubMed

    Zhao, Weixiang; Sankaran, Shankar; Ibáñez, Ana M; Dandekar, Abhaya M; Davis, Cristina E

    2009-08-04

    This study introduces two-dimensional (2-D) wavelet analysis for the classification of gas chromatogram differential mobility spectrometry (GC/DMS) data, which are composed of retention time, compensation voltage, and corresponding intensities. One reported method to process such large data sets is to convert 2-D signals to 1-D signals by summing intensities either across retention time or compensation voltage, but it can lose important signal information in one data dimension. A 2-D wavelet analysis approach keeps the 2-D structure of the original signals while significantly reducing data size. We applied this feature extraction method to 2-D GC/DMS signals measured from control and disordered fruit and then employed two typical classification algorithms to test the effects of the resultant features on chemical pattern recognition. Yielding a 93.3% accuracy in separating data from control and disordered fruit samples, 2-D wavelet analysis not only proves its feasibility for extracting features from the original 2-D signals but also shows its superiority over conventional feature extraction methods, including converting 2-D to 1-D signals and selecting distinguishable pixels from the training set. Furthermore, this process does not require coupling with specific pattern recognition methods, which may help ensure wide applications of this method to 2-D spectrometry data.
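
    A minimal feature-extraction sketch using the PyWavelets package, keeping only the coarse approximation sub-band of a 2-D decomposition as the feature vector (the wavelet family, level, and synthetic data are illustrative choices, not those of the study):

      import numpy as np
      import pywt   # PyWavelets

      def wavelet_features(signal_2d, wavelet="db2", level=2):
          """Keep only the coarse approximation coefficients of a 2-D wavelet
          decomposition as a compact feature vector."""
          coeffs = pywt.wavedec2(signal_2d, wavelet=wavelet, level=level)
          return coeffs[0].ravel()                # approximation sub-band only

      rng = np.random.default_rng(0)
      spectrum = rng.normal(size=(64, 128))       # stand-in for a GC/DMS intensity map
      features = wavelet_features(spectrum)
      print(spectrum.size, "->", features.size, "features")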

  1. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  2. A new algorithm for finding survival coefficients employed in reliability equations

    NASA Technical Reports Server (NTRS)

    Bouricius, W. G.; Flehinger, B. J.

    1973-01-01

    Product reliabilities are predicted from past failure rates and a reasonable estimate of future failure rates. The algorithm is used to calculate the probability that the product will function correctly. It sums the probabilities of each survival pattern, and the number of permutations for that pattern, over all possible ways in which the product can survive.

  3. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions which meet given constraints for the tumour as well as for any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. In our experimental observations, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous nature of these algorithms makes them ideally suited to parallel computing technologies.
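
    The following sketch shows Cimmino's simultaneous projection iteration for a small system of linear inequalities Ax <= b, which is the mathematical form the inverse-planning problem above is cast into; the weights, relaxation parameter and toy constraints are illustrative assumptions. Because every constraint contributes its correction in the same step, the loop parallelizes naturally, which is the property the abstract highlights.

      # Sketch of Cimmino's simultaneous projection method for A x <= b.
      # Each iterate moves toward a weighted average of its projections onto
      # the violated half-spaces; weights and relaxation are illustrative.
      import numpy as np

      def cimmino(A, b, x0, lam=1.0, iters=200):
          m = A.shape[0]
          w = np.full(m, 1.0 / m)                 # equal weights
          x = x0.astype(float).copy()
          for _ in range(iters):
              resid = A @ x - b                   # > 0 where a constraint is violated
              viol = np.maximum(resid, 0.0)
              norms2 = np.sum(A * A, axis=1)
              # projection of x onto half-space i is x - viol_i * a_i / ||a_i||^2
              step = (w * viol / norms2) @ A      # weighted average of all corrections
              x -= lam * step
          return x

      A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
      b = np.array([1.0, 0.0, 0.0])               # feasible set: x, y >= 0 and x + y <= 1
      print(cimmino(A, b, x0=np.array([2.0, 2.0])))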

  4. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter image from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.

  5. GeneAnalytics: An Integrative Gene Set Analysis Tool for Next Generation Sequencing, RNAseq and Microarray Data.

    PubMed

    Ben-Ari Fuchs, Shani; Lieder, Iris; Stelzer, Gil; Mazor, Yaron; Buzhor, Ella; Kaplan, Sergey; Bogoch, Yoel; Plaschkes, Inbar; Shitrit, Alina; Rappaport, Noa; Kohn, Asher; Edgar, Ron; Shenhav, Liraz; Safran, Marilyn; Lancet, Doron; Guan-Golan, Yaron; Warshawsky, David; Shtrichman, Ronit

    2016-03-01

    Postgenomics data are produced in large volumes by life sciences and clinical applications of novel omics diagnostics and therapeutics for precision medicine. To move from "data-to-knowledge-to-innovation," a crucial missing step in the current era is, however, our limited understanding of the biological and clinical contexts associated with the data. Prominent among the emerging remedies to this challenge are gene set enrichment tools. This study reports on GeneAnalytics™ ( geneanalytics.genecards.org ), a comprehensive and easy-to-apply gene set analysis tool for rapid contextualization of expression patterns and functional signatures embedded in the postgenomics Big Data domains, such as Next Generation Sequencing (NGS), RNAseq, and microarray experiments. GeneAnalytics' differentiating features include in-depth evidence-based scoring algorithms, an intuitive user interface and proprietary unified data. GeneAnalytics employs LifeMap Science's GeneCards suite, including GeneCards®, the human gene database; MalaCards, the human diseases database; and PathCards, the biological pathways database. Expression-based analysis in GeneAnalytics relies on LifeMap Discovery®, the embryonic development and stem cells database, which includes manually curated expression data for normal and diseased tissues, enabling an advanced matching algorithm for gene-tissue association. This assists in evaluating differentiation protocols and discovering biomarkers for tissues and cells. Results are directly linked to gene, disease, or cell "cards" in the GeneCards suite. Future developments aim to enhance the GeneAnalytics algorithm as well as its visualizations, employing varied graphical display items. Such attributes make GeneAnalytics a broadly applicable postgenomics data analysis and interpretation tool for translation of data to knowledge-based innovation in various Big Data fields such as precision medicine, ecogenomics, nutrigenomics, pharmacogenomics, vaccinomics, and others yet to emerge on the postgenomics horizon.

  6. Cognitive Machine-Learning Algorithm for Cardiac Imaging: A Pilot Study for Differentiating Constrictive Pericarditis From Restrictive Cardiomyopathy.

    PubMed

    Sengupta, Partho P; Huang, Yen-Min; Bansal, Manish; Ashrafi, Ali; Fisher, Matt; Shameer, Khader; Gall, Walt; Dudley, Joel T

    2016-06-01

    Associating a patient's profile with the memories of prototypical patients built through previous repeat clinical experience is a key process in clinical judgment. We hypothesized that a similar process using a cognitive computing tool would be well suited for learning and recalling multidimensional attributes of speckle tracking echocardiography data sets derived from patients with known constrictive pericarditis and restrictive cardiomyopathy. Clinical and echocardiographic data of 50 patients with constrictive pericarditis and 44 with restrictive cardiomyopathy were used for developing an associative memory classifier-based machine-learning algorithm. The speckle tracking echocardiography data were normalized in reference to 47 controls with no structural heart disease, and the diagnostic area under the receiver operating characteristic curve of the associative memory classifier was evaluated for differentiating constrictive pericarditis from restrictive cardiomyopathy. Using only speckle tracking echocardiography variables, associative memory classifier achieved a diagnostic area under the curve of 89.2%, which improved to 96.2% with addition of 4 echocardiographic variables. In comparison, the area under the curve of early diastolic mitral annular velocity and left ventricular longitudinal strain were 82.1% and 63.7%, respectively. Furthermore, the associative memory classifier demonstrated greater accuracy and shorter learning curves than other machine-learning approaches, with accuracy asymptotically approaching 90% after a training fraction of 0.3 and remaining flat at higher training fractions. This study demonstrates feasibility of a cognitive machine-learning approach for learning and recalling patterns observed during echocardiographic evaluations. Incorporation of machine-learning algorithms in cardiac imaging may aid standardized assessments and support the quality of interpretations, particularly for novice readers with limited experience. © 2016 American Heart Association, Inc.

  7. Exploiting Sequential Patterns Found in Users' Solutions and Virtual Tutor Behavior to Improve Assistance in ITS

    ERIC Educational Resources Information Center

    Fournier-Viger, Philippe; Faghihi, Usef; Nkambou, Roger; Nguifo, Engelbert Mephu

    2010-01-01

    We propose to mine temporal patterns in Intelligent Tutoring Systems (ITSs) to uncover useful knowledge that can enhance their ability to provide assistance. To discover patterns, we suggest using a custom, sequential pattern-mining algorithm. Two ways of applying the algorithm to enhance an ITS's capabilities are addressed. The first is to…

  8. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
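
    A minimal coordinate pattern-search loop of the kind the equalizer above relies on, applied to a toy cost function rather than the higher-order-statistics equalization cost; the poll directions, mesh-refinement factor and stopping tolerance are illustrative assumptions.

      # Sketch of a simple generalized pattern search: poll the +/- coordinate
      # directions and refine the mesh after an unsuccessful poll.
      import numpy as np

      def pattern_search(cost, x0, mesh=1.0, tol=1e-6, max_iter=1000):
          x = np.asarray(x0, dtype=float)
          fx = cost(x)
          for _ in range(max_iter):
              improved = False
              for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):  # poll directions
                  trial = x + mesh * d
                  ft = cost(trial)
                  if ft < fx:
                      x, fx, improved = trial, ft, True
                      break
              if not improved:
                  mesh *= 0.5                     # refine the mesh on an unsuccessful poll
                  if mesh < tol:
                      break
          return x, fx

      toy_cost = lambda z: (z[0] - 1.0) ** 2 + 10.0 * (z[1] + 0.5) ** 2
      print(pattern_search(toy_cost, x0=[4.0, 4.0]))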

  9. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) solved with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency with 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We expect that parallel computing technology will become a basic method for computationally intensive fractional applications in the near future.

  10. Optimization of the p-xylene oxidation process by a multi-objective differential evolution algorithm with adaptive parameters co-derived with the population-based incremental learning algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Zhan; Yan, Xuefeng

    2018-04-01

    Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the product, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm is designed as a co-evolutionary system: each solution individual has its own associated parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and that it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.
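
    For orientation, the sketch below shows the classic DE/rand/1/bin operators that multi-objective differential-evolution variants build on; here the mutation factor F and crossover rate CR are fixed constants, whereas PBMODE co-evolves per-individual parameters with PBIL and handles multiple objectives.

      # Sketch of the DE/rand/1/bin operators: differential mutation, binomial
      # crossover and greedy selection, applied to a toy sphere function.
      import numpy as np

      rng = np.random.default_rng(1)

      def de_step(pop, fitness, F=0.5, CR=0.9):
          n, dim = pop.shape
          new_pop = pop.copy()
          for i in range(n):
              a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
              mutant = a + F * (b - c)                         # differential mutation
              cross = rng.random(dim) < CR
              cross[rng.integers(dim)] = True                  # ensure >= 1 gene from mutant
              trial = np.where(cross, mutant, pop[i])          # binomial crossover
              if fitness(trial) < fitness(pop[i]):             # greedy selection
                  new_pop[i] = trial
          return new_pop

      sphere = lambda x: float(np.sum(x ** 2))
      pop = rng.uniform(-5, 5, size=(20, 4))
      for _ in range(100):
          pop = de_step(pop, sphere)
      print("best fitness:", min(sphere(x) for x in pop))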

  11. Differential carrier phase recovery for QPSK optical coherent systems with integrated tunable lasers.

    PubMed

    Fatadin, Irshaad; Ives, David; Savory, Seb J

    2013-04-22

    The performance of a differential carrier phase recovery algorithm is investigated for the quadrature phase shift keying (QPSK) modulation format with an integrated tunable laser. The phase noise of the widely tunable laser, measured using a digital coherent receiver, is shown to exhibit significant drift compared to a standard distributed feedback (DFB) laser due to an enhanced low-frequency noise component. The simulated performance of the differential algorithm is compared to Viterbi-Viterbi phase estimation at different baud rates using the measured phase noise of the integrated tunable laser.
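
    A small sketch of differential detection for QPSK, the idea behind the carrier phase recovery studied above: information is carried in the phase change between consecutive symbols, so slow laser phase drift largely cancels. The Wiener phase-noise model and noise levels are illustrative assumptions, not the measured tunable-laser traces.

      # Sketch: differentially encoded QPSK with a slowly drifting laser phase.
      # Detection compares consecutive symbols, so the common drift cancels.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 10_000
      data = rng.integers(0, 4, size=n)                       # 2 bits per symbol
      dphi = data * np.pi / 2                                 # differential phase steps
      tx_phase = np.cumsum(dphi)                              # differential encoding
      laser = np.cumsum(rng.normal(0, 0.02, size=n))          # Wiener phase noise (assumed)
      rx = np.exp(1j * (tx_phase + laser)) \
           + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

      diff = rx[1:] * np.conj(rx[:-1])                        # phase change between symbols
      detected = np.mod(np.round(np.angle(diff) / (np.pi / 2)), 4).astype(int)
      errors = np.count_nonzero(detected != data[1:])
      print("symbol error rate:", errors / (n - 1))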

  12. Comparison of phase unwrapping algorithms for topography reconstruction based on digital speckle pattern interferometry

    NASA Astrophysics Data System (ADS)

    Li, Yuanbo; Cui, Xiaoqian; Wang, Hongbei; Zhao, Mengge; Ding, Hongbin

    2017-10-01

    Digital speckle pattern interferometry (DSPI) can diagnose topography evolution in a real-time, continuous and non-destructive manner, and has been considered one of the most promising techniques for Plasma-Facing Component (PFC) topography diagnostics under the complicated environment of a tokamak. It is important for digital speckle pattern interferometry to enhance speckle patterns and obtain the real topography of the ablated crater. In this paper, two kinds of numerical model based on the flood-fill algorithm have been developed to obtain the real profile by unwrapping the wrapped phase in the speckle interference pattern, which can be calculated from four intensity images by means of the 4-step phase-shifting technique. During phase unwrapping with the flood-fill algorithm, noise pollution and other inevitable factors lead to poor quality of the reconstruction results, which affects the authenticity of the restored topography. The calculation of quality parameters was introduced to obtain a quality map from the wrapped phase map, and this work presents two different methods to calculate the quality parameters. The quality parameters are then used to guide the path of the flood-fill algorithm, and pixels with good quality parameters are given priority in the calculation, so that the quality of the speckle interference pattern reconstruction results is improved. A comparison between the flood-fill algorithm suited to speckle pattern interferometry and the quality-guided flood-fill algorithm (with the two different calculation approaches) shows that the errors caused by noise pollution and the discontinuity of the stripes were successfully reduced.
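
    The sketch below illustrates the two ingredients discussed above: a wrapped phase map computed from four phase-shifted intensity images, and a quality-guided region-growing unwrap driven by a priority queue. The quality metric, the synthetic fringe data and the region-growing variant are illustrative assumptions rather than the paper's two flood-fill models.

      # Sketch: wrapped phase from 4-step phase shifting, then quality-guided
      # region growing (highest-quality pixels are unwrapped first).
      import heapq
      import numpy as np

      def wrapped_phase(I1, I2, I3, I4):
          return np.arctan2(I4 - I2, I1 - I3)

      def quality_map(phi):
          gy, gx = np.gradient(phi)
          return -(np.abs(gx) + np.abs(gy))       # less negative = better quality

      def quality_guided_unwrap(phi, q):
          h, w = phi.shape
          out = phi.copy()
          done = np.zeros_like(phi, dtype=bool)
          seed = np.unravel_index(np.argmax(q), q.shape)
          done[seed] = True
          heap = []
          def push_neighbors(y, x):
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ny, nx = y + dy, x + dx
                  if 0 <= ny < h and 0 <= nx < w and not done[ny, nx]:
                      heapq.heappush(heap, (-q[ny, nx], ny, nx, y, x))
          push_neighbors(*seed)
          while heap:
              _, y, x, py, px = heapq.heappop(heap)
              if done[y, x]:
                  continue
              # unwrap relative to the already-unwrapped neighbor (py, px)
              out[y, x] = out[py, px] + np.angle(np.exp(1j * (phi[y, x] - out[py, px])))
              done[y, x] = True
              push_neighbors(y, x)
          return out

      y, x = np.mgrid[0:64, 0:64]
      true_phase = 0.01 * ((x - 32) ** 2 + (y - 32) ** 2)      # synthetic crater-like profile
      I = [1 + np.cos(true_phase + k * np.pi / 2) for k in range(4)]
      phi_w = wrapped_phase(*I)
      unwrapped = quality_guided_unwrap(phi_w, quality_map(phi_w))
      err = (unwrapped - unwrapped[32, 32]) - (true_phase - true_phase[32, 32])
      print("max abs reconstruction error:", np.max(np.abs(err)))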

  13. High performance embedded system for real-time pattern matching

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-02-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics, and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post-processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed in a 2D or 3D space, on black and white or grayscale images, depending on the application, thus exponentially increasing the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, and performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device.

  14. GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns

    DOE PAGES

    Senin, Pavel; Lin, Jessica; Wang, Xing; ...

    2018-02-23

    The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their length known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids the time series recurrent and anomalous pattern discovery.
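
    For context, grammar-based discovery of this kind rests on symbolic discretization; the sketch below shows a SAX-style transform (z-normalization, piecewise aggregate approximation, Gaussian breakpoints) whose output word sequence would then feed grammar inference. The window length, PAA size and alphabet size are illustrative parameters.

      # Sketch: SAX-style discretization of sliding windows of a time series.
      # Each window is z-normalized, averaged over PAA segments, and each
      # segment mean is mapped to a letter via standard-normal breakpoints.
      import numpy as np

      def sax_word(window, paa_size=4, alphabet="abcd"):
          z = (window - window.mean()) / (window.std() + 1e-12)     # z-normalize
          segments = np.array_split(z, paa_size)                    # PAA segments
          means = np.array([s.mean() for s in segments])
          cuts = np.array([-0.6745, 0.0, 0.6745])                   # N(0,1) quartile breakpoints
          return "".join(alphabet[np.searchsorted(cuts, m)] for m in means)

      rng = np.random.default_rng(3)
      series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
      words = [sax_word(series[i:i + 50]) for i in range(0, len(series) - 50, 10)]
      print(words[:10])   # this word sequence would then feed grammar inference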

  15. GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senin, Pavel; Lin, Jessica; Wang, Xing

    The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their length known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids the time series recurrent and anomalous pattern discovery.

  16. Sensitivity analysis of multi-objective optimization of CPG parameters for quadruped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2012-09-01

    In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Pattern Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the limb movement necessary to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of velocity, of the wide stability margin and of behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives in finding different walking gait solutions for the quadruped robot.

  17. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms

    PubMed Central

    Hernández-Ocaña, Betania; Pozos-Parra, Ma. Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators looks to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, with respect to those mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem. PMID:27057156

  18. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms.

    PubMed

    Hernández-Ocaña, Betania; Pozos-Parra, Ma Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators looks to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, with respect to those mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem.

  19. Differential evolution-simulated annealing for multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.

    2017-10-01

    Multiple sequence alignments (MSAs) are used in the analysis of molecular evolution and sequence-structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is applied to optimizing multiple sequence alignments based on structural information, non-gaps percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and an SA-like selection scheme for the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem for the hybrid evolutionary algorithm DESA; we therefore name the algorithm DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions, particularly for the MSA problem evaluated on the three objectives.

  20. Detection of algorithmic trading

    NASA Astrophysics Data System (ADS)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.

  1. Geographically weighted regression as a generalized Wombling to detect barriers to gene flow.

    PubMed

    Diniz-Filho, José Alexandre Felizola; Soares, Thannya Nascimento; de Campos Telles, Mariana Pires

    2016-08-01

    Barriers to gene flow play an important role in structuring populations, especially in human-modified landscapes, and several methods have been proposed to detect such barriers. However, most applications of these methods require a relatively large number of individuals or populations distributed in space, connected by vertices from Delaunay or Gabriel networks. Here we show, using both simulated and empirical data, a new application of geographically weighted regression (GWR) to detect such barriers, modeling the genetic variation as a "local" linear function of geographic coordinates (latitude and longitude). In GWR, standard regression statistics, such as R(2) and slopes, are estimated for each sampling unit and thus can be mapped. Peaks in these local statistics are then expected close to the barriers if genetic discontinuities exist, capturing a higher rate of population differentiation among neighboring populations. Isolation-by-distance simulations on a longitudinally warped lattice revealed that higher local slopes from GWR coincide with the barrier detected with the Monmonier algorithm. Even with a relatively small effect of the barrier, the power of local GWR in detecting the east-west barriers was higher than 95%. We also analyzed empirical data on genetic differentiation among tree populations of Dipteryx alata and Eugenia dysenterica in the Brazilian Cerrado. GWR was applied to the principal coordinate of the pairwise FST matrix based on microsatellite loci. In both simulated and empirical data, the GWR results were consistent with discontinuities detected by the Monmonier algorithm, as well as with previous explanations for the spatial patterns of genetic differentiation in the two species. Our analyses reveal how this new application of GWR can be viewed as a generalized Wombling in continuous space and be a useful approach to detect barriers and discontinuities to gene flow.
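
    A minimal sketch of the GWR idea described above: at each sampling site, a weighted least-squares regression of a genetic-variation score on the geographic coordinates is fitted with Gaussian kernel weights, and peaks in the local slopes or local R2 flag putative barriers. The bandwidth, the single response variable and the synthetic east-west break are illustrative assumptions.

      # Sketch: local (geographically weighted) regression statistics at each
      # sampling site, using a Gaussian spatial kernel.
      import numpy as np

      def gwr_local_stats(coords, y, bandwidth=1.0):
          n = len(y)
          X = np.column_stack([np.ones(n), coords])        # intercept + lon + lat
          slopes, r2 = np.zeros((n, 2)), np.zeros(n)
          for i in range(n):
              d2 = np.sum((coords - coords[i]) ** 2, axis=1)
              w = np.exp(-d2 / (2 * bandwidth ** 2))        # Gaussian kernel weights
              W = np.diag(w)
              beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
              yhat = X @ beta
              ybar = np.average(y, weights=w)
              r2[i] = 1 - np.sum(w * (y - yhat) ** 2) / np.sum(w * (y - ybar) ** 2)
              slopes[i] = beta[1:]
          return slopes, r2          # peaks in |slope| / R2 flag putative barriers

      rng = np.random.default_rng(4)
      coords = rng.uniform(0, 10, size=(80, 2))
      y = (coords[:, 0] > 5).astype(float) + 0.1 * rng.normal(size=80)  # east-west break
      slopes, r2 = gwr_local_stats(coords, y, bandwidth=1.5)
      print("max local |slope|:", np.abs(slopes).max())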

  2. Simulation and performance of an artificial retina for 40 MHz track reconstruction

    DOE PAGES

    Abba, A.; Bedeschi, F.; Citterio, M.; ...

    2015-03-05

    We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb with the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We found the performance of the retina pattern-recognition algorithm to be comparable with that of the full LHCb reconstruction algorithm.

  3. Fault Tolerant Frequent Pattern Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

    FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.

  4. Measurement of ultrasonic fields in transparent media using a scanning differential interferometer

    NASA Technical Reports Server (NTRS)

    Dockery, G. D.; Claus, R. O.

    1983-01-01

    An experimental system for the detection of three dimensional acoustic fields in optically transparent media using a dual beam differential interferometer is described. In this system, two coherent, parallel, focused laser beams are passed through the specimen and the interference fringe pattern which results when these beams are combined shifts linearly by an amount which is related to the optical pathlength difference between the two beams. It is shown that for small signals, the detector output is directly proportional to the amplitude of the acoustic field integrated along the optical beam path through the specimen. A water tank and motorized optical platform were constructed to allow these dual beams to be scanned through an ultrasonic field generated by a piezoelectric transducer at various distances from the transducer. Scan data for the near, Fresnel, and far zones of a uniform, circular transducer are presented and an algorithm for constructing the radial field profile from this integrated optical data, assuming cylindrical symmetry, is described.

  5. Automating the parallel processing of fluid and structural dynamics calculations

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations, that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  6. Optimal pattern distributions in Rete-based production systems

    NASA Technical Reports Server (NTRS)

    Scott, Stephen L.

    1994-01-01

    Since its introduction into the AI community in the early 1980's, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.

  7. An efficient, versatile and scalable pattern growth approach to mine frequent patterns in unaligned protein sequences.

    PubMed

    Ye, Kai; Kosters, Walter A; Ijzerman, Adriaan P

    2007-03-15

    Pattern discovery in protein sequences is often based on multiple sequence alignments (MSA). The procedure can be computationally intensive and often requires manual adjustment, which may be particularly difficult for a set of deviating sequences. In contrast, two algorithms, PRATT2 (http://www.ebi.ac.uk/pratt/) and TEIRESIAS (http://cbcsrv.watson.ibm.com/) are used to directly identify frequent patterns from unaligned biological sequences without an attempt to align them. Here we propose a new algorithm with more efficiency and more functionality than both PRATT2 and TEIRESIAS, and discuss some of its applications to G protein-coupled receptors, a protein family of important drug targets. In this study, we designed and implemented six algorithms to mine three different pattern types from either one or two datasets using a pattern growth approach. We compared our approach to PRATT2 and TEIRESIAS in efficiency, completeness and the diversity of pattern types. Compared to PRATT2, our approach is faster, capable of processing large datasets and able to identify the so-called type III patterns. Our approach is comparable to TEIRESIAS in the discovery of the so-called type I patterns but has additional functionality, such as mining the so-called type II and type III patterns and finding discriminating patterns between two datasets. The source code for the pattern growth algorithms and their pseudo-code are available at http://www.liacs.nl/home/kosters/pg/.
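
    The toy sketch below conveys the pattern-growth idea on unaligned sequences: candidate motifs are extended one residue at a time and any extension whose support falls below a threshold is pruned, so the full pattern space is never enumerated. It is deliberately simplified (contiguous patterns only, tiny toy sequences) and is not the paper's algorithm.

      # Sketch of pattern growth: depth-first extension of frequent motifs,
      # pruning any candidate whose support drops below min_support.
      def support(pattern, sequences):
          return sum(pattern in s for s in sequences)

      def grow(prefix, sequences, min_support, alphabet, results):
          for a in alphabet:
              cand = prefix + a
              if support(cand, sequences) >= min_support:
                  results.add(cand)
                  grow(cand, sequences, min_support, alphabet, results)  # extend further

      def mine_frequent(sequences, min_support=2):
          alphabet = sorted(set("".join(sequences)))
          results = set()
          grow("", sequences, min_support, alphabet, results)
          return results

      seqs = ["MKTAYIAKQR", "MKTAYLAKQR", "GGTAYIAKPP"]
      print(sorted(p for p in mine_frequent(seqs, 2) if len(p) >= 3))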

  8. Handling Dynamic Weights in Weighted Frequent Pattern Mining

    NASA Astrophysics Data System (ADS)

    Ahmed, Chowdhury Farhan; Tanbeer, Syed Khairuzzaman; Jeong, Byeong-Soo; Lee, Young-Koo

    Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. But in real world scenarios, the weight (price or significance) of an item can vary with time. Reflecting these changes in item weight is necessary in several mining applications, such as retail market data analysis and web click stream analysis. In this paper, we introduce the concept of a dynamic weight for each item, and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can address situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, so it is eligible for use in stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.

  9. TOPTRAC: Topical Trajectory Pattern Mining

    PubMed Central

    Kim, Younghoon; Han, Jiawei; Yuan, Cangzhou

    2015-01-01

    With the increasing use of GPS-enabled mobile phones, geo-tagging, which refers to adding GPS information to media such as micro-blogging messages or photos, has seen a surge in popularity recently. This enables us to not only browse information based on locations, but also discover patterns in the location-based behaviors of users. Many techniques have been developed to find the patterns of people's movements using GPS data, but latent topics in text messages posted with local contexts have not been utilized effectively. In this paper, we present a latent topic-based clustering algorithm to discover patterns in the trajectories of geo-tagged text messages. We propose a novel probabilistic model to capture the semantic regions where people post messages with a coherent topic as well as the patterns of movement between the semantic regions. Based on the model, we develop an efficient inference algorithm to calculate model parameters. By exploiting the estimated model, we next devise a clustering algorithm to find the significant movement patterns that appear frequently in data. Our experiments on real-life data sets show that the proposed algorithm finds diverse and interesting trajectory patterns and identifies the semantic regions in a finer granularity than the traditional geographical clustering methods. PMID:26709365

  10. Differential Diagnosis of Erythmato-Squamous Diseases Using Classification and Regression Tree

    PubMed Central

    Maghooli, Keivan; Langarizadeh, Mostafa; Shahmoradi, Leila; Habibi-koolaee, Mahdi; Jebraeily, Mohamad; Bouraghi, Hamid

    2016-01-01

    Introduction: Differential diagnosis of Erythmato-Squamous Diseases (ESD) is a major challenge in the field of dermatology. The ESD diseases are placed into six different classes. Data mining is the process for detection of hidden patterns; in the case of ESD, data mining helps us to predict the diseases. Different algorithms have been developed for this purpose. Objective: We aimed to use the Classification and Regression Tree (CART) to predict the differential diagnosis of ESD. Methods: We used the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology. For this purpose, the dermatology data set from the UCI machine learning repository was obtained. The Clementine 12.0 software from IBM was used for modelling. To evaluate the model, we calculated the accuracy, sensitivity and specificity of the model. Results: The proposed model had an accuracy of 94.84% (standard deviation: 24.42) for correct prediction of ESD. Conclusions: The results indicate that using this classifier could be useful, but it is strongly recommended that combinations of machine learning methods be explored, as these could be more useful for prediction of ESD. PMID:28077889
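
    Since CART is the classifier used above, a minimal sketch with scikit-learn's DecisionTreeClassifier (an implementation of CART) and 10-fold cross-validation is given below; the synthetic feature matrix stands in for the UCI dermatology data, and the tree depth is an illustrative choice rather than the study's Clementine configuration.

      # Sketch: a CART classifier with 10-fold cross-validated accuracy.
      # Synthetic stand-in data; the study used the UCI dermatology dataset.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      X = rng.integers(0, 4, size=(366, 34)).astype(float)   # 34 clinical/histological attributes
      y = rng.integers(0, 6, size=366)                       # six ESD classes (synthetic labels)

      cart = DecisionTreeClassifier(criterion="gini", max_depth=6, random_state=0)
      scores = cross_val_score(cart, X, y, cv=10)
      print("mean accuracy: %.3f (std %.3f)" % (scores.mean(), scores.std()))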

  11. Time-ordered product expansions for computational stochastic system biology.

    PubMed

    Mjolsness, Eric

    2013-06-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
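
    The abstract derives Gillespie's SSA rather than shipping code, but the direct-method loop it refers to is short; the sketch below simulates a toy birth-death network (0 -> X at rate k1, X -> 0 at rate k2*x) with illustrative rate constants.

      # Sketch of Gillespie's direct-method stochastic simulation algorithm
      # for a birth-death process.
      import numpy as np

      def ssa_birth_death(k1=10.0, k2=0.1, x0=0, t_end=100.0, seed=0):
          rng = np.random.default_rng(seed)
          t, x = 0.0, x0
          times, states = [t], [x]
          while t < t_end:
              a = np.array([k1, k2 * x])        # reaction propensities
              a0 = a.sum()
              if a0 == 0:
                  break
              t += rng.exponential(1.0 / a0)    # waiting time to the next reaction
              r = rng.choice(2, p=a / a0)       # which reaction fires
              x += 1 if r == 0 else -1
              times.append(t)
              states.append(x)
          return np.array(times), np.array(states)

      times, states = ssa_birth_death()
      print("mean copy number (should approach k1/k2 = 100):", states[len(states)//2:].mean())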

  12. A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs

    NASA Astrophysics Data System (ADS)

    Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng

    2011-12-01

    With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become new tasks for next generation access systems. In recent years, research on OFDMA Passive Optical Networks (PONs) has developed rapidly owing to their large capacity and flexibility in scheduling. Although much work has been done to solve hardware layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at an early stage of discussion. In order to support QoS service on an OFDMA PON system, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm is proposed in this paper. Per-stream QoS service is supported in this algorithm. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated service support.

  13. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Decision Aids for Naval Air ASW

    DTIC Science & Technology

    1980-03-15

    Record excerpts: Algorithm for Zone Optimization Investigation (AZOI), Naval Air Development Center; NADC work on developing sonobuoy patterns for air ASW search; DAISY (Decision Aiding Information System), Wharton; ...sion making behavior; artificial intelligence sequential pattern recognition algorithm for reconstructing the decision maker's utility functions; ...display presenting the uncertainty area of the target.

  15. Bayesian parameter estimation for nonlinear modelling of biological pathways.

    PubMed

    Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang

    2011-01-01

    The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linear parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
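
    A compact sketch of the overall scheme described above, with simplifications: an ODE with a Hill-type production term is discretized with a forward Euler step (the study uses Runge-Kutta), and a random-walk Metropolis sampler estimates the half-saturation constant K and Hill coefficient n from noisy time series at several input levels. Priors, noise level, input levels and true parameter values are illustrative assumptions.

      # Sketch: random-walk Metropolis estimation of Hill-equation parameters
      # (K, n) in dx/dt = Vmax * u^n / (K^n + u^n) - d * x.
      import numpy as np

      rng = np.random.default_rng(6)
      Vmax, d, dt, steps = 1.0, 0.2, 0.1, 200
      us = [0.5, 1.0, 2.0, 4.0]                            # several inputs for identifiability

      def simulate(K, n):
          trajs = []
          for u in us:
              x = 0.0
              for _ in range(steps):                       # forward Euler discretization
                  x += dt * (Vmax * u ** n / (K ** n + u ** n) - d * x)
                  trajs.append(x)
          return np.array(trajs)

      true_K, true_n, sigma = 1.0, 2.0, 0.05
      data = simulate(true_K, true_n) + sigma * rng.normal(size=steps * len(us))

      def log_post(theta):
          K, n = theta
          if K <= 0 or n <= 0 or K > 10 or n > 10:         # flat prior on (0, 10]^2
              return -np.inf
          resid = data - simulate(K, n)
          return -0.5 * np.sum(resid ** 2) / sigma ** 2

      theta, lp, samples = np.array([0.5, 1.0]), None, []
      lp = log_post(theta)
      for _ in range(5000):
          prop = theta + 0.05 * rng.normal(size=2)         # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:          # Metropolis acceptance
              theta, lp = prop, lp_prop
          samples.append(theta.copy())
      print("posterior mean (K, n):", np.mean(samples[1000:], axis=0))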

  16. Routing performance analysis and optimization within a massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  17. Application of a Genetic Algorithm and Multi Agent System to Explore Emergent Patterns of Social Rationality and a Distress-Based Model for Deceit in the Workplace

    DTIC Science & Technology

    2008-06-01


  18. Numerical Differentiation of Noisy, Nonsmooth Data

    DOE PAGES

    Chartrand, Rick

    2011-01-01

    We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
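
    A gradient-descent sketch of the total-variation-regularized differentiation idea: minimize 0.5*||A u - (f - f[0])||^2 + alpha * TV_eps(u), where u is the sought derivative, A is a cumulative-integration operator, and TV is smoothed for differentiability. The step size, alpha, eps and the plain gradient-descent solver are illustrative; the paper's solver is more sophisticated, and this loop converges slowly.

      # Sketch: TV-regularized differentiation of noisy |t - 1/2| by gradient
      # descent; the true derivative is -1 then +1 with a jump at t = 1/2.
      import numpy as np

      rng = np.random.default_rng(7)
      n, dx = 200, 1.0 / 200
      t = np.arange(n) * dx
      f = np.abs(t - 0.5) + 0.01 * rng.normal(size=n)        # noisy data

      def A(u):                                              # cumulative integration
          return dx * np.cumsum(u)

      def At(v):                                             # adjoint of A
          return dx * np.cumsum(v[::-1])[::-1]

      def D(u):                                              # forward difference
          return np.diff(u)

      def Dt(w):                                             # adjoint of D
          return np.concatenate([[-w[0]], -np.diff(w), [w[-1]]])

      alpha, eps, step = 1e-3, 1e-6, 0.1
      u = np.zeros(n)                                        # estimate of f'
      g = f - f[0]
      for _ in range(20000):
          grad_fit = At(A(u) - g)                            # gradient of the data term
          du = D(u)
          grad_tv = Dt(du / np.sqrt(du ** 2 + eps))          # gradient of smoothed TV
          u -= step * (grad_fit + alpha * grad_tv)
      print("estimated derivative at t=0.1 and t=0.9:", round(u[20], 2), round(u[-20], 2))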

  19. Comparison of Different Post-Processing Algorithms for Dynamic Susceptibility Contrast Perfusion Imaging of Cerebral Gliomas.

    PubMed

    Kudo, Kohsuke; Uwano, Ikuko; Hirai, Toshinori; Murakami, Ryuji; Nakamura, Hideo; Fujima, Noriyuki; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Sasaki, Makoto

    2017-04-10

    The purpose of the present study was to compare different software algorithms for processing DSC perfusion images of cerebral tumors with respect to i) the relative CBV (rCBV) calculated, ii) the cutoff value for discriminating low- and high-grade gliomas, and iii) the diagnostic performance for differentiating these tumors. Following approval by the institutional review board, informed consent was obtained from all patients. Thirty-five patients with primary glioma (grade II, 9; grade III, 8; and grade IV, 18 patients) were included. DSC perfusion imaging was performed with a 3-Tesla MRI scanner. CBV maps were generated using 11 different algorithms from four commercially available software packages and one academic program. The rCBV of each tumor relative to normal white matter was calculated by ROI measurements. Differences in rCBV value were compared between algorithms for each tumor grade. Receiver operating characteristic (ROC) analysis was conducted to evaluate the diagnostic performance of the different algorithms for differentiating between grades. Several algorithms showed significant differences in rCBV, especially for grade IV tumors. When differentiating between low- (II) and high-grade (III/IV) tumors, the area under the ROC curve (Az) was similar (range 0.85-0.87), and there were no significant differences in Az between any pair of algorithms. In contrast, the optimal cutoff values varied between algorithms (range 4.18-6.53). rCBV values of tumors and cutoff values for discriminating low- and high-grade gliomas differed between software packages, suggesting that optimal software-specific cutoff values should be used for the diagnosis of high-grade gliomas.

  20. Differentially private distributed logistic regression using private and public data

    PubMed Central

    2014-01-01

    Background Privacy protection is an important issue in medical informatics and differential privacy is a state-of-the-art framework for data privacy research. Differential privacy offers provable privacy against attackers who have auxiliary information, and can be applied to data mining models (for example, logistic regression). However, differentially private methods sometimes introduce too much noise and make outputs less useful. Given available public data in medical research (e.g. from patients who sign open-consent agreements), we can design algorithms that use both public and private data sets to decrease the amount of noise that is introduced. Methodology In this paper, we modify the update step in the Newton-Raphson method to propose a differentially private distributed logistic regression model based on both public and private data. Experiments and results We try our algorithm on three different data sets, and show its advantage over: (1) a logistic regression model based solely on public data, and (2) a differentially private distributed logistic regression model based on private data under various scenarios. Conclusion Logistic regression models built with our new algorithm based on both private and public datasets demonstrate better utility than models trained on private or public datasets alone, without sacrificing the rigorous privacy guarantee. PMID:25079786
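
    The paper perturbs the Newton-Raphson update itself; the sketch below only illustrates the general public-plus-private idea in a much cruder form: fit one model on public data, fit another on private data and add noise to its coefficients (output perturbation), then average the two. The noise scale shown is purely illustrative and does not constitute a calibrated differential-privacy guarantee.

      # Sketch: combine a public-data model with a noise-perturbed private-data
      # model. Illustrative only; NOT a calibrated privacy mechanism.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(8)

      def make_data(n):
          X = rng.normal(size=(n, 5))
          y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + rng.normal(size=n) > 0).astype(int)
          return X, y

      X_pub, y_pub = make_data(200)          # open-consent (public) records
      X_priv, y_priv = make_data(2000)       # private records
      X_test, y_test = make_data(5000)

      pub = LogisticRegression(C=1.0).fit(X_pub, y_pub)
      priv = LogisticRegression(C=1.0).fit(X_priv, y_priv)

      epsilon = 1.0                                           # nominal privacy budget
      noise_scale = 1.0 / (epsilon * len(y_priv) * 0.1)       # illustrative scale only
      noisy_coef = priv.coef_ + rng.laplace(scale=noise_scale, size=priv.coef_.shape)

      combined = LogisticRegression(C=1.0).fit(X_pub, y_pub)  # container for averaged weights
      combined.coef_ = 0.5 * (pub.coef_ + noisy_coef)         # simple average of the two models
      combined.intercept_ = 0.5 * (pub.intercept_ + priv.intercept_)  # intercept left un-noised

      for name, model in [("public only", pub), ("noisy private + public", combined)]:
          print(name, "accuracy:", model.score(X_test, y_test))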

  1. Adaptive cockroach swarm algorithm

    NASA Astrophysics Data System (ADS)

    Obagbuwa, Ibidun C.; Abidoye, Ademola P.

    2017-07-01

    An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity and create an adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.

  2. Gesture Based Control and EMG Decomposition

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.

    2005-01-01

    This paper presents two probabilistic developments for use with electromyograms (EMG). First described is a new neuro-electric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture-based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models, which are used to recognize the gestures as they are being performed in real-time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential Variable Component Analysis (dVCA) [1], [2], which was originally developed for electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data were obtained using a custom linear electrode array designed for this study.

  3. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in response to sleep loss. PMID:22959616

  4. Sample-Based Motion Planning in High-Dimensional and Differentially-Constrained Systems

    DTIC Science & Technology

    2010-02-01

    [Figure-list residue from the thesis front matter: Reachable Set; LittleDog Robot; Dog bounding up stairs.] The record excerpt describes a motion planning algorithm implemented on LittleDog, a quadruped robot; the motion planning algorithm successfully planned bounding trajectories over extremely...

  5. Comparison of Three Instructional Sequences for the Addition and Subtraction Algorithms. Technical Report 273.

    ERIC Educational Resources Information Center

    Wiles, Clyde A.

    The study's purpose was to investigate the differential effects on the achievement of second-grade students that could be attributed to three instructional sequences for the learning of the addition and subtraction algorithms. One sequence presented the addition algorithm first (AS), the second presented the subtraction algorithm first (SA), and…

  6. What to consider when pseudohypoparathyroidism is ruled out: iPPSD and differential diagnosis.

    PubMed

    Pereda, Arrate; Garin, Intza; Perez de Nanclares, Guiomar

    2018-03-02

    Pseudohypoparathyroidism (PHP) is a rare disease whose phenotypic features are rather difficult to identify in some cases. Thus, although these patients may present with the Albright's hereditary osteodystrophy (AHO) phenotype, which is characterized by small stature, obesity with a rounded face, subcutaneous ossifications, mental retardation and brachydactyly, its manifestations are somewhat variable. Indeed, some of them present with a complete phenotype, whereas others show only subtle manifestations. In addition, the features of the AHO phenotype are not specific to it and a similar phenotype is also commonly observed in other syndromes. Brachydactyly type E (BDE) is the most specific and objective feature of the AHO phenotype, and several genes have been associated with syndromic BDE in the past few years. Moreover, these syndromes have a skeletal and endocrinological phenotype that overlaps with AHO/PHP. In light of the above, we have developed an algorithm to aid in genetic testing of patients with clinical features of AHO but with no causative molecular defect at the GNAS locus. Starting with the feature of brachydactyly, this algorithm allows the differential diagnosis to be broadened and, with the addition of other clinical features, can guide genetic testing. We reviewed our series of patients (n = 23) with a clinical diagnosis of AHO and with brachydactyly type E or a similar pattern, who were negative for GNAS anomalies, and classified them according to the diagnostic algorithm to finally propose and analyse the most probable gene(s) in each case. A review of the clinical data for our series of patients, and subsequent analysis of the candidate gene(s), allowed detection of the underlying molecular defect in 12 out of 23 patients: five patients harboured a mutation in PRKAR1A, one in PDE4D, four in TRPS1 and two in PTHLH. This study confirmed that the screening of other genes implicated in syndromes with BDE and AHO or a similar phenotype is very helpful for establishing a correct genetic diagnosis for those patients who have been misdiagnosed with "AHO-like phenotype" with an unknown genetic cause, and also for better describing the characteristic and differential features of these less common syndromes.

  7. Optimal Scaling of Digital Transcriptomes

    PubMed Central

    Glusman, Gustavo; Caballero, Juan; Robinson, Max; Kutlu, Burak; Hood, Leroy

    2013-01-01

    Deep sequencing of transcriptomes has become an indispensable tool for biology, enabling expression levels for thousands of genes to be compared across multiple samples. Since transcript counts scale with sequencing depth, counts from different samples must be normalized to a common scale prior to comparison. We analyzed fifteen existing and novel algorithms for normalizing transcript counts, and evaluated the effectiveness of the resulting normalizations. For this purpose we defined two novel and mutually independent metrics: (1) the number of “uniform” genes (genes whose normalized expression levels have a sufficiently low coefficient of variation), and (2) low Spearman correlation between normalized expression profiles of gene pairs. We also define four novel algorithms, one of which explicitly maximizes the number of uniform genes, and compared the performance of all fifteen algorithms. The two most commonly used methods (scaling to a fixed total value, or equalizing the expression of certain ‘housekeeping’ genes) yielded particularly poor results, surpassed even by normalization based on randomly selected gene sets. Conversely, seven of the algorithms approached what appears to be optimal normalization. Three of these algorithms rely on the identification of “ubiquitous” genes: genes expressed in all the samples studied, but never at very high or very low levels. We demonstrate that these include a “core” of genes expressed in many tissues in a mutually consistent pattern, which is suitable for use as an internal normalization guide. The new methods yield robustly normalized expression values, which is a prerequisite for the identification of differentially expressed and tissue-specific genes as potential biomarkers. PMID:24223126

  8. Surface modification of polydimethylsiloxane (PDMS) induced proliferation and neural-like cells differentiation of umbilical cord blood-derived mesenchymal stem cells.

    PubMed

    Kim, Sun-Jung; Lee, Jae Kyoo; Kim, Jin Won; Jung, Ji-Won; Seo, Kwangwon; Park, Sang-Bum; Roh, Kyung-Hwan; Lee, Sae-Rom; Hong, Yun Hwa; Kim, Sang Jeong; Lee, Yong-Soon; Kim, Sung June; Kang, Kyung-Sun

    2008-08-01

    Stem cell-based therapy has recently emerged for use in novel therapeutics for incurable diseases. For successful recovery from neurologic diseases, the most pivotal factor is differentiation and directed neuronal cell growth. In this study, we fabricated three different widths of a micro-pattern on polydimethylsiloxane (PDMS; 1, 2, and 4 microm). Surface modification of the PDMS was investigated for its capacity to manage proliferation and differentiation of neural-like cells from umbilical cord blood-derived mesenchymal stem cells (UCB-MSCs). Among the micro-patterned PDMS fabrications, the 1 microm-patterned PDMS significantly increased cell proliferation and most of the cells differentiated into neuronal cells. In addition, the 1 microm-patterned PDMS induced an increase in cytosolic calcium, while the differentiated cells on the flat and 4 microm-patterned PDMS had no response. The 1 microm-patterned PDMS also directed cell orientation to within 10 degrees of alignment. Taken together, micro-patterned PDMS supported UCB-MSC proliferation and induced neural-like cell differentiation. Our data suggest that micro-patterned PDMS might be a guiding method for stem cell therapy that would improve its therapeutic action in neurological diseases.

  9. Autoregressive statistical pattern recognition algorithms for damage detection in civil structures

    NASA Astrophysics Data System (ADS)

    Yao, Ruigen; Pakzad, Shamim N.

    2012-08-01

    Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
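
    The following sketch illustrates the general autoregressive-feature-plus-control-chart idea reviewed above, under my own simplifying assumptions (a least-squares AR fit and a Shewhart-style limit); it is not the authors' feature extraction or resampling-based thresholding.

```python
# Minimal damage-detection sketch: fit an AR model to baseline acceleration
# records, treat the AR coefficients of new records as damage-sensitive
# features, and flag records whose coefficients leave a 3-sigma control band.
import numpy as np

def ar_coefficients(x, order=10):
    """Least-squares fit of an AR(order) model to a 1-D signal."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def control_limits(baseline_records, order=10):
    """Mean and standard deviation of the AR coefficients over baseline data."""
    feats = np.array([ar_coefficients(r, order) for r in baseline_records])
    return feats.mean(axis=0), feats.std(axis=0)

def is_damaged(record, mean, std, order=10, n_sigma=3.0):
    """Shewhart-style check: any coefficient outside mean +/- n_sigma*std."""
    c = ar_coefficients(record, order)
    return bool(np.any(np.abs(c - mean) > n_sigma * std))
```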

  10. Differentiating Obstructive from Central and Complex Sleep Apnea Using an Automated Electrocardiogram-Based Method

    PubMed Central

    Thomas, Robert Joseph; Mietus, Joseph E.; Peng, Chung-Kang; Gilmartin, Geoffrey; Daly, Robert W.; Goldberger, Ary L.; Gottlieb, Daniel J.

    2007-01-01

    Study Objectives: Complex sleep apnea is defined as sleep disordered breathing secondary to simultaneous upper airway obstruction and respiratory control dysfunction. The objective of this study was to assess the utility of an electrocardiogram (ECG)-based cardiopulmonary coupling technique to distinguish obstructive from central or complex sleep apnea. Design: Analysis of archived polysomnographic datasets. Setting: A laboratory for computational signal analysis. Interventions: None. Measurements and Results: The PhysioNet Sleep Apnea Database, consisting of 70 polysomnograms including single-lead ECG signals of approximately 8 hours duration, was used to train an ECG-based measure of autonomic and respiratory interactions (cardiopulmonary coupling) to detect periods of apnea and hypopnea, based on the presence of elevated low-frequency coupling (e-LFC). In the PhysioNet BIDMC Congestive Heart Failure Database (ECGs of 15 subjects), a pattern of “narrow spectral band” e-LFC was especially common. The algorithm was then applied to the Sleep Heart Health Study–I dataset, to select the 15 records with the highest amounts of broad and narrow spectral band e-LFC. The latter spectral characteristic seemed to detect not only periods of central apnea, but also obstructive hypopneas with a periodic breathing pattern. Applying the algorithm to 77 sleep laboratory split-night studies showed that the presence of narrow band e-LFC predicted an increased sensitivity to induction of central apneas by positive airway pressure. Conclusions: ECG-based spectral analysis allows automated, operator-independent characterization of probable interactions between respiratory dyscontrol and upper airway anatomical obstruction. The clinical utility of spectrographic phenotyping, especially in predicting failure of positive airway pressure therapy, remains to be more thoroughly tested. Citation: Thomas RJ; Mietus JE; Peng CK; Gilmartin G; Daly RW; Goldberger AL; Gottlieb DJ. Differentiating obstructive from central and complex sleep apnea using an automated electrocardiogram-based method. SLEEP 2007;30(12):1756-1769. PMID:18246985

  11. From differential to difference equations for first order ODEs

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    When constructing an algorithm for the numerical integration of a differential equation, one should first convert the known ordinary differential equation (ODE) into an ordinary difference equation. Given this difference equation, one can develop an appropriate numerical algorithm. This technical note describes the derivation of two such ordinary difference equations applicable to a first order ODE. The implicit ordinary difference equation has the same asymptotic expansion as the ODE itself, whereas the explicit ordinary difference equation has an asymptotic that is similar in structure but different in value when compared with that of the ODE.
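
    A minimal illustration of the distinction drawn above, using the linear test equation y' = λy rather than the report's general first-order ODE: the exact difference equation reproduces the ODE solution on the grid, while an explicit difference equation of the same structure (forward Euler) only approximates it.

```python
# Toy example (my own, not the report's derivation) for y' = lam * y.
# Exact difference equation:   y[n+1] = exp(lam*h) * y[n]
# Explicit (Euler) difference: y[n+1] = (1 + lam*h) * y[n]
import numpy as np

def exact_difference(y0, lam, h, n_steps):
    y = [y0]
    for _ in range(n_steps):
        y.append(np.exp(lam * h) * y[-1])     # matches the ODE exactly on the grid
    return np.array(y)

def euler_difference(y0, lam, h, n_steps):
    y = [y0]
    for _ in range(n_steps):
        y.append((1.0 + lam * h) * y[-1])     # same structure, different value
    return np.array(y)

if __name__ == "__main__":
    lam, h, n = -2.0, 0.1, 50
    t = np.arange(n + 1) * h
    exact_sol = np.exp(lam * t)
    print("max error, exact difference :",
          np.abs(exact_difference(1.0, lam, h, n) - exact_sol).max())
    print("max error, forward Euler    :",
          np.abs(euler_difference(1.0, lam, h, n) - exact_sol).max())
```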

  12. Fast fringe pattern phase demodulation using FIR Hilbert transformers

    NASA Astrophysics Data System (ADS)

    Gdeisat, Munther; Burton, David; Lilley, Francis; Arevalillo-Herráez, Miguel

    2016-01-01

    This paper suggests the use of FIR Hilbert transformers to extract the phase of fringe patterns. This method is computationally faster than any known spatial method that produces wrapped phase maps. Also, the algorithm does not require any parameters to be adjusted which are dependent upon the specific fringe pattern that is being processed, or upon the particular setup of the optical fringe projection system that is being used. It is therefore particularly suitable for full algorithmic automation. The accuracy and validity of the suggested method has been tested using both computer-generated and real fringe patterns. This novel algorithm has been proposed for its advantages in terms of computational processing speed as it is the fastest available method to extract the wrapped phase information from a fringe pattern.
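
    The sketch below shows the general approach for a one-dimensional fringe profile: design an FIR Hilbert transformer with scipy.signal.remez, form the quadrature signal, and take the arctangent to obtain the wrapped phase. The filter length, band edges and synthetic fringe are arbitrary assumptions and this is not the authors' algorithm.

```python
# Generic FIR-Hilbert phase demodulation sketch for a 1-D fringe profile.
import numpy as np
from scipy.signal import remez

def wrapped_phase(fringe, numtaps=101):
    """Return the wrapped phase of a 1-D fringe pattern."""
    ac = fringe - fringe.mean()                       # remove the background (DC) term
    h = remez(numtaps, [0.03, 0.47], [1.0], type="hilbert", fs=1.0)
    quadrature = np.convolve(ac, h, mode="same")      # ~90-degree phase-shifted copy
    return np.arctan2(quadrature, ac)                 # wrapped phase in (-pi, pi]

if __name__ == "__main__":
    x = np.arange(2048)
    true_phase = 2 * np.pi * 0.1 * x + 1.5 * np.sin(2 * np.pi * x / 700.0)
    fringe = 10.0 + 5.0 * np.cos(true_phase)
    phi = wrapped_phase(fringe)
    # The transformer's sign convention can flip the recovered phase, so check
    # both signs and ignore the filter's edge effects.
    err = min(np.abs(np.angle(np.exp(1j * (s * phi - true_phase))))[200:-200].max()
              for s in (1, -1))
    print("max wrapped-phase error (rad): %.4f" % err)
```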

  13. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    PubMed

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  14. Multicamera polarized vision for the orientation with the skylight polarization patterns

    NASA Astrophysics Data System (ADS)

    Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Zhang, Lilian; Wang, Yujie

    2018-04-01

    A robust orientation algorithm based on the skylight polarization patterns for the urban ground vehicle is presented. We present the orientation model with the Rayleigh scattering and propose the robust orientation algorithm with the total least square. The proposed algorithm can utilize the whole sky area polarization patterns for realizing a more robust and accurate orientation. To enhance the algorithm's robustness in the urban environment, we develop a real-time method that uses the gradient of the degree of the polarization to remove the obstacles in the polarization image. In addition, our algorithm can solve the ambiguity problem of the polarized orientation without any other sensors. We also conduct a static rotating and a dynamic car experiments to evaluate the algorithm. The results demonstrate that our proposed algorithm can provide an accurate orientation estimation for the ground vehicle in the open and urban environments-the root-mean-square error in the static experiment is 0.28 deg and in the dynamic experiment is 0.81 deg. Finally, we discuss insights gained with respect to further work in optics and robotics.

  15. Probabilistic Common Spatial Patterns for Multichannel EEG Analysis

    PubMed Central

    Chen, Zhe; Gao, Xiaorong; Li, Yuanqing; Brown, Emery N.; Gao, Shangkai

    2015-01-01

    Common spatial patterns (CSP) is a well-known spatial filtering algorithm for multichannel electroencephalogram (EEG) analysis. In this paper, we cast the CSP algorithm in a probabilistic modeling setting. Specifically, probabilistic CSP (P-CSP) is proposed as a generic EEG spatio-temporal modeling framework that subsumes the CSP and regularized CSP algorithms. The proposed framework enables us to resolve the overfitting issue of CSP in a principled manner. We derive statistical inference algorithms that can alleviate the issue of local optima. In particular, an efficient algorithm based on eigendecomposition is developed for maximum a posteriori (MAP) estimation in the case of isotropic noise. For more general cases, a variational algorithm is developed for group-wise sparse Bayesian learning for the P-CSP model and for automatically determining the model size. The two proposed algorithms are validated on a simulated data set. Their practical efficacy is also demonstrated by successful applications to single-trial classifications of three motor imagery EEG data sets and by the spatio-temporal pattern analysis of one EEG data set recorded in a Stroop color naming task. PMID:26005228
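
    For context, the classical (non-probabilistic) CSP algorithm that the paper generalizes can be written as a generalized eigenvalue problem on class covariance matrices; the textbook sketch below is included for orientation only and does not implement the proposed P-CSP model.

```python
# Classical two-class CSP: spatial filters from a generalized eigenproblem,
# followed by the usual log-variance features.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters of shape (n_channels, n_filters) taken from both
    ends of the generalized eigenvalue spectrum."""
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))       # per-trial normalization
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)             # solve C_a w = lambda (C_a + C_b) w
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, pick]

def csp_features(trials, filters):
    """Log-variance of the spatially filtered trials: the usual CSP feature."""
    feats = []
    for x in trials:
        z = filters.T @ x
        var = z.var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)
```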

  16. A parallel time integrator for noisy nonlinear oscillatory systems

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Sarkar, Abhijit

    2018-06-01

    In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy non-linear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration techniques for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
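
    The sketch below emulates the coarse/fine parareal structure serially for a deterministic Duffing oscillator; it is an illustration of the method's skeleton only, since the paper's stochastic version additionally requires the coarse and fine propagators to share the same Wiener-process increments and distributes the fine solves over MPI.

```python
# Serial emulation of the parareal iteration:
#   U_{k+1}[n+1] = G(U_{k+1}[n]) + F(U_k[n]) - G(U_k[n])
import numpy as np

def duffing(t, y, delta=0.1, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    x, v = y
    return np.array([v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)])

def euler(f, t0, y0, dt, n):
    y, t = np.array(y0, dtype=float), t0
    for _ in range(n):
        y = y + dt * f(t, y)
        t += dt
    return y

def coarse(t0, y0, dT):                # cheap propagator: 5 Euler steps per window
    return euler(duffing, t0, y0, dT / 5, 5)

def fine(t0, y0, dT, substeps=200):    # expensive propagator: 200 Euler steps
    return euler(duffing, t0, y0, dT / substeps, substeps)

def parareal(y0, t_end, n_windows=20, n_iter=5):
    dT = t_end / n_windows
    ts = np.arange(n_windows + 1) * dT
    U = [np.array(y0, dtype=float)]
    for n in range(n_windows):                         # initial coarse sweep
        U.append(coarse(ts[n], U[n], dT))
    for _ in range(n_iter):
        F = [fine(ts[n], U[n], dT) for n in range(n_windows)]      # parallelizable
        G_old = [coarse(ts[n], U[n], dT) for n in range(n_windows)]
        U_new = [U[0]]
        for n in range(n_windows):
            U_new.append(coarse(ts[n], U_new[n], dT) + F[n] - G_old[n])
        U = U_new
    return ts, np.array(U)

if __name__ == "__main__":
    ts, U = parareal(y0=[1.0, 0.0], t_end=10.0)
    ref = [np.array([1.0, 0.0])]                       # sequential fine reference
    for n in range(len(ts) - 1):
        ref.append(fine(ts[n], ref[n], ts[n + 1] - ts[n]))
    print("max |parareal - serial fine| :", np.abs(U - np.array(ref)).max())
```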

  17. A Novel Discrete Differential Evolution Algorithm for the Vehicle Routing Problem in B2C E-Commerce

    NASA Astrophysics Data System (ADS)

    Xia, Chao; Sheng, Ying; Jiang, Zhong-Zhong; Tan, Chunqiao; Huang, Min; He, Yuanjian

    2015-12-01

    In this paper, a novel discrete differential evolution (DDE) algorithm is proposed to solve the vehicle routing problems (VRP) in B2C e-commerce, in which VRP is modeled by the incomplete graph based on the actual urban road system. First, a variant of classical VRP is described and a mathematical programming model for the variant is given. Second, the DDE is presented, where individuals are represented as the sequential encoding scheme, and a novel reparation operator is employed to repair the infeasible solutions. Furthermore, a FLOYD operator for dealing with the shortest route is embedded in the proposed DDE. Finally, an extensive computational study is carried out in comparison with the predatory search algorithm and genetic algorithm, and the results show that the proposed DDE is an effective algorithm for VRP in B2C e-commerce.

  18. Evolutionary pattern search algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.

  19. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering, are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
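
    As a small serial illustration of the building blocks mentioned above, the sketch below assembles the sparse system from a standard 5-point finite-difference discretization of the Poisson equation and solves it with a Jacobi-preconditioned conjugate gradient method using SciPy; the parallel array-architecture aspects of the report are not represented.

```python
# Assemble a 5-point Laplacian and solve it with Jacobi-preconditioned CG.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def poisson_2d(n):
    """Standard 5-point Laplacian on an n-by-n interior grid (row-major)."""
    main = 4.0 * np.ones(n * n)
    side = -1.0 * np.ones(n * n - 1)
    side[np.arange(1, n * n) % n == 0] = 0.0      # no coupling across grid rows
    updown = -1.0 * np.ones(n * n - n)
    return sp.diags([main, side, side, updown, updown],
                    [0, 1, -1, n, -n], format="csr")

if __name__ == "__main__":
    n = 50
    A = poisson_2d(n)
    b = np.ones(n * n)
    inv_diag = 1.0 / A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda x: inv_diag * x, dtype=float)  # Jacobi
    x, info = cg(A, b, M=M)
    print("converged" if info == 0 else f"cg returned info={info}",
          "| residual norm:", np.linalg.norm(b - A @ x))
```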

  20. MANUSCRIPT IN PRESS: DEMENTIA & GERIATRIC COGNITIVE DISORDERS

    PubMed Central

    O’Bryant, Sid E.; Xiao, Guanghua; Barber, Robert; Cullum, C. Munro; Weiner, Myron; Hall, James; Edwards, Melissa; Grammas, Paula; Wilhelmsen, Kirk; Doody, Rachelle; Diaz-Arrastia, Ramon

    2015-01-01

    Background Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance. Methods A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 AD cases and 198 controls) from the Texas Alzheimer’s Research and Care Consortium. Results The biomarker risk scores were significant predictors (p<0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores + demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms. Conclusions Our findings provide proof-of-concept for a novel area of scientific discovery, which we term “molecular neuropsychology.” PMID:24107792
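
    A hedged sketch of the two-stage idea described above: train a random forest on the biomarker panel, use its predictions as a single "biomarker risk score", and enter that score into a linear regression for a neuropsychological test score. The synthetic data and scikit-learn estimators below are placeholders, not the TARCC data or the authors' exact modelling choices.

```python
# Two-stage "biomarker risk score" illustration on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 108))                     # 108 serum proteins (synthetic)
test_score = X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=400)  # a neuropsych score

X_train, X_test, y_train, y_test = train_test_split(X, test_score, random_state=0)

# Stage 1: learn a biomarker risk score from the protein panel.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
risk_train = rf.predict(X_train).reshape(-1, 1)
risk_test = rf.predict(X_test).reshape(-1, 1)

# Stage 2: regress the neuropsychological score on the risk score
# (demographic covariates would be added as extra columns here).
lm = LinearRegression().fit(risk_train, y_train)
print("R^2 of risk-score-only model on held-out data:", lm.score(risk_test, y_test))
```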

  1. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer

    PubMed Central

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms in general tend to become trapped in local optima and lead to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computations are becoming popular due to their robust nature in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure with the differential search (DS) algorithm functionally integrated with particle swarm optimization (PSO). To surmount the local trapping of the search procedure, a new population initialization scheme is proposed using a Logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses are performed on 7 publicly available benchmark datasets. Subsequently, experiments were conducted on a practical application case for wind speed prediction to expound the superiority of the proposed RBF training algorithm in terms of prediction accuracy. PMID:29768463

  2. Training radial basis function networks for wind speed prediction using PSO enhanced differential search optimizer.

    PubMed

    Rani R, Hannah Jessie; Victoire T, Aruldoss Albert

    2018-01-01

    This paper presents an integrated hybrid optimization algorithm for training the radial basis function neural network (RBF NN). Training of neural networks is still a challenging exercise in the machine learning domain. Traditional training algorithms in general tend to become trapped in local optima and lead to premature convergence, which makes them ineffective when applied to datasets with diverse features. Training algorithms based on evolutionary computations are becoming popular due to their robust nature in overcoming the drawbacks of the traditional algorithms. Accordingly, this paper proposes a hybrid training procedure with the differential search (DS) algorithm functionally integrated with particle swarm optimization (PSO). To surmount the local trapping of the search procedure, a new population initialization scheme is proposed using a Logistic chaotic sequence, which enhances the population diversity and aids the search capability. To demonstrate the effectiveness of the proposed RBF hybrid training algorithm, experimental analyses are performed on 7 publicly available benchmark datasets. Subsequently, experiments were conducted on a practical application case for wind speed prediction to expound the superiority of the proposed RBF training algorithm in terms of prediction accuracy.

  3. A discrete artificial bee colony algorithm incorporating differential evolution for the flow-shop scheduling problem with blocking

    NASA Astrophysics Data System (ADS)

    Han, Yu-Yan; Gong, Dunwei; Sun, Xiaoyan

    2015-07-01

    A flow-shop scheduling problem with blocking has important applications in a variety of industrial systems but is underrepresented in the research literature. In this study, a novel discrete artificial bee colony (ABC) algorithm is presented to solve the above scheduling problem with a makespan criterion by incorporating the ABC with differential evolution (DE). The proposed algorithm (DE-ABC) contains three key operators. One is related to the employed bee operator (i.e. adopting mutation and crossover operators of discrete DE to generate solutions with good quality); the second is concerned with the onlooker bee operator, which modifies the selected solutions using insert or swap operators based on the self-adaptive strategy; and the last is for the local search, that is, the insert-neighbourhood-based local search with a small probability is adopted to improve the algorithm's capability in exploitation. The performance of the proposed DE-ABC algorithm is empirically evaluated by applying it to well-known benchmark problems. The experimental results show that the proposed algorithm is superior to the compared algorithms in minimizing the makespan criterion.

  4. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.

  5. Ubiquitousness of link-density and link-pattern communities in real-world networks

    NASA Astrophysics Data System (ADS)

    Šubelj, L.; Bajec, M.

    2012-01-01

    Community structure appears to be an intrinsic property of many complex real-world networks. However, recent work shows that real-world networks reveal even more sophisticated modules than classical cohesive (link-density) communities. In particular, networks can also be naturally partitioned according to similar patterns of connectedness among the nodes, revealing link-pattern communities. We here propose a propagation-based algorithm that can extract both link-density and link-pattern communities, without any prior knowledge of the true structure. The algorithm was first validated on different classes of synthetic benchmark networks with community structure, and also on random networks. We have further applied the algorithm to different social, information, technological and biological networks, where it indeed reveals meaningful (composites of) link-density and link-pattern communities. The results thus seem to imply that, similarly to their link-density counterparts, link-pattern communities appear ubiquitous in nature and design.
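
    For orientation, the sketch below implements plain label propagation, the generic propagation-based scheme this family of algorithms builds on; it recovers only classical link-density communities and contains none of the paper's link-pattern machinery.

```python
# Basic label-propagation community detection on an adjacency-list graph.
import random
from collections import Counter

def label_propagation(adj, max_rounds=100, seed=0):
    """adj: dict mapping node -> iterable of neighbours. Returns node -> label."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}                  # every node starts in its own community
    nodes = list(adj)
    for _ in range(max_rounds):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            choice = rng.choice([l for l, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v], changed = choice, True
        if not changed:                            # stable labelling reached
            break
    return labels

if __name__ == "__main__":
    # Two triangles joined by a single edge.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    print(label_propagation(adj))
```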

  6. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data

    PubMed Central

    2010-01-01

    Background In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Conclusions Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We end up with a discussion on our method and on its potential improvements. PMID:20205909
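
    The sketch below illustrates the automaton-embedding idea in its simplest form: the exact distribution of the count of one (possibly self-overlapping) pattern in a single i.i.d. 0/1 sequence, computed by dynamic programming over the states of a KMP automaton. The paper's algorithms handle Markov sources, sets of sequences and high-complexity patterns, none of which is attempted here.

```python
# Exact distribution of overlapping pattern counts in an i.i.d. binary sequence.
from collections import defaultdict

def count_distribution(pattern, n, p1=0.5):
    """P(number of occurrences of `pattern` = k) in an i.i.d. Bernoulli(p1)
    0/1 sequence of length n, via DP over KMP automaton states."""
    m = len(pattern)
    fail = [0] * (m + 1)                       # KMP failure function
    k = 0
    for i in range(1, m):
        while k and pattern[i] != pattern[k]:
            k = fail[k]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i + 1] = k
    probs = {"0": 1 - p1, "1": p1}
    dist = {(0, 0): 1.0}                       # (automaton state, count) -> probability
    for _ in range(n):
        new = defaultdict(float)
        for (s, c), pr in dist.items():
            for ch, pch in probs.items():
                t = s
                while t and ch != pattern[t]:  # follow failure links
                    t = fail[t]
                if ch == pattern[t]:
                    t += 1
                if t == m:                     # an occurrence ends here
                    new[(fail[m], c + 1)] += pr * pch
                else:
                    new[(t, c)] += pr * pch
        dist = new
    counts = defaultdict(float)
    for (s, c), pr in dist.items():
        counts[c] += pr
    return dict(sorted(counts.items()))

if __name__ == "__main__":
    print(count_distribution("101", n=10))
```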

  7. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data.

    PubMed

    Nuel, Gregory; Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude

    2010-01-26

    In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We end up with a discussion on our method and on its potential improvements.

  8. Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns

    PubMed Central

    2013-01-01

    Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to factor 45. Our best new index-based algorithm achieves a speedup of factor 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide with RaligNAtor a robust and well documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810

  9. An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.

    PubMed

    Yang, Yifei; Tan, Minjia; Dai, Yuewei

    2017-01-01

    In practice, fault monitoring signals from ship power equipment usually provide few samples, and the data features are non-linear. This paper adopts the least squares support vector machine (LSSVM) method to deal with the problem of fault pattern identification in the case of small sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the searching step length, which can effectively solve the problems of slow searching speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.

  10. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
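
    For reference, the attitude-only core of the problem above (Wahba's loss function) can be minimized by Davenport's q-method, an eigenvalue formulation closely related to QUEST; the sketch below shows only that step and omits the paper's gyro-bias estimation and sequential processing.

```python
# Davenport's q-method: the optimal quaternion is the eigenvector of the
# 4x4 K matrix associated with its largest eigenvalue.
import numpy as np

def davenport_q_method(body_vecs, ref_vecs, weights):
    """body_vecs, ref_vecs: (n, 3) unit vectors; weights: (n,) positive.
    Returns the optimal quaternion [x, y, z, w]."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, np.argmax(vals)]          # eigenvector with the largest eigenvalue
    return q / np.linalg.norm(q)
```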

  11. Solar Radiation-Associated Adaptive SNP Genetic Differentiation in Wild Emmer Wheat, Triticum dicoccoides.

    PubMed

    Ren, Jing; Chen, Liang; Jin, Xiaoli; Zhang, Miaomiao; You, Frank M; Wang, Jirui; Frenkel, Vladimir; Yin, Xuegui; Nevo, Eviatar; Sun, Dongfa; Luo, Ming-Cheng; Peng, Junhua

    2017-01-01

    Whole-genome scans with large numbers of genetic markers provide the opportunity to investigate local adaptation in natural populations and identify candidate genes under positive selection. In the present study, adaptive genetic differentiation associated with solar radiation was investigated using 695 polymorphic SNP markers in wild emmer wheat originating from a micro-site at Yehudiyya, Israel. The test involved two solar radiation niches: (1) sun, in-between trees; and (2) shade, under tree canopy, separated by a distance of 2-4 m. Analysis of molecular variance showed a small (0.53%) but significant portion of overall variation between the sun and shade micro-niches, indicating a non-ignorable genetic differentiation between sun and shade habitats. Fifty SNP markers showed medium (0.05 ≤ F_ST ≤ 0.15) or high (F_ST > 0.15) genetic differentiation. A total of 21 outlier loci under positive selection were identified by using four different F_ST-outlier testing algorithms. The markers and genome locations under positive selection are consistent with the known patterns of selection. These results suggested that genetic differentiation between sun and shade habitats is substantial, radiation-associated, and therefore ecologically determined. Hence, the results of this study reflected effects of natural selection through solar radiation on EST-related SNP genetic diversity, resulting presumably in different adaptive complexes at a micro-scale divergence. The present work highlights the evolutionary theory and application significance of solar radiation-driven natural selection in wheat improvement.

  12. Fringe pattern information retrieval using wavelets

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Patimo, Caterina; Manicone, Pasquale D.; Lamberti, Luciano

    2005-08-01

    Two-dimensional phase modulation is currently the basic model used in the interpretation of fringe patterns that contain displacement information: moire, holographic interferometry, and speckle techniques. Another way to look at these two-dimensional signals is to consider them as frequency modulated signals. This alternative interpretation has practical implications similar to those that exist in radio engineering for handling frequency modulated signals. Utilizing this model it is possible to obtain frequency information by using the energy approach introduced by Ville in 1944. A natural complementary tool of this process is the wavelet methodology. The use of wavelets makes it possible to obtain the local values of the frequency in a one- or two-dimensional domain without the need for previous phase retrieval and differentiation. Furthermore, from the properties of wavelets it is also possible to obtain at the same time the phase of the signal, with the advantage of better noise removal capabilities and the possibility of developing simpler algorithms for phase unwrapping due to the availability of the derivative of the phase.

  13. Differential Cloud Particles Evolution Algorithm Based on Data-Driven Mechanism for Applications of ANN

    PubMed Central

    2017-01-01

    Computational scientists have designed many useful algorithms by exploring a biological process or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by the change of matter state, we proposed a novel optimization algorithm called differential cloud particles evolution algorithm based on data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, fluid stage and solid stage. The algorithm carries out the strategy of integrating global exploration with local exploitation in fluid stage. Furthermore, local exploitation is carried out mainly in solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters. Therefore, the data-driven mechanism is designed for obtaining better control parameters to ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural network are examined. The experimental results show that CPDD is competitive with respect to other eight state-of-the-art intelligent optimization algorithms. PMID:28761438

  14. NARMAX model identification of a palm oil biodiesel engine using multi-objective optimization differential evolution

    NASA Astrophysics Data System (ADS)

    Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin

    2017-09-01

    This paper presents the black-box modelling of a palm oil biodiesel engine (POB) using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm for optimization: minimizing the number of terms of a model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied in order to validate the possible models obtained from the MOODE algorithm and to select an optimal model.

  15. Using a Search Engine-Based Mutually Reinforcing Approach to Assess the Semantic Relatedness of Biomedical Terms

    PubMed Central

    Hsu, Yi-Yu; Chen, Hung-Yu; Kao, Hung-Yu

    2013-01-01

    Background Determining the semantic relatedness of two biomedical terms is an important task for many text-mining applications in the biomedical field. Previous studies, such as those using ontology-based and corpus-based approaches, measured semantic relatedness by using information from the structure of biomedical literature, but these methods are limited by the small size of training resources. To increase the size of training datasets, the outputs of search engines have been used extensively to analyze the lexical patterns of biomedical terms. Methodology/Principal Findings In this work, we propose the Mutually Reinforcing Lexical Pattern Ranking (ReLPR) algorithm for learning and exploring the lexical patterns of synonym pairs in biomedical text. ReLPR employs lexical patterns and their pattern containers to assess the semantic relatedness of biomedical terms. By combining sentence structures and the linking activities between containers and lexical patterns, our algorithm can explore the correlation between two biomedical terms. Conclusions/Significance The average correlation coefficient of the ReLPR algorithm was 0.82 for various datasets. The results of the ReLPR algorithm were significantly superior to those of previous methods. PMID:24348899

  16. The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations

    PubMed Central

    Mitchell, William F.

    1998-01-01

    Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355

  17. The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations.

    PubMed

    Mitchell, William F

    1998-01-01

    Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given.

  18. A hierarchical graph neuron scheme for real-time pattern recognition.

    PubMed

    Nasution, B B; Khan, A I

    2008-02-01

    The hierarchical graph neuron (HGN) implements a single cycle memorization and recall operation through a novel algorithmic design. The HGN is an improvement on the already published original graph neuron (GN) algorithm. In this improved approach, it recognizes incomplete/noisy patterns. It also resolves the crosstalk problem, which is identified in the previous publications, within closely matched patterns. To accomplish this, the HGN links multiple GN networks for filtering noise and crosstalk out of pattern data inputs. Intrinsically, the HGN is a lightweight in-network processing algorithm which does not require expensive floating point computations; hence, it is very suitable for real-time applications and tiny devices such as the wireless sensor networks. This paper describes that the HGN's pattern matching capability and the small response time remain insensitive to the increases in the number of stored patterns. Moreover, the HGN does not require definition of rules or setting of thresholds by the operator to achieve the desired results nor does it require heuristics entailing iterative operations for memorization and recall of patterns.

  19. Pattern recognition and genetic algorithms for discrimination of orange juices and reduction of significant components from headspace solid-phase microextraction.

    PubMed

    Rinaldi, Maurizio; Gindro, Roberto; Barbeni, Massimo; Allegrone, Gianna

    2009-01-01

    Orange (Citrus sinensis L.) juice comprises a complex mixture of volatile components that are difficult to identify and quantify. Classification and discrimination of the varieties on the basis of the volatile composition could help to guarantee the quality of a juice and to detect possible adulteration of the product. To provide information on the amounts of volatile constituents in fresh-squeezed juices from four orange cultivars and to establish suitable discrimination rules to differentiate orange juices using new chemometric approaches. Fresh juices of four orange cultivars were analysed by headspace solid-phase microextraction (HS-SPME) coupled with GC-MS. Principal component analysis, linear discriminant analysis and heuristic methods, such as neural networks, allowed clustering of the data from HS-SPME analysis while genetic algorithms addressed the problem of data reduction. To check the quality of the results the chemometric techniques were also evaluated on a sample. Thirty volatile compounds were identified by HS-SPME and GC-MS analyses and their relative amounts calculated. Differences in composition of orange juice volatile components were observed. The chosen orange cultivars could be discriminated using neural networks, genetic relocation algorithms and linear discriminant analysis. Genetic algorithms applied to the data were also able to detect the most significant compounds. SPME is a useful technique to investigate orange juice volatile composition and a flexible chemometric approach is able to correctly separate the juices.

  20. [Algorithm of toxigenic genetically altered Vibrio cholerae El Tor biovar strain identification].

    PubMed

    Smirnova, N I; Agafonov, D A; Zadnova, S P; Cherkasov, A V; Kutyrev, V V

    2014-01-01

    Development of an algorithm of genetically altered Vibrio cholerae biovar El Tor strain identification that ensures determination of serogroup, serovar and biovar of the studied isolate based on pheno- and genotypic properties, detection of genetically altered cholera El Tor causative agents, their differentiation by epidemic potential as well as evaluation of variability of key pathogenicity genes. Complex analysis of 28 natural V. cholerae strains was carried out by using traditional microbiological methods, PCR and fragmentary sequencing. An algorithm of toxigenic genetically altered V. cholerae biovar El Tor strain identification was developed that includes 4 stages: determination of serogroup, serovar and biovar based on phenotypic properties, confirmation of serogroup and biovar based on molecular-genetic properties, determination of strains as genetically altered, differentiation of genetically altered strains by their epidemic potential and detection of ctxB and tcpA key pathogenicity gene polymorphism. The algorithm is based on the use of traditional microbiological methods, PCR and sequencing of gene fragments. The use of the developed algorithm will increase the effectiveness of detection of genetically altered variants of the cholera El Tor causative agent, their differentiation by epidemic potential and will ensure establishment of polymorphism of genes that code key pathogenicity factors for determination of origins of the strains and possible routes of introduction of the infection.

  1. A comparison of independent component analysis algorithms and measures to discriminate between EEG and artifact components.

    PubMed

    Dharmaprani, Dhani; Nguyen, Hoang K; Lewis, Trent W; DeLosAngeles, Dylan; Willoughby, John O; Pope, Kenneth J

    2016-08-01

    Independent Component Analysis (ICA) is a powerful statistical tool capable of separating multivariate scalp electrical signals into their additive independent or source components, specifically EEG or electroencephalogram and artifacts. Although ICA is a widely accepted EEG signal processing technique, classification of the recovered independent components (ICs) is still flawed, as current practice still requires subjective human decisions. Here we build on the results from Fitzgibbon et al. [1] to compare three measures and three ICA algorithms. Using EEG data acquired during neuromuscular paralysis, we tested the ability of the measures (spectral slope, peripherality and spatial smoothness) and algorithms (FastICA, Infomax and JADE) to identify components containing EMG. Spatial smoothness showed differentiation between paralysis and pre-paralysis ICs comparable to spectral slope, whereas peripherality showed less differentiation. A combination of the measures showed better differentiation than any measure alone. Furthermore, FastICA provided the best discrimination between muscle-free and muscle-contaminated recordings in the shortest time, suggesting it may be the most suited to EEG applications of the considered algorithms. Spatial smoothness results suggest that a significant number of ICs are mixed, i.e. contain signals from more than one biological source, and so the development of an ICA algorithm that is optimised to produce ICs that are easily classifiable is warranted.

  2. Differentially Private Empirical Risk Minimization

    PubMed Central

    Chaudhuri, Kamalika; Monteleoni, Claire; Sarwate, Anand D.

    2011-01-01

    Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. PMID:21892342
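
    A simplified sketch of the output-perturbation mechanism discussed above, for L2-regularized logistic regression: train a non-private classifier, then add noise whose scale follows the 2/(nλ) sensitivity argument. The mapping to scikit-learn's C parameter and the exact constants depend on normalization conventions, so treat the noise scale as indicative rather than as the paper's calibrated value.

```python
# Output perturbation sketch for epsilon-differentially-private logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import normalize

def private_logreg(X, y, epsilon, lam, rng=np.random.default_rng(0)):
    n, d = X.shape
    X = normalize(X)                                  # enforce ||x_i||_2 <= 1
    # sklearn's C corresponds (up to normalization) to 1 / (n * lam).
    clf = LogisticRegression(C=1.0 / (n * lam), fit_intercept=False).fit(X, y)
    w = clf.coef_.ravel()
    # Noise with density proportional to exp(-beta * ||b||), beta = eps*n*lam/2:
    beta = epsilon * n * lam / 2.0
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=1.0 / beta)     # norm of the noise vector
    return w + radius * direction

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 10))
    y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
    print(private_logreg(X, y, epsilon=1.0, lam=0.01))
```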

  3. Storyline Visualization: A Compelling Way to Understand Patterns over Time and Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2017-10-16

    Storyline visualization is a compelling way to understand patterns over time and space. Much effort has been spent developing efficient and aesthetically pleasing layout optimization algorithms. But what if those algorithms are optimizing the wrong things? To answer this question, we conducted a design study with different storyline layout algorithms. We found that layouts following our new design principles for storyline visualization outperform those produced by existing methods.

  4. Extracting the differential inverse inelastic mean free path and differential surface excitation probability of Tungsten from X-ray photoelectron spectra and electron energy loss spectra

    NASA Astrophysics Data System (ADS)

    Afanas'ev, V. P.; Gryazev, A. S.; Efremenko, D. S.; Kaplya, P. S.; Kuznetcova, A. V.

    2017-12-01

    Precise knowledge of the differential inverse inelastic mean free path (DIIMFP) and differential surface excitation probability (DSEP) of Tungsten is essential for many fields of material science. In this paper, a fitting algorithm is applied for extracting DIIMFP and DSEP from X-ray photoelectron spectra and electron energy loss spectra. The algorithm uses the partial intensity approach as a forward model, in which a spectrum is given as a weighted sum of cross-convolved DIIMFPs and DSEPs. The weights are obtained as solutions of the Riccati and Lyapunov equations derived from the invariant imbedding principle. The inversion algorithm utilizes the parametrization of DIIMFPs and DSEPs on the basis of a classical Lorentz oscillator. Unknown parameters of the model are found by using the fitting procedure, which minimizes the residual between measured spectra and forward simulations. It is found that the surface layer of Tungsten contains several sublayers with corresponding Langmuir resonances. The thicknesses of these sublayers are proportional to the periods of corresponding Langmuir oscillations, as predicted by the theory of R.H. Ritchie.

  5. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
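
    A compact compass-search sketch (a simple member of the pattern search family, not the paper's general framework) that uses the step-length control parameter both to govern contraction and as the stopping measure of approximate first-order stationarity discussed above; the contraction factor and tolerance are illustrative.

```python
# Minimal compass-search sketch (an illustration, not the paper's exact method):
# poll the 2n coordinate directions, contract the step length on failure, and
# stop when the step-length control parameter -- which the paper shows tracks
# first-order stationarity -- falls below a tolerance.
import numpy as np

def compass_search(f, x0, step=1.0, contraction=0.5, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if step < tol:                 # step length as stationarity measure
            break
        improved = False
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                # simple decrease accepted
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            step *= contraction        # unsuccessful poll: contract the step
    return x, fx, step

rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
print(compass_search(rosen, [-1.2, 1.0]))
```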

  6. Frequent statistics of link-layer bit stream data based on AC-IM algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Chenghong; Lei, Yingke; Xu, Yiming

    2017-08-01

    At present, there is considerable research on data processing using classical pattern matching and its improved algorithms, but little on frequency statistics for link-layer bit stream data. This paper adopts a frequent-statistics method for link-layer bit stream data based on the AC-IM algorithm, because classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency and cannot be applied directly to binary bit stream data. The method's maximum jump distance in the pattern tree is the length of the shortest pattern string plus 3, with no matches missed. The paper first gives a theoretical analysis of the principle of the algorithm's construction; the experimental results then show that the algorithm can adapt to the binary bit stream environment and extract frequent sequences more accurately, with a clear effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and is less time-consuming.
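
    Since the AC-IM jump-distance optimization itself is not public in detail here, the following sketch shows only the classical baseline it improves upon: an Aho-Corasick automaton counting multiple bit-string patterns in a single pass over a binary stream represented as a '0'/'1' string; the example patterns and stream are made up.

```python
# Baseline Aho-Corasick multi-pattern counting over a binary bit stream
# (the AC-IM jump-distance optimization described in the paper is not
# reproduced here; this is only the classical automaton it improves upon).
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        state = 0
        for ch in p:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(p)
    queue = deque(goto[0].values())
    while queue:                       # BFS to set failure links
        s = queue.popleft()
        for ch, nxt in goto[s].items():
            queue.append(nxt)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]
    return goto, fail, out

def count_patterns(bits, patterns):
    goto, fail, out = build_automaton(patterns)
    counts = {p: 0 for p in patterns}
    state = 0
    for ch in bits:
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for p in out[state]:
            counts[p] += 1
    return counts

print(count_patterns("0110100111010011", ["0100", "1101", "11"]))
```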

  7. Computerized analysis of the 12-lead electrocardiogram to identify epicardial ventricular tachycardia exit sites.

    PubMed

    Yokokawa, Miki; Jung, Dae Yon; Joseph, Kim K; Hero, Alfred O; Morady, Fred; Bogun, Frank

    2014-11-01

    Twelve-lead electrocardiogram (ECG) criteria for epicardial ventricular tachycardia (VT) origins have been described. In patients with structural heart disease, the ability to predict an epicardial origin based on QRS morphology is limited and has been investigated only for limited regions in the heart. The purpose of this study was to determine whether a computerized algorithm is able to accurately differentiate epicardial vs endocardial origins of ventricular arrhythmias. Endocardial and epicardial pace-mapping were performed in 43 patients at 3277 sites. The 12-lead ECGs were digitized and analyzed using a mixture of Gaussians model (MoG) to assess whether the algorithm was able to identify an epicardial vs endocardial origin of the paced rhythm. The MoG computerized algorithm was compared to algorithms published in prior reports. The computerized algorithm correctly differentiated epicardial vs endocardial pacing sites for 80% of the sites, compared to an accuracy of 42% to 66% for other described criteria. The accuracy was higher in patients without structural heart disease than in those with structural heart disease (94% vs 80%, P = .0004) and for right bundle branch block (82%) compared to left bundle branch block morphologies (79%, P = .001). Validation studies showed the accuracy for VT exit sites to be 84%. A computerized algorithm was able to accurately differentiate the majority of epicardial vs endocardial pace-mapping sites. The algorithm is not region specific and performed best in patients without structural heart disease and with VTs having a right bundle branch block morphology. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  8. Novel flowcytometry-based approach of malignant cell detection in body fluids using an automated hematology analyzer.

    PubMed

    Ai, Tomohiko; Tabe, Yoko; Takemura, Hiroyuki; Kimura, Konobu; Takahashi, Toshihiro; Yang, Haeun; Tsuchiya, Koji; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Ohsaka, Akimichi

    2018-01-01

    Morphological microscopic examinations of nucleated cells in body fluid (BF) samples are performed to screen for malignancy. However, the morphological differentiation is time-consuming and labor-intensive. This study aimed to develop a new flowcytometry-based gating analysis mode, the "XN-BF gating algorithm", to detect malignant cells using an automated hematology analyzer, the Sysmex XN-1000. The XN-BF mode was equipped with the WDF white blood cell (WBC) differential channel. We added two algorithms to the WDF channel: Rule 1 detects larger and clumped cell signals compared to the leukocytes, targeting clustered malignant cells; Rule 2 detects middle-sized mononuclear cells containing fewer granules than neutrophils, with fluorescence signals similar to monocytes, targeting hematological malignant cells and solid tumor cells. BF samples that met at least one rule were flagged as malignant. To evaluate this novel gating algorithm, 92 various BF samples were collected. Manual microscopic differentiation with May-Grunwald Giemsa staining and WBC counting with a hemocytometer were also performed. The performance of these three methods was evaluated by comparison with the cytological diagnosis. The XN-BF gating algorithm achieved a sensitivity of 63.0% and a specificity of 87.8%, with a positive predictive value of 68.0% and a negative predictive value of 85.1% in detecting malignant-cell-positive samples. Manual microscopic WBC differentiation and WBC counting demonstrated sensitivities of 70.4% and 66.7%, and specificities of 96.9% and 92.3%, respectively. The XN-BF gating algorithm can be a feasible tool in hematology laboratories for prompt screening of malignant cells in various BF samples.

  9. Assessment of the information content of patterns: an algorithm

    NASA Astrophysics Data System (ADS)

    Daemi, M. Farhang; Beurle, R. L.

    1991-12-01

    A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose which confirms the results of the preliminary investigation. Use of the algorithm facilitates much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for analysis of more realistic patterns. Owing to the volume of calculation involved in these algorithms, extensive computing facilities were necessary. The major part of the work was carried out using an ICL 3900 series mainframe computer as well as other powerful workstations such as a RISC architecture MIPS machine.

  10. Speckle reduction of OCT images using an adaptive cluster-based filtering

    NASA Astrophysics Data System (ADS)

    Adabi, Saba; Rashedi, Elaheh; Conforto, Silvia; Mehregan, Darius; Xu, Qiuyun; Nasiriavanaki, Mohammadreza

    2017-02-01

    Optical coherence tomography (OCT) has become a favorable device in the dermatology discipline due to its moderate resolution and penetration depth. OCT images, however, contain a grainy pattern, called speckle, due to the broadband source used in the configuration of OCT. So far, a variety of filtering techniques has been introduced to reduce speckle in OCT images. Most of these methods are generic and can be applied to OCT images of different tissues. In this paper, we present a method for speckle reduction of OCT skin images. Considering the architectural structure of skin layers, a skin image can benefit from being segmented into differentiable clusters and filtered separately in each cluster, using a clustering method together with filters such as the Wiener filter. The proposed algorithm was tested on an optical solid phantom with predetermined optical properties. The algorithm was also tested on healthy skin images. The results show that the cluster-based filtering method can reduce the speckle and increase the signal-to-noise ratio and contrast while preserving the edges in the image.
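
    A hedged sketch of the cluster-then-filter idea on a synthetic speckled B-scan: k-means segments pixels by intensity and a Wiener filter with a cluster-specific window is applied within each cluster. The phantom, the number of clusters, and the window sizes are assumptions, not the authors' settings.

```python
# Illustrative sketch of cluster-then-filter despeckling (not the authors' exact
# pipeline): segment a toy B-scan into intensity clusters with k-means, then apply
# a Wiener filter with a cluster-specific window and keep each pixel's value from
# the filter run belonging to its own cluster.
import numpy as np
from scipy.signal import wiener
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
depth = np.linspace(0, 1, 128)
clean = np.tile(np.exp(-3 * depth)[:, None], (1, 128))             # layered attenuation
img = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)   # multiplicative speckle

n_clusters = 3
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)

windows = {0: 3, 1: 5, 2: 7}          # assumed cluster-specific filter sizes
out = np.empty_like(img)
for k in range(n_clusters):
    filtered = wiener(img, mysize=windows[k])
    out[labels == k] = filtered[labels == k]

snr = lambda a: a.mean() / a.std()
print(f"SNR raw: {snr(img):.2f}  SNR filtered: {snr(out):.2f}")
```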

  11. Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour

    PubMed Central

    Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad

    2013-01-01

    A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied for the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of current excitation weights and uniform interelement spacing. As compared to conventional hyper beamforming of a linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve reduction in sidelobe level (SLL) and the same or less first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Again, further reductions of sidelobe level (SLL) and first null beam width (FNBW) have been achieved by the proposed collective animal behaviour (CAB) algorithm. Unlike RGA, PSO, and DE, CAB finds a near-global optimal solution in the present problem. The above comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843

  12. Automatic Clustering Using FSDE-Forced Strategy Differential Evolution

    NASA Astrophysics Data System (ADS)

    Yasid, A.

    2018-01-01

    Cluster analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, which is known as automatic clustering. This study aims to acquire the number of clusters automatically utilizing forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm. Moreover, the result is compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than or competitive with other existing automatic clustering algorithms.

  13. Analytical review of 664 cases of penetrating buttock trauma

    PubMed Central

    2011-01-01

    A comprehensive review of data has not yet been provided, as penetrating injury to the buttock is not a common condition, accounting for 2-3% of all penetrating injuries. The aim of the study is to provide the as yet lacking analytical review of the literature on penetrating trauma to the buttock, with appraisal of characteristics, features, outcomes, and patterns of major injuries. Based on these results we provide an algorithm. Using a set of terms we searched the databases PubMed, EMBASE, Cochrane, and CINAHL for articles published in English between 1970 and 2010. We analysed cumulative data from prospective and retrospective studies, and case reports. The literature search revealed 36 relevant articles containing data on 664 patients. There was no grade A evidence found. The injury population consists mostly of young males (95.4%) with a high proportion of missile injuries (75.9%). Bleeding was found to be the key problem; it mostly occurs from internal injury and results in shock in 10%. Overall mortality is 2.9%, with a significant adverse impact of visceral or vascular injury and shock (P < 0.001). The major injury pattern varies significantly between shot and stab injury, with small bowel, colon, or rectum injuries leading in shot wounds, whilst vascular injury leads in stab wounds (P < 0.01). Laparotomy was required in 26.9% of patients. Wound infection, sepsis or multiorgan failure, small bowel fistula, ileus, rebleeding, focal neurologic deficit, and urinary tract infection were the most common complications. Sharp differences in injury pattern endorse an algorithm for differential therapy of penetrating buttock trauma. In conclusion, penetrating buttock trauma should be regarded as a life-threatening injury with impact beyond the pelvis until proven otherwise. PMID:21995834

  14. Accurate Singular Values and Differential QD Algorithms

    DTIC Science & Technology

    1992-07-01

    Contents include the Cholesky algorithm, the quotient-difference (qd) algorithm, incorporation of shifts (shifted qd algorithms), and the effects of finite precision (error analysis, high relative accuracy). The report shows that it was preferable to replace the DK zero-shift QR transform by two steps of zero-shift LR implemented in a qd (quotient-difference) format.
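
    For context, the sketch below performs the shifted differential qd (dqds) transform in the form commonly attributed to Fernando and Parlett, applied repeatedly to a small bidiagonal matrix; it is an illustration of the kind of step the report analyzes, not the report's implementation, and the test matrix is made up.

```python
# One shifted differential qd (dqds) transform, in the form commonly attributed
# to Fernando and Parlett -- a sketch of the kind of step the report analyzes,
# not its exact implementation. q and e hold the squared bidiagonal entries.
import numpy as np

def dqds_step(q, e, shift=0.0):
    n = len(q)
    q_new = np.empty(n)
    e_new = np.empty(max(n - 1, 0))
    d = q[0] - shift
    for k in range(n - 1):
        q_new[k] = d + e[k]
        ratio = q[k + 1] / q_new[k]
        e_new[k] = e[k] * ratio          # differential form avoids subtraction
        d = d * ratio - shift
    q_new[n - 1] = d
    return q_new, e_new

# Toy bidiagonal B with diagonal a and superdiagonal b: q = a**2, e = b**2.
a = np.array([4.0, 3.0, 2.0, 1.0])
b = np.array([0.5, 0.4, 0.3])
q, e = a**2, b**2
for _ in range(50):                      # unshifted sweeps drive e -> 0
    q, e = dqds_step(q, e)
print(np.sort(np.sqrt(q))[::-1])         # approximates the singular values of B
```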

  15. Volumetric display containing multiple two-dimensional color motion pictures

    NASA Astrophysics Data System (ADS)

    Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.

    2014-06-01

    We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has an individual projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projected directions. In this paper, we expanded the algorithm to record multiple 2-D projection patterns in color. There are two popular ways of color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB colors, and subtractive color mixing, used to mix inks, is based on CMY colors. We devised two coloring methods based on additive mixing and subtractive mixing. We performed numerical simulations of the coloring methods and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array; the lighting patterns of the LEDs are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art, and so forth.

  16. Backup Attitude Control Algorithms for the MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Andrews, Stephen F.; Ericsson-Jackson, Aprille J.; Flatley, Thomas W.; Ward, David K.; Bay, P. Michael

    1999-01-01

    The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The MAP spacecraft will perform its mission, studying the early origins of the universe, in a Lissajous orbit around the Earth-Sun L(sub 2) Lagrange point. Due to limited mass, power, and financial resources, a traditional reliability concept involving fully redundant components was not feasible. This paper will discuss the redundancy philosophy used on MAP, describe the hardware redundancy selected (and why), and present backup modes and algorithms that were designed in lieu of additional attitude control hardware redundancy to improve the odds of mission success. Three of these modes have been implemented in the spacecraft flight software. The first onboard mode allows the MAP Kalman filter to be used with digital sun sensor (DSS) derived rates, in case of the failure of one of MAP's two two-axis inertial reference units. Similarly, the second onboard mode allows a star tracker only mode, using attitude and derived rate from one or both of MAP's star trackers for onboard attitude determination and control. The last backup mode onboard allows a sun-line angle offset to be commanded that will allow solar radiation pressure to be used for momentum management and orbit stationkeeping. In addition to the backup modes implemented on the spacecraft, two backup algorithms have been developed in the event of less likely contingencies. One of these is an algorithm for implementing an alternative scan pattern to MAP's nominal dual-spin science mode using only one or two reaction wheels and thrusters. Finally, an algorithm has been developed that uses thruster one shots while in science mode for momentum management. This algorithm has been developed in case system momentum builds up faster than anticipated, to allow adequate momentum management while minimizing interruptions to science. In this paper, each mode and algorithm will be discussed, and simulation results presented.

  17. Differential principal component analysis of ChIP-seq.

    PubMed

    Ji, Hongkai; Li, Xia; Wang, Qian-fei; Ning, Yang

    2013-04-23

    We propose differential principal component analysis (dPCA) for analyzing multiple ChIP-sequencing datasets to identify differential protein-DNA interactions between two biological conditions. dPCA integrates unsupervised pattern discovery, dimension reduction, and statistical inference into a single framework. It uses a small number of principal components to summarize concisely the major multiprotein synergistic differential patterns between the two conditions. For each pattern, it detects and prioritizes differential genomic loci by comparing the between-condition differences with the within-condition variation among replicate samples. dPCA provides a unique tool for efficiently analyzing large amounts of ChIP-sequencing data to study dynamic changes of gene regulation across different biological conditions. We demonstrate this approach through analyses of differential chromatin patterns at transcription factor binding sites and promoters as well as allele-specific protein-DNA interactions.
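
    A greatly simplified numpy illustration of the central idea (singular vectors of the between-condition difference matrix summarizing multi-dataset differential patterns); the replicate-based statistical inference of dPCA is omitted, and the synthetic data and dimensions are assumptions.

```python
# Greatly simplified illustration of the core dPCA idea: take the matrix of
# between-condition differences (loci x datasets), and let a few singular vectors
# summarize the dominant multi-protein differential patterns. The replicate-based
# statistical inference of the published method is omitted here.
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_datasets, n_reps = 2000, 6, 2
# Toy signal: condition 2 gains signal in the first 200 loci for the first 3 datasets.
base = rng.gamma(2.0, 1.0, size=(n_loci, n_datasets))
effect = np.zeros((n_loci, n_datasets))
effect[:200, :3] = 2.0
cond1 = base[:, :, None] + 0.3 * rng.standard_normal((n_loci, n_datasets, n_reps))
cond2 = (base + effect)[:, :, None] + 0.3 * rng.standard_normal((n_loci, n_datasets, n_reps))

D = cond2.mean(axis=2) - cond1.mean(axis=2)         # loci x datasets differences
U, s, Vt = np.linalg.svd(D - D.mean(axis=0), full_matrices=False)

print("variance explained by each differential PC:", (s**2 / (s**2).sum()).round(2))
print("loadings of PC1 on the datasets:", Vt[0].round(2))
top_loci = np.argsort(-np.abs(U[:, 0]))[:5]          # loci driving the top pattern
print("top differential loci for PC1:", top_loci)
```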

  18. Real-time particulate mass measurement based on laser scattering

    NASA Astrophysics Data System (ADS)

    Rentz, Julia H.; Mansur, David; Vaillancourt, Robert; Schundler, Elizabeth; Evans, Thomas

    2005-11-01

    OPTRA has developed a new approach to the determination of particulate size distribution from a measured, composite, laser angular scatter pattern. Drawing from the field of infrared spectroscopy, OPTRA has employed a multicomponent analysis technique which uniquely recognizes patterns associated with each particle size "bin" over a broad range of sizes. The technique is particularly appropriate for overlapping patterns where large signals are potentially obscuring weak ones. OPTRA has also investigated a method for accurately training the algorithms without the use of representative particles for any given application. This streamlined calibration applies a one-time measured "instrument function" to theoretical Mie patterns to create the training data for the algorithms. OPTRA has demonstrated this algorithmic technique on a compact, rugged, laser scatter sensor head we developed for gas turbine engine emissions measurements. The sensor contains a miniature violet solid state laser and an array of silicon photodiodes, both of which are commercial off the shelf. The algorithmic technique can also be used with any commercially available laser scatter system.
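
    One way to make the multicomponent idea concrete is a non-negative least-squares fit of the composite scatter pattern to a library of per-size-bin patterns, as sketched below; the Gaussian-shaped "library" is a synthetic stand-in for instrument-convolved Mie patterns, so everything here is illustrative rather than OPTRA's calibration.

```python
# Sketch of the multicomponent idea: express a composite angular scatter pattern
# as a non-negative combination of per-size-bin library patterns and recover the
# bin weights with non-negative least squares. The "library" here is a synthetic
# stand-in; a real system would use instrument-convolved Mie patterns.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
angles = np.linspace(1, 60, 200)                    # scattering angles (degrees)
n_bins = 8
# Synthetic per-bin patterns: larger bins scatter more sharply forward.
widths = np.linspace(25, 4, n_bins)
library = np.exp(-(angles[:, None] / widths[None, :]) ** 2)   # 200 x 8

true_weights = np.array([0.0, 0.1, 0.0, 0.5, 0.2, 0.0, 0.2, 0.0])
measured = library @ true_weights + 0.01 * rng.standard_normal(angles.size)

recovered, residual = nnls(library, measured)
recovered /= recovered.sum()                        # normalize to a size distribution
print("recovered bin weights:", recovered.round(2))
```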

  19. On the numeric integration of dynamic attitude equations

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Yan, Y.; Grossman, Robert

    1992-01-01

    We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
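
    A minimal example of the kind of integrator described above: the attitude matrix is updated with the exponential of the skew-symmetric body-rate matrix, so every iterate stays on the rotation group to round-off. This is a first-order geometric step for illustration, not the authors' higher-order schemes, and the rate profile is made up.

```python
# Minimal example of integrating attitude on the rotation group: update the
# rotation matrix with the exponential of the skew-symmetric rate matrix so every
# iterate is (to round-off) orthogonal. This is a first-order geometric step, not
# the authors' higher-order algorithms.
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_attitude(omega_fn, R0, t_end, dt):
    R, t = R0.copy(), 0.0
    while t < t_end - 1e-12:
        R = R @ expm(dt * hat(omega_fn(t)))   # stays on SO(3)
        t += dt
    return R

omega_fn = lambda t: np.array([0.1, 0.2 * np.sin(t), 0.05])   # body rates (rad/s)
R = integrate_attitude(omega_fn, np.eye(3), t_end=10.0, dt=0.01)
print("orthogonality error:", np.linalg.norm(R.T @ R - np.eye(3)))
```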

  20. Explicit expressions for meromorphic solutions of autonomous nonlinear ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Demina, Maria V.; Kudryashov, Nikolay A.

    2011-03-01

    Meromorphic solutions of autonomous nonlinear ordinary differential equations are studied. An algorithm for constructing meromorphic solutions in explicit form is presented. General expressions for meromorphic solutions (including rational, periodic, elliptic) are found for a wide class of autonomous nonlinear ordinary differential equations.

  1. The value of electrocardiography for differential diagnosis in wide QRS complex tachycardia.

    PubMed

    Sousa, Pedro A; Pereira, Salomé; Candeias, Rui; de Jesus, Ilídio

    2014-03-01

    Correct diagnosis in wide QRS complex tachycardia remains a challenge. Differential diagnosis between ventricular and supraventricular tachycardia has important therapeutic and prognostic implications, and although data from clinical history and physical examination may suggest a particular origin, it is the 12-lead surface electrocardiogram that usually enables this differentiation. Since 1978, various electrocardiographic criteria have been proposed for the differential diagnosis of wide complex tachycardias, particularly the presence of atrioventricular dissociation, and the axis, duration and morphology of QRS complexes. Despite the wide variety of criteria, diagnosis is still often difficult, and errors can have serious consequences. To reduce such errors, several differential diagnosis algorithms have been proposed since 1991. However, in a small percentage of wide QRS tachycardias the diagnosis remains uncertain and in these the wisest decision is to treat them as ventricular tachycardias. The authors' objective was to review the main electrocardiographic criteria and differential diagnosis algorithms of wide QRS tachycardia. Copyright © 2012 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.

  2. Differential Evolution algorithm applied to FSW model calibration

    NASA Astrophysics Data System (ADS)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
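
    A hedged sketch of DE-based calibration with SciPy's differential_evolution, using a stand-in algebraic forward model in place of the CFD model; the two fitted parameters, the bounds, and the DE settings (strategy, mutation factor, crossover rate) are illustrative assumptions, not the paper's setup.

```python
# Sketch of DE-based model calibration with a stand-in forward model: two assumed
# parameters (a heat-transfer coefficient and a heat-input efficiency) are fitted
# so that simulated thermal peaks match "measured" ones. The forward model,
# bounds, and DE settings are illustrative, not the paper's CFD setup.
import numpy as np
from scipy.optimize import differential_evolution

positions = np.linspace(5.0, 25.0, 5)               # mm from the weld line

def forward_model(params, positions):
    h, eta = params                                  # heat-transfer coeff., efficiency
    return 900.0 * eta * np.exp(-h * positions / 10.0) + 25.0

true_params = (0.8, 0.7)
measured = forward_model(true_params, positions) + \
    np.random.default_rng(0).normal(0, 2, positions.size)

def objective(params):
    return np.sum((forward_model(params, positions) - measured) ** 2)

result = differential_evolution(objective,
                                bounds=[(0.1, 2.0), (0.3, 1.0)],
                                strategy="best1bin", mutation=(0.5, 1.0),
                                recombination=0.7, seed=0, tol=1e-8)
print("calibrated parameters:", result.x.round(3))
```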

  3. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.

    PubMed

    Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J

    2011-02-26

    Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.
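
    The toy sketch below illustrates only the ring-vector idea (rotation-tolerant matching by comparing a candidate ring of pixel samples against all cyclic rotations of a template ring); it is not the SIVQ implementation, and the image, radius, and search grid are assumptions.

```python
# Toy illustration of the ring-vector idea behind SIVQ (not the published
# implementation): sample image intensities on a ring around a candidate pixel
# and score the match against a template ring as the best agreement over all
# cyclic rotations, which is what gives the ring its rotational invariance.
import numpy as np
from scipy.ndimage import map_coordinates

def ring_vector(img, center, radius, n_samples=64):
    theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    rows = center[0] + radius * np.sin(theta)
    cols = center[1] + radius * np.cos(theta)
    return map_coordinates(img, [rows, cols], order=1)

def ring_distance(candidate, template):
    """Smallest mean squared difference over all cyclic rotations of the ring."""
    return min(np.mean((np.roll(candidate, s) - template) ** 2)
               for s in range(template.size))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:30, 20:30] += 1.0                      # a bright blob acts as the "feature"

template = ring_vector(img, center=(25, 25), radius=4)
scores = {(r, c): ring_distance(ring_vector(img, (r, c), 4), template)
          for r in range(10, 54, 4) for c in range(10, 54, 4)}
best = min(scores, key=scores.get)
print("best-matching center on the coarse grid:", best)
```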

  4. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.

  5. Antenna analysis using neural networks

    NASA Astrophysics Data System (ADS)

    Smith, William T.

    1992-09-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).

  6. Diametrical clustering for identifying anti-correlated gene clusters.

    PubMed

    Dhillon, Inderjit S; Marcotte, Edward M; Roshan, Usman

    2003-09-01

    Clustering genes based upon their expression patterns allows us to predict gene function. Most existing clustering algorithms cluster genes together when their expression patterns show high positive correlation. However, it has been observed that genes whose expression patterns are strongly anti-correlated can also be functionally similar. Biologically, this is not unintuitive-genes responding to the same stimuli, regardless of the nature of the response, are more likely to operate in the same pathways. We present a new diametrical clustering algorithm that explicitly identifies anti-correlated clusters of genes. Our algorithm proceeds by iteratively (i). re-partitioning the genes and (ii). computing the dominant singular vector of each gene cluster; each singular vector serving as the prototype of a 'diametric' cluster. We empirically show the effectiveness of the algorithm in identifying diametrical or anti-correlated clusters. Testing the algorithm on yeast cell cycle data, fibroblast gene expression data, and DNA microarray data from yeast mutants reveals that opposed cellular pathways can be discovered with this method. We present systems whose mRNA expression patterns, and likely their functions, oppose the yeast ribosome and proteosome, along with evidence for the inverse transcriptional regulation of a number of cellular systems.
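
    A compact numpy sketch of the iteration described above: genes are repeatedly re-assigned to the cluster whose dominant singular vector they correlate with most strongly in absolute value, so anti-correlated genes land in the same cluster. The data are synthetic and the convergence handling is simplified.

```python
# Compact sketch of the diametrical clustering iteration described above:
# alternate between (i) assigning each gene to the cluster whose dominant singular
# vector it is most strongly correlated with in absolute value, and (ii) updating
# that singular vector. Rows are standardized genes; the data are synthetic.
import numpy as np

def diametrical_clustering(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    labels = rng.integers(0, k, size=X.shape[0])
    for _ in range(n_iter):
        prototypes = []
        for c in range(k):
            members = X[labels == c]
            if members.shape[0] == 0:               # re-seed an empty cluster
                members = X[rng.integers(0, X.shape[0], size=1)]
            _, _, vt = np.linalg.svd(members, full_matrices=False)
            prototypes.append(vt[0])                # dominant singular vector
        prototypes = np.vstack(prototypes)
        new_labels = np.argmax((X @ prototypes.T) ** 2, axis=1)  # |corr| criterion
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

rng = np.random.default_rng(1)
pattern = np.sin(np.linspace(0, 2 * np.pi, 20))
genes = np.vstack([pattern + 0.2 * rng.standard_normal(20) for _ in range(30)] +
                  [-pattern + 0.2 * rng.standard_normal(20) for _ in range(30)] +
                  [rng.standard_normal(20) for _ in range(30)])
print(np.bincount(diametrical_clustering(genes, k=2)))
```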

  7. A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering

    ERIC Educational Resources Information Center

    Chahine, Firas Safwan

    2012-01-01

    Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithm, but has a major…

  8. Arterial cannula shape optimization by means of the rotational firefly algorithm

    NASA Astrophysics Data System (ADS)

    Tesch, K.; Kaczorowska, K.

    2016-03-01

    This article presents global optimization results of arterial cannula shapes by means of the newly modified firefly algorithm. The search for the optimal arterial cannula shape is necessary in order to minimize losses and prepare the flow that leaves the circulatory support system of a ventricle (i.e. blood pump) before it reaches the heart. A modification of the standard firefly algorithm, the so-called rotational firefly algorithm, is introduced. It is shown that the rotational firefly algorithm allows for better exploration of search spaces which results in faster convergence and better solutions in comparison with its standard version. This is particularly pronounced for smaller population sizes. Furthermore, it maintains greater diversity of populations for a longer time. A small population size and a low number of iterations are necessary to keep to a minimum the computational cost of the objective function of the problem, which comes from numerical solution of the nonlinear partial differential equations. Moreover, both versions of the firefly algorithm are compared to the state of the art, namely the differential evolution and covariance matrix adaptation evolution strategies.

  9. Algorithms of walking and stability for an anthropomorphic robot

    NASA Astrophysics Data System (ADS)

    Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.

    2017-09-01

    Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement, so-called patterns, each of which can be considered an agent of a multi-agent system [1]. To control the AP-601 robot, an information and communication infrastructure has been created that represents a multi-agent system and allows algorithms for individual movement patterns to be developed and run in the system as a set of independently executed, interacting agents. Algorithms for lateral movement of the AP-601 series anthropomorphic robot with active stability, provided by the stability pattern, are presented.

  10. Hierarchical Self Assembly of Patterns from the Robinson Tilings: DNA Tile Design in an Enhanced Tile Assembly Model

    PubMed Central

    Padilla, Jennifer E.; Liu, Wenyan; Seeman, Nadrian C.

    2012-01-01

    We introduce a hierarchical self assembly algorithm that produces the quasiperiodic patterns found in the Robinson tilings and suggest a practical implementation of this algorithm using DNA origami tiles. We modify the abstract Tile Assembly Model (aTAM) to include active signaling and glue activation in response to signals to coordinate the hierarchical assembly of Robinson patterns of arbitrary size from a small set of tiles according to the tile substitution algorithm that generates them. Enabling coordinated hierarchical assembly in the aTAM makes possible the efficient encoding of the recursive process of tile substitution. PMID:23226722

  11. Hierarchical Self Assembly of Patterns from the Robinson Tilings: DNA Tile Design in an Enhanced Tile Assembly Model.

    PubMed

    Padilla, Jennifer E; Liu, Wenyan; Seeman, Nadrian C

    2012-06-01

    We introduce a hierarchical self assembly algorithm that produces the quasiperiodic patterns found in the Robinson tilings and suggest a practical implementation of this algorithm using DNA origami tiles. We modify the abstract Tile Assembly Model (aTAM) to include active signaling and glue activation in response to signals to coordinate the hierarchical assembly of Robinson patterns of arbitrary size from a small set of tiles according to the tile substitution algorithm that generates them. Enabling coordinated hierarchical assembly in the aTAM makes possible the efficient encoding of the recursive process of tile substitution.

  12. Evaluation of stochastic differential equation approximation of ion channel gating models.

    PubMed

    Bruce, Ian C

    2009-04-01

    Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
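
    For reference, a minimal Euler-Maruyama sketch of the Fox and Lu style subunit approximation for a single potassium gating fraction at a clamped voltage; the Hodgkin-Huxley rate functions, channel count, and voltage are assumptions of this illustration, and none of the correlation or non-Gaussian corrections discussed in the study are included.

```python
# Euler-Maruyama sketch of the Fox & Lu style subunit approximation for a single
# potassium gating variable at a clamped voltage: the drift is the usual
# alpha*(1-n) - beta*n kinetics and the noise variance scales as
# (alpha*(1-n) + beta*n) / N_channels. Rate constants are the textbook
# Hodgkin-Huxley forms and are assumptions of this illustration.
import numpy as np

def alpha_n(v):
    return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate_gate(v=-40.0, n_channels=200, dt=0.01, t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(t_end / dt)
    a, b = alpha_n(v), beta_n(v)
    n = a / (a + b)                               # start at steady state
    trace = np.empty(steps)
    for i in range(steps):
        drift = a * (1.0 - n) - b * n
        diffusion = np.sqrt(max(a * (1.0 - n) + b * n, 0.0) / n_channels)
        n += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        n = min(max(n, 0.0), 1.0)                 # keep the fraction in [0, 1]
        trace[i] = n
    return trace

trace = simulate_gate()
print(f"mean open fraction {trace.mean():.3f}, fluctuation s.d. {trace.std():.4f}")
```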

  13. Second kind Chebyshev operational matrix algorithm for solving differential equations of Lane-Emden type

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.; Youssri, Y. H.

    2013-10-01

    In this paper, we present a new second kind Chebyshev (S2KC) operational matrix of derivatives. With the aid of S2KC, an algorithm is described to obtain numerical solutions of a class of linear and nonlinear Lane-Emden type singular initial value problems (IVPs). The idea of obtaining such solutions is essentially based on reducing the differential equation with its initial conditions to a system of algebraic equations. Two illustrative examples concerning relevant physical problems (the Lane-Emden equations of the first and second kind) are discussed to demonstrate the validity and applicability of the suggested algorithm. The numerical results obtained compare favorably with the known analytical solutions.

  14. A generalized approach to automated NMR peak list editing: application to reduced dimensionality triple resonance spectra.

    PubMed

    Moseley, Hunter N B; Riaz, Nadeem; Aramini, James M; Szyperski, Thomas; Montelione, Gaetano T

    2004-10-01

    We present an algorithm and program called Pattern Picker that performs editing of raw peak lists derived from multidimensional NMR experiments with characteristic peak patterns. Pattern Picker detects groups of correlated peaks within peak lists from reduced dimensionality triple resonance (RD-TR) NMR spectra, with high fidelity and high yield. With typical quality RD-TR NMR data sets, Pattern Picker performs almost as well as human analysis, and is very robust in discriminating real peak sets from noise and other artifacts in unedited peak lists. The program uses a depth-first search algorithm with short-circuiting to efficiently explore a search tree representing every possible combination of peaks forming a group. The Pattern Picker program is particularly valuable for creating an automated peak picking/editing process. The Pattern Picker algorithm can be applied to a broad range of experiments with distinct peak patterns including RD, G-matrix Fourier transformation (GFT) NMR spectra, and experiments to measure scalar and residual dipolar coupling, thus promoting the use of experiments that are typically harder for a human to analyze. Since the complexity of peak patterns becomes a benefit rather than a drawback, Pattern Picker opens new opportunities in NMR experiment design.

  15. Patterns of brain structural connectivity differentiate normal weight from overweight subjects

    PubMed Central

    Gupta, Arpana; Mayer, Emeran A.; Sanmiguel, Claudia P.; Van Horn, John D.; Woodworth, Davis; Ellingson, Benjamin M.; Fling, Connor; Love, Aubrey; Tillisch, Kirsten; Labus, Jennifer S.

    2015-01-01

    Background Alterations in the hedonic component of ingestive behaviors have been implicated as a possible risk factor in the pathophysiology of overweight and obese individuals. Neuroimaging evidence from individuals with increasing body mass index suggests structural, functional, and neurochemical alterations in the extended reward network and associated networks. Aim To apply a multivariate pattern analysis to distinguish normal weight and overweight subjects based on gray and white-matter measurements. Methods Structural images (N = 120, overweight N = 63) and diffusion tensor images (DTI) (N = 60, overweight N = 30) were obtained from healthy control subjects. For the total sample the mean age for the overweight group (females = 32, males = 31) was 28.77 years (SD = 9.76) and for the normal weight group (females = 32, males = 25) was 27.13 years (SD = 9.62). Regional segmentation and parcellation of the brain images was performed using Freesurfer. Deterministic tractography was performed to measure the normalized fiber density between regions. A multivariate pattern analysis approach was used to examine whether brain measures can distinguish overweight from normal weight individuals. Results 1. White-matter classification: The classification algorithm, based on 2 signatures with 17 regional connections, achieved 97% accuracy in discriminating overweight individuals from normal weight individuals. For both brain signatures, greater connectivity as indexed by increased fiber density was observed in overweight compared to normal weight between the reward network regions and regions of the executive control, emotional arousal, and somatosensory networks. In contrast, the opposite pattern (decreased fiber density) was found between ventromedial prefrontal cortex and the anterior insula, and between thalamus and executive control network regions. 2. Gray-matter classification: The classification algorithm, based on 2 signatures with 42 morphological features, achieved 69% accuracy in discriminating overweight from normal weight. In both brain signatures regions of the reward, salience, executive control and emotional arousal networks were associated with lower morphological values in overweight individuals compared to normal weight individuals, while the opposite pattern was seen for regions of the somatosensory network. Conclusions 1. An increased BMI (i.e., overweight subjects) is associated with distinct changes in gray-matter and fiber density of the brain. 2. Classification algorithms based on white-matter connectivity involving regions of the reward and associated networks can identify specific targets for mechanistic studies and future drug development aimed at abnormal ingestive behavior and in overweight/obesity. PMID:25737959

  16. Binary Classification using Decision Tree based Genetic Programming and Its Application to Analysis of Bio-mass Data

    NASA Astrophysics Data System (ADS)

    To, Cuong; Pham, Tuan D.

    2010-01-01

    In machine learning, pattern recognition may be the most popular task. Identification of "similar" patterns is also very important in biology because, first, it is useful for predicting patterns associated with disease, for example cancer tissue (normal or tumor); second, similarity or dissimilarity of kinetic patterns is used to identify coordinately controlled genes or proteins involved in the same regulatory process; and third, similar genes (proteins) share similar functions. In this paper, we present an algorithm which uses genetic programming to create decision trees for binary classification problems. The algorithm was applied to five real biological databases. Based on the results of comparisons with well-known methods, we see that the algorithm is outstanding in most cases.

  17. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.

  18. Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality.

    PubMed

    Braithwaite, Scott R; Giraud-Carrier, Christophe; West, Josh; Barnes, Michael D; Hanson, Carl Lee

    2016-05-16

    One of the leading causes of death in the United States (US) is suicide and new methods of assessment are needed to track its risk in real time. Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Our findings show that people who are at high suicidal risk can be easily differentiated from those who are not by machine learning algorithms, which accurately identify the clinically significant suicidal rate in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Machine learning algorithms are efficient in differentiating people who are at a suicidal risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data.

  19. Validating Machine Learning Algorithms for Twitter Data Against Established Measures of Suicidality

    PubMed Central

    2016-01-01

    Background One of the leading causes of death in the United States (US) is suicide and new methods of assessment are needed to track its risk in real time. Objective Our objective is to validate the use of machine learning algorithms for Twitter data against empirically validated measures of suicidality in the US population. Methods Using a machine learning algorithm, the Twitter feeds of 135 Mechanical Turk (MTurk) participants were compared with validated, self-report measures of suicide risk. Results Our findings show that people who are at high suicidal risk can be easily differentiated from those who are not by machine learning algorithms, which accurately identify the clinically significant suicidal rate in 92% of cases (sensitivity: 53%, specificity: 97%, positive predictive value: 75%, negative predictive value: 93%). Conclusions Machine learning algorithms are efficient in differentiating people who are at a suicidal risk from those who are not. Evidence for suicidality can be measured in nonclinical populations using social media data. PMID:27185366

  20. Historical feature pattern extraction based network attack situation sensing algorithm.

    PubMed

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain and whose principles are difficult to recognize and describe with traditional algorithms. To address these problems, estimating the parameters of very long situation sequences is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between an observed indication and its subsequent effect. Then it calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation in terms of both pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can track and adapt to the variation of the situation sequence continuously.

  1. Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm

    PubMed Central

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain and whose principles are difficult to recognize and describe with traditional algorithms. To address these problems, estimating the parameters of very long situation sequences is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between an observed indication and its subsequent effect. Then it calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation in terms of both pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can track and adapt to the variation of the situation sequence continuously. PMID:24892054

  2. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
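
    A small sketch of the stepping-plus-bracketing idea on a made-up, continuous event function: coarse steps detect sign changes, and each bracket is refined with a standard root finder; the event function, step size, and tolerances are illustrative, not the tool's implementation.

```python
# Sketch of the stepping-plus-bracketing idea on a made-up, continuous event
# function g(t): step along the trajectory, record every sign change as a bracket,
# and refine each bracket with a root finder. The event function below is a toy
# stand-in for an eclipse or station line-of-sight condition.
import numpy as np
from scipy.optimize import brentq

def event_function(t):
    # Toy "visibility" function: positive when the event condition is satisfied.
    return np.sin(0.7 * t) - 0.4 + 0.05 * np.cos(3.0 * t)

def locate_events(g, t0, t1, step=0.25):
    times = np.arange(t0, t1 + step, step)
    values = [g(t) for t in times]
    roots = []
    for ta, tb, ga, gb in zip(times[:-1], times[1:], values[:-1], values[1:]):
        if ga == 0.0:
            roots.append(ta)
        elif ga * gb < 0.0:                      # bracketed sign change
            roots.append(brentq(g, ta, tb))
    return roots

events = locate_events(event_function, 0.0, 30.0)
for t in events:
    kind = "start" if event_function(t + 1e-6) > 0 else "end"
    print(f"event {kind} at t = {t:.4f}")
```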

  3. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.

  4. An algorithmic approach to the brain biopsy--part I.

    PubMed

    Kleinschmidt-DeMasters, B K; Prayson, Richard A

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they hone down their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part I, we assist the resident in learning how to handle brain biopsies in general and how to distinguish nonneoplastic lesions that mimic tumors from true neoplasms.

  5. An algorithmic approach to the brain biopsy--part II.

    PubMed

    Prayson, Richard A; Kleinschmidt-DeMasters, B K

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. Here we present an algorithmic flow chart, intended for use in teaching residents, as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors, nor do they substitute for comprehensive reading of reference textbooks by trainees. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once the diagnostic choices have been narrowed down to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part II, we assist the resident in arriving at the correct diagnosis for neuropathologic lesions containing granulomatous inflammation, macrophages, or abnormal blood vessels.

  6. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
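
    The sketch below illustrates labeled subgraph matching by plain backtracking, with made-up node categories; the TruST algorithm described above additionally truncates the search tree and ranks the top matches, which is not reproduced here.

      # Backtracking match of a small labeled pattern against a larger labeled graph.
      def match(pattern, graph, assignment=None):
          assignment = assignment or {}
          if len(assignment) == len(pattern["nodes"]):
              return assignment                                  # every pattern node is mapped
          p = next(n for n in pattern["nodes"] if n not in assignment)
          for g, label in graph["nodes"].items():
              if label != pattern["nodes"][p] or g in assignment.values():
                  continue
              assignment[p] = g
              # every pattern edge whose endpoints are already assigned must exist in the data graph
              ok = all(frozenset((assignment[a], assignment[b])) in graph["edges"]
                       for a, b in pattern["edges"] if a in assignment and b in assignment)
              if ok and (result := match(pattern, graph, assignment)):
                  return result
              del assignment[p]                                  # undo and try the next candidate
          return None

      # Hypothetical pattern and data graph for illustration
      pattern = {"nodes": {"x": "person", "y": "place"}, "edges": [("x", "y")]}
      graph = {"nodes": {"n1": "person", "n2": "person", "n3": "place"},
               "edges": {frozenset(("n2", "n3"))}}
      print(match(pattern, graph))   # {'x': 'n2', 'y': 'n3'}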

  7. Introduction to Numerical Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoonover, Joseph A.

    2016-06-14

    These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. They give an introduction to numerical methods, in which repetitive algorithms are used to obtain approximate solutions to mathematical problems: sorting, searching, root finding, optimization, interpolation, extrapolation, least squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm, but they introduce errors that can lead to numerical instabilities if we are not careful.
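
    As a small concrete example of the point about discretization error and instability, the sketch below applies the forward Euler iteration to dy/dt = -k*y; with a step size that is too large (k*dt > 2) the repeated update amplifies the error instead of damping it. This is a generic illustration, not taken from the slides themselves.

      # Forward Euler on dy/dt = -k*y: stable for small dt, unstable for large dt.
      def forward_euler(k, y0, dt, n_steps):
          y = y0
          history = [y]
          for _ in range(n_steps):
              y = y + dt * (-k * y)          # repetitive update rule
              history.append(y)
          return history

      k, y0 = 10.0, 1.0
      print(forward_euler(k, y0, 0.05, 10)[-1])   # k*dt = 0.5: decays toward 0
      print(forward_euler(k, y0, 0.30, 10)[-1])   # k*dt = 3.0: |y| grows without bound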

  8. Control chart pattern recognition using RBF neural network with new training algorithm and practical features.

    PubMed

    Addeh, Abdoljalil; Khormali, Aminollah; Golilarz, Noorbakhsh Amiri

    2018-05-04

    The control chart patterns are the most commonly used statistical process control (SPC) tools to monitor process changes. When a control chart produces an out-of-control signal, the process has changed. In this study, a new method based on an optimized radial basis function neural network (RBFNN) is proposed for control chart pattern (CCP) recognition. The proposed method consists of four main modules: feature extraction, feature selection, classification and learning. In the feature extraction module, shape and statistical features are used; various shape and statistical features have recently been presented for CCP recognition. In the feature selection module, the association rules (AR) method is employed to select the best set of shape and statistical features. In the classification module, an RBFNN is used; because the learning algorithm has a high impact on network performance, a new learning algorithm based on the bees algorithm is used in the learning module. Most studies have considered only six patterns: Normal, Cyclic, Increasing Trend, Decreasing Trend, Upward Shift and Downward Shift. Because the Normal, Stratification, and Systematic patterns are very similar and difficult to distinguish, most studies have not considered Stratification and Systematic. To support continuous monitoring and control of the production process and exact identification of the type of problem encountered, eight patterns are investigated in this study. The proposed method is tested on a dataset containing 1600 samples (200 samples from each pattern), and the results show that it performs very well. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
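
    The sketch below shows the general shape of RBF-network classification of control chart patterns on synthetic data, with k-means-selected centres and a least-squares output layer standing in for the paper's bees-algorithm training; the shape/statistical feature extraction and association-rule selection stages are omitted.

      # RBF-network classification of synthetic control chart patterns (simplified stand-in).
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)

      def make_pattern(kind, n=32):
          t = np.arange(n)
          base = rng.normal(0, 0.2, n)
          if kind == "up_trend":
              return base + 0.05 * t
          if kind == "down_trend":
              return base - 0.05 * t
          if kind == "cyclic":
              return base + np.sin(2 * np.pi * t / 8)
          return base                      # "normal"

      kinds = ["normal", "up_trend", "down_trend", "cyclic"]
      X = np.array([make_pattern(k) for k in kinds for _ in range(100)])
      y = np.repeat(np.arange(len(kinds)), 100)

      # RBF layer: Gaussian activations around k-means centres
      centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
      dists = np.linalg.norm(X[:, None, :] - centres[None], axis=2)
      sigma = dists.mean()
      Phi = np.exp(-dists ** 2 / (2 * sigma ** 2))

      # Output layer: one-hot least squares (a stand-in for the trained output weights)
      T = np.eye(len(kinds))[y]
      W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
      print("training accuracy:", np.mean(np.argmax(Phi @ W, axis=1) == y))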

  9. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

    A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented, including a number of survey articles on image processing and feature detection. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that, for hand-generated pictures, the algorithm worked well at following straight lines but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.
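
    The sketch below builds a simple acceptor for a finite set of strings as a trie (prefix tree); unlike the inference algorithm described above, no state merging or minimization is performed, so it only illustrates the accept/reject behaviour.

      # Trie-based acceptor for a finite set of strings (no minimization step).
      def build_trie(strings):
          root = {}
          for s in strings:
              node = root
              for ch in s:
                  node = node.setdefault(ch, {})
              node["__accept__"] = True          # mark an accepting state
          return root

      def accepts(trie, s):
          node = trie
          for ch in s:
              if ch not in node:
                  return False                   # no transition: reject
              node = node[ch]
          return node.get("__accept__", False)

      trie = build_trie(["cat", "car", "dog"])
      print(accepts(trie, "car"), accepts(trie, "ca"), accepts(trie, "cow"))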

  10. Performance of thermal deposition and mass flux condition on bioconvection nanoparticles containing gyrotactic microorganisms

    NASA Astrophysics Data System (ADS)

    Iqbal, Z.; Ahmad, Bilal

    2017-11-01

    This work investigates the influence of thermal radiation on the movement of motile gyrotactic microorganisms submerged in a water-based nanofluid flow over a nonlinear stretching sheet. The mathematical modeling of this physical problem leads to a system of nonlinear coupled partial differential equations, which is converted into a system of highly nonlinear ordinary differential equations. The resulting nonlinear equations of momentum, energy, nanoparticle concentration and motile gyrotactic microorganisms, along with the mass flux condition, are solved numerically by means of a shooting algorithm. The effects of the physical parameters of interest are discussed graphically. The values of the skin friction coefficient, Nusselt number, Sherwood number and local density number of motile microorganisms are tabulated for a detailed analysis of the flow pattern at the stretching surface. It is concluded that the nanofluid temperature is an increasing function of the thermal radiation and the Biot number parameter, while an opposite trend is observed for the local Nusselt number. In the limiting case, the present results are shown to agree closely with previously published results. In addition, flow configurations are presented through stream functions and discussed in detail.
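
    The sketch below shows the shooting idea on a toy boundary-value problem, y'' = -y with y(0) = 0 and y(pi/2) = 1, adjusting the unknown initial slope by bisection; the actual coupled nanofluid equations are not reproduced.

      # Shooting method: guess the initial slope, integrate, and adjust until the
      # far boundary condition is satisfied.
      import numpy as np
      from scipy.integrate import solve_ivp

      def integrate(slope):
          sol = solve_ivp(lambda t, s: [s[1], -s[0]], (0.0, np.pi / 2),
                          [0.0, slope], rtol=1e-9, atol=1e-9)
          return sol.y[0, -1]            # y at the right boundary

      def shoot(target=1.0, lo=0.0, hi=5.0):
          for _ in range(60):            # bisection on the unknown initial slope
              mid = 0.5 * (lo + hi)
              if integrate(mid) < target:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      print(shoot())                     # exact answer is y'(0) = 1, since y = sin(t)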

  11. Analyte species and concentration identification using differentially functionalized microcantilever arrays and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesac, Larry R; Datskos, Panos G; Sepaniak, Michael J

    2006-01-01

    In the present work, we have performed analyte species and concentration identification using an array of ten differentially functionalized microcantilevers coupled with a back-propagation artificial neural network pattern recognition algorithm. The array consists of ten nanostructured silicon microcantilevers functionalized by polymeric and gas chromatography phases and macrocyclic receptors as spatially dense, differentially responding sensing layers for identification and quantitation of individual analyte(s) and their binary mixtures. The array response (i.e. cantilever bending) to analyte vapor was measured by an optical readout scheme and the responses were recorded for a selection of individual analytes as well as several binary mixtures. An artificial neural network (ANN) was designed and trained to recognize not only the individual analytes and binary mixtures, but also to determine the concentration of individual components in a mixture. To the best of our knowledge, ANNs have not been applied to microcantilever array responses previously to determine concentrations of individual analytes. The trained ANN correctly identified the eleven test analyte(s) as individual components, most with probabilities greater than 97%, whereas it did not misidentify an unknown (untrained) analyte. Demonstrated unique aspects of this work include an ability to measure binary mixtures and provide both qualitative (identification) and quantitative (concentration) information with array-ANN-based sensor methodologies.

  12. Algorithm for real-time detection of signal patterns using phase synchrony: an application to an electrode array

    NASA Astrophysics Data System (ADS)

    Sadeghi, Saman; MacKay, William A.; van Dam, R. Michael; Thompson, Michael

    2011-02-01

    Real-time analysis of multi-channel spatio-temporal sensor data presents a considerable technical challenge for a number of applications. For example, in brain-computer interfaces, signal patterns originating on a time-dependent basis from an array of electrodes on the scalp (i.e. electroencephalography) must be analyzed in real time to recognize mental states and translate these to commands which control operations in a machine. In this paper we describe a new technique for recognition of spatio-temporal patterns based on performing online discrimination of time-resolved events through the use of correlation of phase dynamics between various channels in a multi-channel system. The algorithm extracts unique sensor signature patterns associated with each event during a training period and ranks importance of sensor pairs in order to distinguish between time-resolved stimuli to which the system may be exposed during real-time operation. We apply the algorithm to electroencephalographic signals obtained from subjects tested in the neurophysiology laboratories at the University of Toronto. The extension of this algorithm for rapid detection of patterns in other sensing applications, including chemical identification via chemical or bio-chemical sensor arrays, is also discussed.
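
    A common way to quantify phase synchrony between two channels is the phase-locking value computed from Hilbert-transform phases; the sketch below shows that calculation on synthetic signals and is not the event-signature ranking algorithm described above.

      # Phase-locking value (PLV) between two channels via the Hilbert transform.
      import numpy as np
      from scipy.signal import hilbert

      def phase_locking_value(x, y):
          phase_x = np.angle(hilbert(x))
          phase_y = np.angle(hilbert(y))
          return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

      t = np.arange(0, 2, 1 / 250)                 # 2 s at 250 Hz (made-up sampling rate)
      rng = np.random.default_rng(1)
      a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
      b = np.sin(2 * np.pi * 10 * t + 0.7) + 0.3 * rng.normal(size=t.size)   # phase-shifted copy
      c = rng.normal(size=t.size)                                            # unrelated noise
      print(phase_locking_value(a, b), phase_locking_value(a, c))            # high vs. low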

  13. Function Clustering Self-Organization Maps (FCSOMs) for mining differentially expressed genes in Drosophila and its correlation with the growth medium.

    PubMed

    Liu, L L; Liu, M J; Ma, M

    2015-09-28

    The central task of this study was to mine the gene-to-medium relationship. Adequate knowledge of this relationship could potentially improve the accuracy of differentially expressed gene mining. One approach to differentially expressed gene mining uses conventional clustering algorithms to identify the gene-to-medium relationship. Compared to conventional clustering algorithms, self-organization maps (SOMs) identify the nonlinear aspects of the gene-to-medium relationship by mapping the input space into another, higher-dimensional feature space. However, SOMs are not suitable for huge datasets consisting of millions of samples. Therefore, a new computational model, Function Clustering Self-Organization Maps (FCSOMs), was developed. FCSOMs take advantage of the theory of granular computing as well as advanced statistical learning methodologies, and a separate FCSOM is built for each information granule (a function cluster of genes); the granules are intelligently partitioned by the clustering algorithm provided by the DAVID_6.7 software platform. However, only the gene functions, and not their expression values, are considered in the fuzzy clustering algorithm of DAVID. The experimental results show a marked improvement in classification accuracy with the application of FCSOMs compared to the clustering algorithm of DAVID. FCSOMs can handle huge datasets and their complex classification problems, as each FCSOM (modeled for each function cluster) can be easily parallelized.

  14. Quick fuzzy backpropagation algorithm.

    PubMed

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to its users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems, data mining, and other applications.

  15. Differentiating osteomyelitis from bone infarction in sickle cell disease.

    PubMed

    Wong, A L; Sakamoto, K M; Johnson, E E

    2001-02-01

    This brief review discusses one possible approach to evaluating the sickle cell patient with bone pain. The major differential diagnoses include osteomyelitis and bone infarction. Based on previous studies, we provide an approach to assessing and treating patients with the possible diagnosis of osteomyelitis. An algorithm has been provided, which emphasizes the importance of the initial history and physical examination. Specific radiographic studies are recommended to aid in making the initial assessment and to determine whether the patient has an infarct or osteomyelitis. Differentiating osteomyelitis from infarction in sickle cell patients remains a challenge for the pediatrician. This algorithm can be used as a guide for physicians who evaluate such patients in the acute care setting.

  16. Multiclass cancer diagnosis using tumor gene expression signatures

    DOE PAGES

    Ramaswamy, S.; Tamayo, P.; Rifkin, R.; ...

    2001-12-11

    The optimal treatment of patients with cancer depends on establishing accurate diagnoses by using a complex combination of clinical and histopathological data. In some instances, this task is difficult or impossible because of atypical clinical presentation or histopathology. To determine whether the diagnosis of multiple common adult malignancies could be achieved purely by molecular classification, we subjected 218 tumor samples, spanning 14 common tumor types, and 90 normal tissue samples to oligonucleotide microarray gene expression analysis. The expression levels of 16,063 genes and expressed sequence tags were used to evaluate the accuracy of a multiclass classifier based on a support vector machine algorithm. Overall classification accuracy was 78%, far exceeding the accuracy of random classification (9%). Poorly differentiated cancers resulted in low-confidence predictions and could not be accurately classified according to their tissue of origin, indicating that they are molecularly distinct entities with dramatically different gene expression patterns compared with their well differentiated counterparts. Taken together, these results demonstrate the feasibility of accurate, multiclass molecular cancer classification and suggest a strategy for future clinical implementation of molecular cancer diagnostics.
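
    The sketch below shows a generic one-vs-rest support vector machine classifier on synthetic high-dimensional data, as a stand-in for the kind of multiclass molecular classifier described above; the 16,063-gene dataset and its preprocessing are not reproduced.

      # One-vs-rest linear SVM on synthetic "expression-like" data, with cross-validation.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      X, y = make_classification(n_samples=300, n_features=500, n_informative=40,
                                 n_classes=6, n_clusters_per_class=1, random_state=0)
      clf = make_pipeline(StandardScaler(),
                          OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000)))
      print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy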

  17. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    PubMed

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents annual multiobjective crop-mix planning as a problem of concurrently maximizing net profit and maximizing crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision-making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with those obtained using the epsilon-constrained and nondominated sorting genetic algorithms, two representatives of the state of the art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.
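
    The sketch below shows the basic differential evolution update (DE/rand/1/bin) on a single-objective test function; GDE3 extends this scheme with Pareto-dominance-based selection and constraint handling, which are omitted here.

      # Basic differential evolution (DE/rand/1/bin) on a toy objective.
      import numpy as np

      def de_minimize(f, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          dim = lo.size
          pop = rng.uniform(lo, hi, (pop_size, dim))
          cost = np.array([f(x) for x in pop])
          for _ in range(generations):
              for i in range(pop_size):
                  a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)          # DE/rand/1 mutation
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True                    # binomial crossover
                  trial = np.where(cross, mutant, pop[i])
                  if (fc := f(trial)) < cost[i]:                     # greedy selection
                      pop[i], cost[i] = trial, fc
          return pop[np.argmin(cost)], cost.min()

      sphere = lambda x: float(np.sum(x ** 2))
      print(de_minimize(sphere, [(-5, 5)] * 4))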

  18. Superconducting Quantum Interference Device Array Based High Frequency Direction Finding on an Airborne Platform

    DTIC Science & Technology

    Direction finding is performed using the MUSIC algorithm on the signals received on the non-uniform phased array, and the ESPRIT algorithm is used on the signals...received on the non-colocated vector sensor. The simulation results show that the MUSIC algorithm using 2D Bi-SQUIDs is able to differentiate two signals.

  19. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.

  20. Differential sampling for fast frequency acquisition via adaptive extended least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1987-01-01

    This paper presents a differential signal model along with appropriate sampling techniques for least squares estimation of the frequency and frequency derivatives and possibly the phase and amplitude of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements and thus the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage compared to previous algorithms, in which the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality. However, by applying the sampling techniques of the paper, any such loss in optimality can be made small.

  1. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.

  2. Electric machine differential for vehicle traction control and stability control

    NASA Astrophysics Data System (ADS)

    Kuruppu, Sandun Shivantha

    Evolving requirements in energy efficiency and tightening regulations for reliable electric drivetrains drive the advancement of hybrid electric vehicle (HEV) and full electric vehicle (EV) technology. Different configurations of EV and HEV architectures are evaluated for their performance. Future technology is trending towards utilizing distinctive properties of electric machines not only to improve efficiency but also to realize advanced road adhesion control and vehicle stability control. The electric machine differential (EMD) is one such concept under current investigation for applications in the near future. Reliability of a power train is critical; therefore, sophisticated fault detection schemes are essential in guaranteeing reliable operation of a complex system such as an EMD. The research presented here emphasizes the implementation of a 4 kW electric machine differential, a novel single open-phase (SPO) fault diagnostic scheme, a real-time slip optimization algorithm, and an EMD-based yaw stability improvement study. The proposed d-q current signature based SPO fault diagnostic algorithm detects the fault within one electrical cycle. The EMD-based extremum-seeking slip optimization algorithm reduces stopping distance by 30% compared to hydraulic-braking-based ABS.

  3. Mapping gene expression patterns during myeloid differentiation using the EML hematopoietic progenitor cell line.

    PubMed

    Du, Yang; Campbell, Janee L; Nalbant, Demet; Youn, Hyewon; Bass, Ann C Hughes; Cobos, Everardo; Tsai, Schickwann; Keller, Jonathan R; Williams, Simon C

    2002-07-01

    The detailed examination of the molecular events that control the early stages of myeloid differentiation has been hampered by the relative scarcity of hematopoietic stem cells and the lack of suitable cell line models. In this study, we examined the expression of several myeloid and nonmyeloid genes in the murine EML hematopoietic stem cell line. Expression patterns for 19 different genes were examined by Northern blotting and RT-PCR in RNA samples from EML, a variety of other immortalized cell lines, and purified murine hematopoietic stem cells. Representational difference analysis (RDA) was performed to identify differentially expressed genes in EML. Expression patterns of genes encoding transcription factors (four members of the C/EBP family, GATA-1, GATA-2, PU.1, CBFbeta, SCL, and c-myb) in EML were examined and were consistent with the proposed functions of these proteins in hematopoietic differentiation. Expression levels of three markers of terminal myeloid differentiation (neutrophil elastase, proteinase 3, and Mac-1) were highest in EML cells at the later stages of differentiation. In a search for genes that were differentially expressed in EML cells during myeloid differentiation, six cDNAs were isolated. These included three known genes (lysozyme, histidine decarboxylase, and tryptophan hydroxylase) and three novel genes. Expression patterns of known genes in differentiating EML cells accurately reflected their expected expression patterns based on previous studies. The identification of three novel genes, two of which encode proteins that may act as regulators of hematopoietic differentiation, suggests that EML is a useful model system for the molecular analysis of hematopoietic differentiation.

  4. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be improved further to reach a much lower bit rate for LASIS interference patterns, whose special optical characteristics arise from the pushing and sweeping used in the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the block currently being processed. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.

  5. Vector network analyzer ferromagnetic resonance spectrometer with field differential detection

    NASA Astrophysics Data System (ADS)

    Tamaru, S.; Tsunegi, S.; Kubota, H.; Yuasa, S.

    2018-05-01

    This work presents a vector network analyzer ferromagnetic resonance (VNA-FMR) spectrometer with field differential detection. This technique differentiates the S-parameter by applying a small binary modulation field in addition to the DC bias field to the sample. By setting the modulation frequency sufficiently high, slow sensitivity fluctuations of the VNA, i.e., low-frequency components of the trace noise, which limit the signal-to-noise ratio of the conventional VNA-FMR spectrometer, can be effectively removed, resulting in a very clean FMR signal. This paper presents the details of the hardware implementation and measurement sequence as well as the data processing and analysis algorithms tailored for the FMR spectrum obtained with this technique. Because the VNA measures a complex S-parameter, it is possible to estimate the Gilbert damping parameter from the slope of the phase variation of the S-parameter with respect to the bias field. We show that this algorithm is more robust against noise than the conventional algorithm based on the linewidth.

  6. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and increase the minimal distance between polygons in the layout.
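
    Double-patterning decomposition can be viewed as two-coloring a contradiction graph, and a breadth-first search does the coloring one connected component at a time; the sketch below illustrates that view on a made-up conflict graph and simply reports odd-cycle conflicts instead of resolving them, unlike a production decomposer.

      # Two-coloring of a contradiction graph by BFS: nodes are layout polygons, edges
      # connect polygons closer than the minimum same-mask spacing, and the two colors
      # correspond to the two exposure masks.
      from collections import deque

      def decompose(conflict_graph):
          color = {}
          for start in conflict_graph:
              if start in color:
                  continue
              color[start] = 0
              queue = deque([start])
              while queue:                          # BFS over one connected component
                  u = queue.popleft()
                  for v in conflict_graph[u]:
                      if v not in color:
                          color[v] = 1 - color[u]   # assign the opposite mask
                          queue.append(v)
                      elif color[v] == color[u]:
                          raise ValueError(f"odd-cycle conflict at polygons {u}, {v}")
          return color

      graph = {"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p2"], "p4": []}
      print(decompose(graph))   # e.g. {'p1': 0, 'p2': 1, 'p3': 0, 'p4': 0}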

  7. GENERAL: Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin

    2008-07-01

    Symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy for both stable motion and chaotic motion. The result is compared with those of Runge-Kutta algorithm and symplectic algorithm under the fourth order, which shows that SADA has higher accuracy than the others in the long-term calculations of the CR3BP.

  8. Source imaging of potential fields through a matrix space-domain algorithm

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio

    2017-01-01

    Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all based on multiscale methods allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, only the first row of each matrix needs to be computed, which dramatically decreases the computational cost. Our approach allows a simple procedure, with the advantage of not involving large data extension or tapering, as would instead be required for computation in the Fourier domain. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, the upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multiscale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate strongly at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.

  9. Nuclear fuel management optimization using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-07-01

    The code independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k-eff for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k-eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.

  10. Safety Case Patterns: Theory and Applications

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W.; Pai, Ganesh J.

    2015-01-01

    We develop the foundations for a theory of patterns of safety case argument structures, clarifying the concepts involved in pattern specification, including choices, labeling, and well-founded recursion. We specify six new patterns in addition to those existing in the literature. We give a generic way to specify the data required to instantiate patterns and a generic algorithm for their instantiation. This generalizes earlier work on generating argument fragments from requirements tables. We describe an implementation of these concepts in AdvoCATE, the Assurance Case Automation Toolset, showing how patterns are defined and can be instantiated. In particular, we describe how our extended notion of patterns can be specified, how they can be instantiated in an interactive manner, and, finally, how they can be automatically instantiated using our algorithm.

  11. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to effectively find the optimal smoothness criterion in applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.

  12. Simple agarose micro-confinement array and machine-learning-based classification for analyzing the patterned differentiation of mesenchymal stem cells

    PubMed Central

    Sato, Asako; Vogel, Viola; Tanaka, Yo

    2017-01-01

    The geometrical confinement of small cell colonies gives differential cues to cells sitting at the periphery versus the core. To utilize this effect, for example to create spatially graded differentiation patterns of human mesenchymal stem cells (hMSCs) in vitro or to investigate underpinning mechanisms, the confinement needs to be robust for extended time periods. To create highly repeatable micro-fabricated structures for cellular patterning and high-throughput data mining, we employed here a simple casting method to fabricate more than 800 adhesive patches confined by agarose micro-walls. In addition, a machine learning based image processing software was developed (open code) to detect the differentiation patterns of the population of hMSCs automatically. Utilizing the agarose walls, the circular patterns of hMSCs were successfully maintained throughout 15 days of cell culture. After staining lipid droplets and alkaline phosphatase as the markers of adipogenic and osteogenic differentiation, respectively, the mega-pixels of RGB color images of hMSCs were processed by the software on a laptop PC within several minutes. The image analysis successfully showed that hMSCs sitting on the more central versus peripheral sections of the adhesive circles showed adipogenic versus osteogenic differentiation as reported previously, indicating the compatibility of patterned agarose walls to conventional microcontact printing. In addition, we found a considerable fraction of undifferentiated cells which are preferentially located at the peripheral part of the adhesive circles, even in differentiation-inducing culture media. In this study, we thus successfully demonstrated a simple framework for analyzing the patterned differentiation of hMSCs in confined microenvironments, which has a range of applications in biology, including stem cell biology. PMID:28380036

  13. The Effects of Topographical Patterns and Sizes on Neural Stem Cell Behavior

    PubMed Central

    Qi, Lin; Li, Ning; Huang, Rong; Song, Qin; Wang, Long; Zhang, Qi; Su, Ruigong; Kong, Tao; Tang, Mingliang; Cheng, Guosheng

    2013-01-01

    Engineered topographical manipulation, an approach that parallels conventional biochemical cues, has recently attracted growing interest as a means to control stem cell fate. In this study, the effects of two topographical parameters, pattern and size, on the proliferation and differentiation of adult neural stem cells (ANSCs) are examined. We fabricate micro-scale topographical Si wafers with two different feature sizes, presenting a linear micro-pattern (LMP), a circular micro-pattern (CMP) and a dot micro-pattern (DMP). The results show that all three topography substrates are suitable for ANSC growth, although they all depress ANSC proliferation compared to non-patterned substrates (control). Meanwhile, LMP and CMP at both feature sizes significantly enhance ANSC differentiation into neurons compared to control, and the smaller the feature size, the stronger the upregulation of neuronal differentiation. The underlying mechanisms of topography-enhanced neuronal differentiation are further examined by direct suppression of the mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/Erk) signaling pathway in ANSCs using U0126, an inhibitor of Erk activation. The statistical results suggest that the MAPK/Erk pathway is partially involved in topography-induced differentiation. These observations provide a better understanding of the roles of topographical cues in stem cell behavior, especially selective differentiation, and help advance the field of stem cell therapy. PMID:23527077

  14. CMOS analogue amplifier circuits optimisation using hybrid backtracking search algorithm with differential evolution

    NASA Astrophysics Data System (ADS)

    Mallick, S.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2016-07-01

    This paper proposes a novel hybrid optimisation algorithm which combines the recently proposed evolutionary algorithm Backtracking Search Algorithm (BSA) with another widely accepted evolutionary algorithm, namely, Differential Evolution (DE). The proposed algorithm called BSA-DE is employed for the optimal designs of two commonly used analogue circuits, namely Complementary Metal Oxide Semiconductor (CMOS) differential amplifier circuit with current mirror load and CMOS two-stage operational amplifier (op-amp) circuit. BSA has a simple structure that is effective, fast and capable of solving multimodal problems. DE is a stochastic, population-based heuristic approach, having the capability to solve global optimisation problems. In this paper, the transistors' sizes are optimised using the proposed BSA-DE to minimise the areas occupied by the circuits and to improve the performances of the circuits. The simulation results justify the superiority of BSA-DE in global convergence properties and fine tuning ability, and prove it to be a promising candidate for the optimal design of the analogue CMOS amplifier circuits. The simulation results obtained for both the amplifier circuits prove the effectiveness of the proposed BSA-DE-based approach over DE, harmony search (HS), artificial bee colony (ABC) and PSO in terms of convergence speed, design specifications and design parameters of the optimal design of the analogue CMOS amplifier circuits. It is shown that BSA-DE-based design technique for each amplifier circuit yields the least MOS transistor area, and each designed circuit is shown to have the best performance parameters such as gain, power dissipation, etc., as compared with those of other recently reported literature.

  15. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
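
    The sketch below illustrates the two stages in miniature on a made-up access log: build a sparse access correlation (co-access) matrix, then greedily place strongly co-accessed tiles on different nodes so they can be read in parallel. The paper's heuristic and its copy-storage strategy are more elaborate than this.

      # Stage 1: co-access counts from a hypothetical access log.
      from collections import defaultdict
      from itertools import combinations

      access_log = [                      # each entry: the image tiles touched by one request
          ["t1", "t2"], ["t1", "t2", "t3"], ["t2", "t3"], ["t4"], ["t1", "t4"],
      ]
      correlation = defaultdict(int)      # sparse access correlation matrix
      for request in access_log:
          for a, b in combinations(sorted(set(request)), 2):
              correlation[(a, b)] += 1

      # Stage 2: greedy placement -- for pairs in decreasing correlation order, put the
      # two tiles on different storage nodes, preferring the least-loaded node.
      nodes = {0: [], 1: []}
      placed = {}
      for (a, b), _ in sorted(correlation.items(), key=lambda kv: -kv[1]):
          for tile, partner in ((a, b), (b, a)):
              if tile in placed:
                  continue
              avoid = placed.get(partner)          # node already holding the partner, if any
              target = min(nodes, key=lambda n: (n == avoid, len(nodes[n])))
              nodes[target].append(tile)
              placed[tile] = target
      print(nodes)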

  16. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    PubMed

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-05-21

    This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance has been limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside the user's personal space by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.


  17. Towards multifocal ultrasonic neural stimulation: pattern generation algorithms

    NASA Astrophysics Data System (ADS)

    Hertzberg, Yoni; Naor, Omer; Volovick, Alexander; Shoham, Shy

    2010-10-01

    Focused ultrasound (FUS) waves directed onto neural structures have been shown to dynamically modulate neural activity and excitability, opening up a range of possible systems and applications where the non-invasiveness, safety, mm-range resolution and other characteristics of FUS are advantageous. As in other neuro-stimulation and modulation modalities, the highly distributed and parallel nature of neural systems and neural information processing call for the development of appropriately patterned stimulation strategies which could simultaneously address multiple sites in flexible patterns. Here, we study the generation of sparse multi-focal ultrasonic distributions using phase-only modulation in ultrasonic phased arrays. We analyse the relative performance of an existing algorithm for generating multifocal ultrasonic distributions and new algorithms that we adapt from the field of optical digital holography, and find that generally the weighted Gerchberg-Saxton algorithm leads to overall superior efficiency and uniformity in the focal spots, without significantly increasing the computational burden. By combining phased-array FUS and magnetic-resonance thermometry we experimentally demonstrate the simultaneous generation of tightly focused multifocal distributions in a tissue phantom, a first step towards patterned FUS neuro-modulation systems and devices.
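
    The sketch below shows a weighted Gerchberg-Saxton iteration for a 1D phased array driving two focal points, using a simple free-field propagation matrix with made-up element positions, wavelength, and targets; MR-thermometry feedback and the authors' specific array geometry are not modeled.

      # Weighted Gerchberg-Saxton phase retrieval for a phase-only multifocal drive.
      import numpy as np

      k = 2 * np.pi / 1.5e-3                              # wavenumber for a 1.5 mm wavelength
      elems = np.stack([np.linspace(-0.02, 0.02, 64), np.zeros(64)], axis=1)
      foci = np.array([[-0.005, 0.05], [0.005, 0.05]])    # two target focal points (m)

      r = np.linalg.norm(foci[:, None, :] - elems[None, :, :], axis=2)
      A = np.exp(1j * k * r) / r                          # propagation matrix (foci x elements)

      weights = np.ones(len(foci))
      phases = np.zeros(elems.shape[0])
      for _ in range(50):
          field = A @ np.exp(1j * phases)                 # forward propagate the phase-only drive
          amps = np.abs(field)
          weights *= amps.mean() / amps                   # boost under-performing foci
          target = weights * np.exp(1j * np.angle(field)) # keep phase, impose weighted amplitude
          phases = np.angle(A.conj().T @ target)          # back-propagate and keep phase only
      print(np.abs(A @ np.exp(1j * phases)))              # focal amplitudes should be nearly equal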

  18. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
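
    The sketch below shows only the transform stage in miniature: a blockwise DCT followed by discarding roughly two thirds of the AC coefficients. The look-up table of probability data, delta coding of DC components, arithmetic coding, and the concurrent binary search decoder are not reproduced.

      # Blockwise DCT with retention of roughly one third of the coefficients.
      import numpy as np
      from scipy.fft import dctn, idctn

      def compress_block(block, keep_ratio=1 / 3):
          coeff = dctn(block, norm="ortho")
          order = np.argsort(np.abs(coeff), axis=None)[::-1]     # flat indices by decreasing magnitude
          keep = order[: int(keep_ratio * coeff.size)]
          mask = np.zeros(coeff.shape, dtype=bool)
          mask.flat[keep] = True
          mask.flat[0] = True                                    # always keep the DC component
          return np.where(mask, coeff, 0.0)                      # high frequencies are zeroed

      def reconstruct(coeff):
          return idctn(coeff, norm="ortho")

      rng = np.random.default_rng(0)
      image = rng.random((8, 8))
      approx = reconstruct(compress_block(image))
      print(np.sqrt(np.mean((image - approx) ** 2)))             # RMSE of the 8x8 block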

  19. Pattern identification in time-course gene expression data with the CoGAPS matrix factorization.

    PubMed

    Fertig, Elana J; Stein-O'Brien, Genevieve; Jaffe, Andrew; Colantuoni, Carlo

    2014-01-01

    Patterns in time-course gene expression data can represent the biological processes that are active over the measured time period. However, the orthogonality constraint in standard pattern-finding algorithms, including notably principal components analysis (PCA), confounds expression changes resulting from simultaneous, non-orthogonal biological processes. Previously, we have shown that Markov chain Monte Carlo nonnegative matrix factorization algorithms are particularly adept at distinguishing such concurrent patterns. One such matrix factorization is implemented in the software package CoGAPS. We describe the application of this software and several technical considerations for identification of age-related patterns in a public, prefrontal cortex gene expression dataset.
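
    The sketch below illustrates the idea of recovering non-orthogonal temporal patterns from a gene-by-time matrix with nonnegative matrix factorization on synthetic data; CoGAPS uses a Bayesian Markov chain Monte Carlo factorization, and scikit-learn's NMF is only a convenient stand-in here.

      # Nonnegative matrix factorization of a synthetic gene-by-time expression matrix.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      timepoints = np.linspace(0, 1, 20)
      pattern_rise = timepoints                        # two overlapping, non-orthogonal "processes"
      pattern_pulse = np.exp(-((timepoints - 0.4) ** 2) / 0.02)
      amplitudes = rng.random((200, 2))                # 200 genes, 2 latent patterns
      X = amplitudes @ np.vstack([pattern_rise, pattern_pulse]) + 0.01 * rng.random((200, 20))

      model = NMF(n_components=2, init="nndsvd", max_iter=1000)
      A = model.fit_transform(X)                       # gene amplitudes
      P = model.components_                            # recovered temporal patterns
      print(A.shape, P.shape)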

  20. Validation of MIMGO: a method to identify differentially expressed GO terms in a microarray dataset

    PubMed Central

    2012-01-01

    Background We previously proposed an algorithm for the identification of GO terms that commonly annotate genes whose expression is upregulated or downregulated in some microarray data compared with in other microarray data. We call these “differentially expressed GO terms” and have named the algorithm “matrix-assisted identification method of differentially expressed GO terms” (MIMGO). MIMGO can also identify microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. However, MIMGO has not yet been validated on a real microarray dataset using all available GO terms. Findings We combined Gene Set Enrichment Analysis (GSEA) with MIMGO to identify differentially expressed GO terms in a yeast cell cycle microarray dataset. GSEA followed by MIMGO (GSEA + MIMGO) correctly identified (p < 0.05) microarray data in which genes annotated to differentially expressed GO terms are upregulated. We found that GSEA + MIMGO was slightly less effective than, or comparable to, GSEA (Pearson), a method that uses Pearson’s correlation as a metric, at detecting true differentially expressed GO terms. However, unlike other methods including GSEA (Pearson), GSEA + MIMGO can comprehensively identify the microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. Conclusions MIMGO is a reliable method to identify differentially expressed GO terms comprehensively. PMID:23232071

  1. Configurable pattern-based evolutionary biclustering of gene expression data

    PubMed Central

    2013-01-01

    Background Biclustering algorithms for microarray data aim at discovering functionally related gene sets under different subsets of experimental conditions. Due to the problem complexity and the characteristics of microarray datasets, heuristic searches are usually used instead of exhaustive algorithms. Also, the comparison among different techniques is still a challenge. The obtained results vary in relevant features such as the number of genes or conditions, which makes it difficult to carry out a fair comparison. Moreover, existing approaches do not allow the user to specify any preferences on these properties. Results Here, we present the first biclustering algorithm in which several bicluster features can be particularized in terms of different objectives. This can be done by tuning the specified features in the algorithm or by incorporating new objectives into the search. Furthermore, our approach bases bicluster evaluation on the use of expression patterns and is able to recognize both shifting and scaling patterns, either simultaneously or not. Evolutionary computation has been chosen as the search strategy, and our proposal is therefore named Evo-Bexpa (Evolutionary Biclustering based in Expression Patterns). Conclusions We have conducted experiments on both synthetic and real datasets demonstrating Evo-Bexpa's ability to obtain meaningful biclusters. Synthetic experiments have been designed in order to compare Evo-Bexpa's performance with other approaches when looking for perfect patterns. Experiments with four different real datasets also confirm the proper performance of our algorithm, whose results have been biologically validated through Gene Ontology. PMID:23433178

  2. Clustering Of Left Ventricular Wall Motion Patterns

    NASA Astrophysics Data System (ADS)

    Bjelogrlic, Z.; Jakopin, J.; Gyergyek, L.

    1982-11-01

    A method for the detection of wall regions with similar motion is presented. A model based on local direction information was used to measure the left ventricular wall motion from a cineangiographic sequence. Three time functions were used to define segmental motion patterns: the distance of a ventricular contour segment from the mean contour, the velocity of a segment, and its acceleration. Motion patterns were clustered by the UPGMA algorithm and by an algorithm based on the K-nearest-neighbor classification rule.
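
    The sketch below clusters synthetic segmental motion features with average-linkage hierarchical clustering (UPGMA), assuming each wall segment is summarized by a small feature vector of distance, velocity, and acceleration values; the contour extraction itself is not modeled.

      # UPGMA (average-linkage) clustering of synthetic segmental motion features.
      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage

      rng = np.random.default_rng(0)
      normal = rng.normal(0.0, 0.1, (10, 3))           # 10 normally moving segments
      hypokinetic = rng.normal(-1.0, 0.1, (5, 3))      # 5 segments with reduced motion
      segments = np.vstack([normal, hypokinetic])      # rows: segments; columns: distance, velocity, acceleration

      Z = linkage(segments, method="average")          # "average" linkage corresponds to UPGMA
      labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into 2 clusters
      print(labels)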

  3. Dynamics of γ-tubulin cytoskeleton in HL-60 leukemia cells undergoing differentiation and apoptosis by all-trans retinoic acid.

    PubMed

    Shariftabrizi, Ahmad; Ahmadian, Shahin; Pazhang, Yaghub

    2012-02-01

    Microtubules are important components of the cell cytoskeleton, participating in protein localization and cell signaling. The capacity of leukemia cells to re-organize their microtubules is considered an integral part of differentiation in these cells in order to become mature granulocytes through treatment with all-trans retinoic acid (ATRA), an established drug for treating acute promyelocytic leukemia. In this study we examined γ-, α- and acetylated-α-tubulin content, their patterns of distribution in the cytoplasm, and the potency of centrosomes in re-organizing microtubules in different stages of ATRA-induced differentiation and apoptosis of the HL-60 cell line. The γ-tubulin content was dramatically increased following differentiation of HL-60 cells, and was then decreased after apoptosis. We also found that γ-tubulin had a diffuse, cytoplasmic pattern following apoptosis compared to the focal, centrosomal accumulation of γ-tubulin in differentiated cells. Differentiated cells had the ability to re-organize their microtubule network following nocodazole challenge testing, whereas undifferentiated cells did not show a similar ability. α-tubulin was more regularly organized in differentiated cells, and did not reveal any specific pattern of polymerization in apoptotic cells. Acetylated-α-tubulin generally followed the same organization patterns after differentiation, as that which occurred for α-tubulin. Our data is suggestive of a centrosomal and organized nucleation pattern of microtubules in HL-60 cells following differentiation, possibly mediated through up-regulation of γ-tubulin.

  4. You're Just Like Your Dad: Intergenerational Patterns of Differential Treatment of Siblings.

    PubMed

    Jensen, Alexander C; Whiteman, Shawn D; Rand, Joseph S; Fingerman, Karen L

    2017-10-01

    Past work highlights that parents' differential treatment has implications for offspring's mental and relational health across the life course. Although the current body of literature has examined offspring- and parent-level correlates of differential treatment, research has yet to consider whether and how patterns of differential treatment are transmitted across generations. As part of a two-wave longitudinal study of 157 families, both grandparents (M age = 76.50 years, SD = 6.20) and parents (M age = 51.10 years, SD = 4.41) reported on differential treatment of their own offspring at both phases. A series of residualized change models revealed support for both continuity and compensation hypotheses. Middle-aged parents tended to model the patterns of differential treatment exhibited by their fathers, but middle-aged men who experienced more differential treatment from their own parents in recent years tended to subsequently exhibit lower levels of differential treatment to their offspring. These findings suggest that patterns of differential treatment both continue and diverge across generations, and those patterns vary by gender. On a broader level, these results also suggest that siblings not only impact one another's development, but in adulthood, they may indirectly influence their nieces' and nephews' development by virtue of their influence on their siblings' parenting.

  5. An Algorithm for Creating Virtual Controls Using Integrated and Harmonized Longitudinal Data.

    PubMed

    Hansen, William B; Chen, Shyh-Huei; Saldana, Santiago; Ip, Edward H

    2018-06-01

    We introduce a strategy for creating virtual control groups-cases generated through computer algorithms that, when aggregated, may serve as experimental comparators where live controls are difficult to recruit, such as when programs are widely disseminated and randomization is not feasible. We integrated and harmonized data from eight archived longitudinal adolescent-focused data sets spanning the decades from 1980 to 2010. Collectively, these studies examined numerous psychosocial variables and assessed past 30-day alcohol, cigarette, and marijuana use. Additional treatment and control group data from two archived randomized control trials were used to test the virtual control algorithm. Both randomized controlled trials (RCTs) assessed intentions, normative beliefs, and values as well as past 30-day alcohol, cigarette, and marijuana use. We developed an algorithm that used percentile scores from the integrated data set to create age- and gender-specific latent psychosocial scores. The algorithm matched treatment case observed psychosocial scores at pretest to create a virtual control case that figuratively "matured" based on age-related changes, holding the virtual case's percentile constant. Virtual controls matched treatment case occurrence, eliminating differential attrition as a threat to validity. Virtual case substance use was estimated from the virtual case's latent psychosocial score using logistic regression coefficients derived from analyzing the treatment group. Averaging across virtual cases created group estimates of prevalence. Two criteria were established to evaluate the adequacy of virtual control cases: (1) virtual control group pretest drug prevalence rates should match those of the treatment group and (2) virtual control group patterns of drug prevalence over time should match live controls. The algorithm successfully matched pretest prevalence for both RCTs. Increases in prevalence were observed, although there were discrepancies between live and virtual control outcomes. This study provides an initial framework for creating virtual controls using a step-by-step procedure that can now be revised and validated using other prevention trial data.
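
    The following sketch illustrates the percentile-matching idea described above (pretest score mapped to a percentile of an age- and gender-specific reference distribution, "matured" by holding the percentile constant, then converted to a use probability with logistic coefficients). It is not the authors' algorithm; the reference distributions, helper names and coefficients b0/b1 are all hypothetical.

      import numpy as np

      # Hypothetical reference distributions of the latent psychosocial score,
      # pooled from an integrated data set, keyed by (age, gender).
      reference = {(13, "F"): np.random.default_rng(1).normal(0.0, 1.0, 5000),
                   (14, "F"): np.random.default_rng(2).normal(0.3, 1.0, 5000)}

      def virtual_control_score(pretest_score, age, gender, followup_age):
          """Hold the case's percentile constant while 'maturing' it by age."""
          ref_now = np.sort(reference[(age, gender)])
          pct = np.searchsorted(ref_now, pretest_score) / len(ref_now)
          ref_later = np.sort(reference[(followup_age, gender)])
          return ref_later[int(pct * (len(ref_later) - 1))]

      def use_probability(score, b0=-2.0, b1=0.8):
          """Logistic model; b0/b1 stand in for coefficients fit on the treatment group."""
          return 1.0 / (1.0 + np.exp(-(b0 + b1 * score)))

      virtual = virtual_control_score(pretest_score=0.4, age=13, gender="F", followup_age=14)
      print(use_probability(virtual))   # averaging over cases yields a group prevalence estimate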

  6. Seasonal and Inter-Annual Patterns of Phytoplankton Community Structure in Monterey Bay, CA Derived from AVIRIS Data During the 2013-2015 HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.

    2015-12-01

    There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, to track energy flow through ecosystems, and to identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species, evaluating iron stress of phytoplankton, and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. As a consequence, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. However, the coastal marine environment has special atmospheric correction needs due to error that may be introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals for use in estimating chlorophyll (OC3 algorithm) and phytoplankton functional type (PHYDOTax algorithm) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons - upwelling and the warm, stratified oceanic period for 2013 and 2014. These two periods are dominated by either diatom blooms (occasionally toxic) or red tides. Results presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during these two seasons.

  7. Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation

    NASA Technical Reports Server (NTRS)

    Mook, D. J.; Lew, Jiann-Shiun

    1991-01-01

    Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
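
    As a minimal illustration of multiple shooting (for the smooth case only; the jump conditions treated in the paper would enter as extra algebraic equations at the segment interfaces), the sketch below solves the linear two-point boundary value problem y'' = y, y(0) = 0, y(1) = 1 with two shooting segments. The problem and segment layout are chosen for illustration and are not from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import fsolve

      # Two-point BVP: y'' = y, y(0) = 0, y(1) = 1, written as a first-order system.
      def rhs(t, z):
          y, yp = z
          return [yp, y]

      t_nodes = [0.0, 0.5, 1.0]          # two shooting segments

      def residuals(p):
          # Unknowns: y'(0) and the full state at the interior node t = 0.5.
          s0, a, b = p
          sol1 = solve_ivp(rhs, (t_nodes[0], t_nodes[1]), [0.0, s0], rtol=1e-10)
          sol2 = solve_ivp(rhs, (t_nodes[1], t_nodes[2]), [a, b], rtol=1e-10)
          y1_end = sol1.y[:, -1]
          return [y1_end[0] - a,          # continuity of y at t = 0.5
                  y1_end[1] - b,          # continuity of y' at t = 0.5
                  sol2.y[0, -1] - 1.0]    # boundary condition y(1) = 1

      p0 = np.array([1.0, 0.5, 1.0])
      s0, a, b = fsolve(residuals, p0)
      print(s0, 1.0 / np.sinh(1.0))       # exact solution y = sinh(t)/sinh(1) gives y'(0) = 1/sinh(1)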

  8. A hybrid optimization algorithm to explore atomic configurations of TiO2 nanoparticles

    DOE PAGES

    Inclan, Eric J.; Geohegan, David B.; Yoon, Mina

    2017-10-17

    In this paper we present a hybrid algorithm, composed of differential evolution coupled with the Broyden–Fletcher–Goldfarb–Shanno quasi-Newton optimization algorithm, for the purpose of identifying a broad range of (meta)stable TinO2n nanoparticles, as an example system, described by a Buckingham interatomic potential. The potential and its gradient are modified to be piece-wise continuous to enable use of these continuous-domain, unconstrained algorithms, thereby improving compatibility. To measure computational effectiveness, a regression on known structures is used. This approach defines effectiveness as the ability of an algorithm to produce a set of structures whose energy distribution follows the regression as the number of TiO2 units increases, such that the shape of the distribution is consistent with the algorithm's stated goals. Our calculation demonstrates that the hybrid algorithm finds global minimum configurations more effectively than the differential evolution algorithms widely employed in the field of materials science. Specifically, the hybrid algorithm is shown to reproduce the global minimum energy structures reported in the literature up to n = 5, and retains good agreement with the regression up to n = 25. For 25 < n < 100, where literature structures are unavailable, the hybrid algorithm effectively obtains structures with lower energies per TiO2 unit as the system size increases.
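
    The global-plus-local hybrid idea can be illustrated with SciPy, whose differential_evolution routine polishes the best population member with L-BFGS-B (a quasi-Newton method closely related to BFGS). This is not the authors' code: the Rastrigin test function below merely stands in for the piecewise-continuous Buckingham potential energy surface.

      import numpy as np
      from scipy.optimize import differential_evolution

      def rastrigin(x):
          """Multimodal test function standing in for the nanoparticle energy surface."""
          x = np.asarray(x)
          return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

      bounds = [(-5.12, 5.12)] * 6
      result = differential_evolution(
          rastrigin,
          bounds,
          strategy="best1bin",
          maxiter=300,
          tol=1e-8,
          polish=True,      # refine the best member with L-BFGS-B, a quasi-Newton step
          seed=1,
      )
      print(result.x, result.fun)   # should approach the global minimum at the origin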

  9. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    PubMed Central

    Gan, Wensheng; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to address the limitations of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle a static database; few handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinational explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038
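
    For readers unfamiliar with the utility measure, the sketch below shows how itemset utility (quantity times unit profit, summed over supporting transactions) is computed and compared against a minimum-utility threshold. It does not reproduce the incremental utility-list machinery of the paper; the transactions, profit table and threshold are invented.

      # Each transaction maps item -> purchased quantity; profit maps item -> unit profit.
      transactions = [
          {"a": 2, "b": 1},
          {"a": 1, "c": 3},
          {"a": 2, "b": 2, "c": 1},
      ]
      profit = {"a": 4, "b": 10, "c": 1}

      def itemset_utility(itemset, transactions, profit):
          """Sum of quantity * unit-profit over all transactions containing the itemset."""
          total = 0
          for t in transactions:
              if all(item in t for item in itemset):
                  total += sum(t[item] * profit[item] for item in itemset)
          return total

      min_util = 25
      for candidate in [("a",), ("a", "b"), ("a", "c")]:
          u = itemset_utility(candidate, transactions, profit)
          print(candidate, u, "high-utility" if u >= min_util else "low-utility")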

  10. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    PubMed

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method by merging existed methods. Firstly, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expressed method, and the pathway network approach), and differential pathways were evaluated through setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and called the method the combined method. Finally, common differential pathways across two or more of five methods were selected. RESULTS Pathways obtained from different methods were also different. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. Besides, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method by combining four existing methods based on a rank product algorithm, and identified 13 significant differential pathways based on it. These differential pathways might provide insight into treatment and diagnosis of hippocampus AD.
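
    The rank-combination step can be illustrated with a plain rank product (geometric mean of per-method ranks); the pathway names and ranks below are hypothetical and the actual pipeline (DAVID, neaGUI, co-expression and network methods) is not reproduced.

      import numpy as np

      # Hypothetical ranks of five pathways under three methods (1 = most differential).
      pathways = ["cell cycle", "immune system", "metabolism", "apoptosis", "adhesion"]
      ranks = np.array([
          [1, 2, 5, 4, 3],   # method A
          [2, 1, 4, 5, 3],   # method B
          [1, 3, 4, 2, 5],   # method C
      ], dtype=float)

      # Rank product: geometric mean across methods; smaller = more consistently top-ranked.
      rank_product = ranks.prod(axis=0) ** (1.0 / ranks.shape[0])
      for i in np.argsort(rank_product):
          print(f"{pathways[i]:15s} rank product = {rank_product[i]:.2f}")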

  11. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.

  12. Research on the Filtering Algorithm in Speed and Position Detection of Maglev Trains

    PubMed Central

    Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song

    2011-01-01

    This paper briefly introduces the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and a track cable is described. A speed and position detection system, installed on board and used as the feedback end, is recommended. Restricted by the maglev train's structure, the permanent magnet EDS train uses a non-contact method to detect its position. Because of vibration and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete track-differentiator filtering algorithm is proposed. The filtering characteristics of the track-differentiator (TD) and of a TD group are analyzed, and four TDs in series are used in the signal processing unit. The results show that the track-differentiator filters the signal effectively and allows the traction system to run normally. PMID:22164012
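
    The abstract does not give the filter equations; the sketch below is a generic linear discrete tracking-differentiator of the kind used in this literature, not the paper's exact design. State x1 tracks the noisy position signal and x2 provides a filtered derivative; the gain r and step h are assumed tuning parameters.

      import numpy as np

      def linear_td(signal, h=0.001, r=200.0):
          """Generic linear discrete tracking-differentiator (illustrative form).
          x1 tracks the input, x2 estimates its derivative; r trades speed for smoothing."""
          x1, x2 = signal[0], 0.0
          tracked, derivative = [], []
          for v in signal:
              u = -r * r * (x1 - v) - 2.0 * r * x2   # critically damped second-order law
              x1 += h * x2
              x2 += h * u
              tracked.append(x1)
              derivative.append(x2)
          return np.array(tracked), np.array(derivative)

      t = np.arange(0.0, 1.0, 0.001)
      noisy_position = t + 0.01 * np.random.default_rng(0).normal(size=t.size)
      pos_f, vel_f = linear_td(noisy_position)
      print(vel_f[-1])   # close to the true speed of 1.0 despite the noise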

  13. Research on the filtering algorithm in speed and position detection of maglev trains.

    PubMed

    Dai, Chunhui; Long, Zhiqiang; Xie, Yunde; Xue, Song

    2011-01-01

    This paper briefly introduces the traction system of a permanent magnet electrodynamic suspension (EDS) train. The synchronous traction mode based on long stators and a track cable is described. A speed and position detection system, installed on board and used as the feedback end, is recommended. Restricted by the maglev train's structure, the permanent magnet EDS train uses a non-contact method to detect its position. Because of vibration and the track joints, the position signal sent by the position sensor is always aberrant and noisy. To solve this problem, a linear discrete track-differentiator filtering algorithm is proposed. The filtering characteristics of the track-differentiator (TD) and of a TD group are analyzed, and four TDs in series are used in the signal processing unit. The results show that the track-differentiator filters the signal effectively and allows the traction system to run normally.

  14. Conceptual Comparison of Population Based Metaheuristics for Engineering Problems

    PubMed Central

    Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265

  15. Conceptual comparison of population based metaheuristics for engineering problems.

    PubMed

    Adekanmbi, Oluwole; Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.
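
    The DE/rand/1/bin strategy that GDE3 builds on can be summarized in a few lines; the sketch below is a plain single-objective version without GDE3's constraint handling or multi-objective selection, and the sphere test function is only an example.

      import numpy as np

      def de_rand_1_bin(fobj, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
          """Plain DE/rand/1/bin (single-objective); GDE3 extends the selection step."""
          rng = np.random.default_rng(seed)
          lb, ub = np.array(bounds).T
          dim = len(bounds)
          pop = lb + rng.random((pop_size, dim)) * (ub - lb)
          fitness = np.array([fobj(x) for x in pop])
          for _ in range(generations):
              for i in range(pop_size):
                  r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                  mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lb, ub)   # rand/1 mutation
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True                               # binomial crossover
                  trial = np.where(cross, mutant, pop[i])
                  f_trial = fobj(trial)
                  if f_trial <= fitness[i]:                                     # greedy selection
                      pop[i], fitness[i] = trial, f_trial
          return pop[np.argmin(fitness)], fitness.min()

      sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
      best_x, best_f = de_rand_1_bin(sphere, [(-5.0, 5.0)] * 5)
      print(best_x, best_f)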

  16. A Differential Evolution Algorithm Based on Nikaido-Isoda Function for Solving Nash Equilibrium in Nonlinear Continuous Games

    PubMed Central

    He, Feng; Zhang, Wei; Zhang, Guoqiang

    2016-01-01

    A differential evolution algorithm for solving Nash equilibria in nonlinear continuous games, called NIDE (Nikaido-Isoda differential evolution), is presented in this paper. At each generation, parent and child strategy profiles are compared pairwise, one by one, adopting the Nikaido-Isoda function as the fitness function. In practice, the NE of a nonlinear game model with a cubic cost function and a quadratic demand function is solved, and the method can also be applied to non-concave payoff functions. Moreover, NIDE is compared with the existing Nash Domination Evolutionary Multiplayer Optimization (NDEMO); the results show that NIDE was significantly better than NDEMO, requiring fewer iterations and shorter running times. These numerical examples suggest that the NIDE method is potentially useful. PMID:27589229
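
    The Nikaido-Isoda function psi(x, y) = sum_i [u_i(y_i, x_-i) - u_i(x)] measures the total gain players could obtain by deviating unilaterally from profile x to y; its maximum over y is zero exactly at a Nash equilibrium. The two-player quadratic game below is purely illustrative and is not the paper's cost/demand model; the evolutionary loop that would use this value as a fitness is omitted.

      import numpy as np
      from scipy.optimize import minimize

      # Illustrative two-player game with concave quadratic payoffs (not the paper's model).
      def u1(a1, a2): return -(a1 - 1.0) ** 2 + 0.1 * a1 * a2
      def u2(a1, a2): return -(a2 - 2.0) ** 2 + 0.1 * a1 * a2

      def nikaido_isoda(x, y):
          """psi(x, y) = sum_i [u_i(y_i, x_-i) - u_i(x_i, x_-i)]."""
          return (u1(y[0], x[1]) - u1(x[0], x[1])) + (u2(x[0], y[1]) - u2(x[0], x[1]))

      def max_gain(x):
          """max_y psi(x, y); zero (up to tolerance) exactly at a Nash equilibrium."""
          res = minimize(lambda y: -nikaido_isoda(x, y), x0=np.array(x))
          return -res.fun

      a1 = 1.1 / 0.9975            # analytical Nash equilibrium of this toy game
      a2 = 2.0 + 0.05 * a1
      print(max_gain([0.0, 0.0]))  # positive: both players can still improve
      print(max_gain([a1, a2]))    # ~0 at the equilibrium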

  17. Solving differential equations for Feynman integrals by expansions near singular points

    NASA Astrophysics Data System (ADS)

    Lee, Roman N.; Smirnov, Alexander V.; Smirnov, Vladimir A.

    2018-03-01

    We describe a strategy to solve differential equations for Feynman integrals by power series expansions near singular points and to obtain high-precision results for the corresponding master integrals. We consider Feynman integrals with two scales, i.e. non-trivially depending on one variable. The corresponding algorithm is oriented toward situations where a canonical form of the differential equations is impossible. We provide a computer code constructed with the help of our algorithm for a simple example of four-loop generalized sunset integrals with three equal non-zero masses and two zero masses. Our code gives values of the master integrals at any given point on the real axis with a required accuracy and a given order of expansion in the regularization parameter ɛ.

  18. Spatial-temporal travel pattern mining using massive taxi trajectory data

    NASA Astrophysics Data System (ADS)

    Zheng, Linjiang; Xia, Dong; Zhao, Xin; Tan, Longyou; Li, Hang; Chen, Li; Liu, Weining

    2018-07-01

    Deep understanding of residents' travel patterns would provide helpful insights into the mechanisms of many socioeconomic phenomena. With the rapid development of location-aware computing technologies, researchers have easy access to large quantities of travel data. As an important data source, taxi trajectory data are featured by their high quality, good continuity and wide distribution, making it suitable for travel pattern mining. In this paper, we use taxi trajectory data to study spatial-temporal characterization of urban residents' travel patterns from two aspects: attractive areas and hot paths. Firstly, a framework of trajectory preprocessing, including data cleaning and extracting the taxi passenger pick-up/drop-off points, is presented to reduce the noise and redundancy in raw trajectory data. Then, a grid density based clustering algorithm is proposed to discover travel attractive areas in different periods of a day. On this basis, we put forward a spatial-temporal trajectory clustering method to discover hot paths among travel attractive areas. Compared with previous algorithms, which only consider the spatial constraint between trajectories, temporal constraint is also considered in our method. Through the experiments, we discuss how to determine the optimal parameters of the two clustering algorithms and verify the effectiveness of the algorithms using real data. Furthermore, we analyze spatial-temporal characterization of Chongqing residents' travel pattern.
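
    The grid-density clustering idea (bin pick-up points into cells, keep cells whose counts exceed a threshold, merge adjacent dense cells into "travel attractive areas") can be sketched as below. This is an editorial toy version, not the paper's algorithm; the cell size, threshold and synthetic pick-up coordinates are assumptions.

      import numpy as np
      from collections import deque

      def grid_density_clusters(points, cell=0.01, min_count=20):
          """Bin points into a grid, keep dense cells, and merge 4-connected dense
          cells into clusters. Toy version of a grid-density clustering scheme."""
          counts = {}
          for x, y in points:
              key = (int(x // cell), int(y // cell))
              counts[key] = counts.get(key, 0) + 1
          dense = {c for c, n in counts.items() if n >= min_count}
          clusters, seen = [], set()
          for start in dense:
              if start in seen:
                  continue
              comp, queue = [], deque([start])
              seen.add(start)
              while queue:
                  cx, cy = queue.popleft()
                  comp.append((cx, cy))
                  for nb in [(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)]:
                      if nb in dense and nb not in seen:
                          seen.add(nb)
                          queue.append(nb)
              clusters.append(comp)
          return clusters

      rng = np.random.default_rng(0)
      pickups = np.vstack([rng.normal([106.55, 29.56], 0.005, (500, 2)),      # one busy area
                           rng.uniform([106.4, 29.4], [106.7, 29.7], (300, 2))])  # background
      print(len(grid_density_clusters(pickups)))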

  19. Articular dysfunction patterns in patients with mechanical low back pain: A clinical algorithm to guide specific mobilization and manipulation techniques.

    PubMed

    Dewitte, V; Cagnie, B; Barbe, T; Beernaert, A; Vanthillo, B; Danneels, L

    2015-06-01

    Recent systematic reviews have demonstrated reasonable evidence that lumbar mobilization and manipulation techniques are beneficial. However, knowledge on optimal techniques and doses, and its clinical reasoning is currently lacking. To address this, a clinical algorithm is presented so as to guide therapists in their clinical reasoning to identify patients who are likely to respond to lumbar mobilization and/or manipulation and to direct appropriate technique selection. Key features in subjective and clinical examination suggestive of mechanical nociceptive pain probably arising from articular structures, can categorize patients into distinct articular dysfunction patterns. Based on these patterns, specific mobilization and manipulation techniques are suggested. This clinical algorithm is merely based on empirical clinical expertise and complemented through knowledge exchange between international colleagues. The added value of the proposed articular dysfunction patterns should be considered within a broader perspective.

  20. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  1. Automated Processing of 2-D Gel Electrophoretograms of Genomic DNA for Hunting Pathogenic DNA Molecular Changes.

    PubMed

    Takahashi; Nakazawa; Watanabe; Konagaya

    1999-01-01

    We have developed the automated processing algorithms for 2-dimensional (2-D) electrophoretograms of genomic DNA based on RLGS (Restriction Landmark Genomic Scanning) method, which scans the restriction enzyme recognition sites as the landmark and maps them onto a 2-D electrophoresis gel. Our powerful processing algorithms realize the automated spot recognition from RLGS electrophoretograms and the automated comparison of a huge number of such images. In the final stage of the automated processing, a master spot pattern, on which all the spots in the RLGS images are mapped at once, can be obtained. The spot pattern variations which seemed to be specific to the pathogenic DNA molecular changes can be easily detected by simply looking over the master spot pattern. When we applied our algorithms to the analysis of 33 RLGS images derived from human colon tissues, we successfully detected several colon tumor specific spot pattern changes.

  2. A Differential Evolution Based Approach to Estimate the Shape and Size of Complex Shaped Anomalies Using EIT Measurements

    NASA Astrophysics Data System (ADS)

    Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn

    EIT image reconstruction is an ill-posed problem, the spatial resolution of the estimated conductivity distribution is usually poor and the external voltage measurements are subject to variable noise. Therefore, EIT conductivity estimation cannot be used in the raw form to correctly estimate the shape and size of complex shaped regional anomalies. An efficient algorithm employing a shape based estimation scheme is needed. The performance of traditional inverse algorithms, such as the Newton Raphson method, used for this purpose is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of differential evolution (DE) algorithm to estimate complex shaped region boundaries, expressed as coefficients of truncated Fourier series, using EIT. DE is a simple yet powerful population-based, heuristic algorithm with the desired features to solve global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with that of the traditional modified Newton Raphson (mNR) method.

  3. Auxin Influx Carriers Control Vascular Patterning and Xylem Differentiation in Arabidopsis thaliana

    PubMed Central

    Siligato, Riccardo; Alonso, Jose M.; Swarup, Ranjan; Bennett, Malcolm J.; Mähönen, Ari Pekka; Caño-Delgado, Ana I.; Ibañes, Marta

    2015-01-01

    Auxin is an essential hormone for plant growth and development. Auxin influx carriers AUX1/LAX transport auxin into the cell, while auxin efflux carriers PIN pump it out of the cell. It is well established that efflux carriers play an important role in the shoot vascular patterning, yet the contribution of influx carriers to the shoot vasculature remains unknown. Here, we combined theoretical and experimental approaches to decipher the role of auxin influx carriers in the patterning and differentiation of vascular tissues in the Arabidopsis inflorescence stem. Our theoretical analysis predicts that influx carriers facilitate periodic patterning and modulate the periodicity of auxin maxima. In agreement, we observed fewer and more spaced vascular bundles in quadruple mutants plants of the auxin influx carriers aux1lax1lax2lax3. Furthermore, we show AUX1/LAX carriers promote xylem differentiation in both the shoot and the root tissues. Influx carriers increase cytoplasmic auxin signaling, and thereby differentiation. In addition to this cytoplasmic role of auxin, our computational simulations propose a role for extracellular auxin as an inhibitor of xylem differentiation. Altogether, our study shows that auxin influx carriers AUX1/LAX regulate vascular patterning and differentiation in plants. PMID:25922946

  4. Simultaneous Classification of Oranges and Apples Using Grover's and Ventura's Algorithms in a Two-Qubit System

    NASA Astrophysics Data System (ADS)

    Singh, Manu Pratap; Radhey, Kishori; Kumar, Sandeep

    2017-08-01

    In the present paper, simultaneous classification of Orange and Apple is carried out using both Grover's iterative algorithm (Grover 1996) and Ventura's model (Ventura and Martinez, Inf. Sci. 124, 273-296, 2000), taking different superpositions as start states: a two-pattern start state containing both Orange and Apple, a one-pattern start state containing Apple as the search state, and another one-pattern start state containing Orange as the search state. It is shown that the exclusion superposition is the most suitable two-pattern search state for simultaneous classification of the patterns associated with Apples and Oranges, and that the phase-invariant superpositions are the best choice as the respective search states based on one-pattern start states, in both Grover's and Ventura's methods of pattern classification.
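
    For orientation, the sketch below runs a plain two-qubit Grover iteration in NumPy (not Ventura's model and not the paper's start states): with N = 4 basis states, a single oracle-plus-diffusion step amplifies the marked pattern to probability one, which is the amplification mechanism the classification schemes above build on. The marked index is arbitrary.

      import numpy as np

      N = 4                                    # two qubits -> four basis states
      marked = 2                               # index of the pattern to retrieve (say, "Apple")

      s = np.full(N, 1.0 / np.sqrt(N))         # uniform superposition start state
      oracle = np.eye(N)
      oracle[marked, marked] = -1.0            # phase-flip the marked basis state
      diffusion = 2.0 * np.outer(s, s) - np.eye(N)   # inversion about the mean

      state = diffusion @ (oracle @ s)         # one Grover iteration
      print(np.abs(state) ** 2)                # probability 1.0 on the marked state for N = 4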

  5. A comparative analysis of DBSCAN, K-means, and quadratic variation algorithms for automatic identification of swallows from swallowing accelerometry signals.

    PubMed

    Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin

    2015-04-01

    Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects that were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner.
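
    To make the DBSCAN segmentation idea concrete, the sketch below clusters the time stamps of high-energy samples in a synthetic vibration-like signal so that each cluster corresponds to one "swallow" segment. The signal, the energy threshold and the eps/min_samples values are invented and are not the authors' feature set or parameters.

      import numpy as np
      from sklearn.cluster import DBSCAN

      # Toy cervical-auscultation-like signal: quiet baseline with two high-energy bursts.
      rng = np.random.default_rng(0)
      t = np.arange(0, 30, 0.01)                       # 30 s at 100 Hz
      energy = rng.normal(0.0, 0.05, t.size)
      energy[500:700] += 1.0                           # "swallow" around 5-7 s
      energy[2000:2150] += 1.0                         # "swallow" around 20-21.5 s

      # Cluster only the high-energy samples in time; each cluster is one swallow segment.
      active = t[np.abs(energy) > 0.5].reshape(-1, 1)
      labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(active)
      for k in set(labels) - {-1}:
          seg = active[labels == k].ravel()
          print(f"swallow {k}: {seg.min():.2f}-{seg.max():.2f} s")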

  6. A Comparative Analysis of DBSCAN, K-Means, and Quadratic Variation Algorithms for Automatic Identification of Swallows from Swallowing Accelerometry Signals

    PubMed Central

    Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L

    2015-01-01

    Background Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data was collected from 23 subjects that were actively suffering from swallowing difficulties. Results Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505

  7. Application of wildfire spread and behavior models to assess fire probability and severity in the Mediterranean region

    NASA Astrophysics Data System (ADS)

    Salis, Michele; Arca, Bachisio; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo; Santoni, Paul; Ager, Alan; Finney, Mark

    2010-05-01

    Characterizing the spatial pattern of large fire occurrence and severity is an important feature of fire management planning in the Mediterranean region. The spatial characterization of fire probabilities, fire behavior distributions and value changes are key components for quantitative risk assessment and for prioritizing fire suppression resources, fuel treatments and law enforcement. Because of the growing wildfire severity and frequency in recent years (e.g.: Portugal, 2003 and 2005; Italy and Greece, 2007 and 2009), there is an increasing demand for models and tools that can aid in wildfire prediction and prevention. Newer wildfire simulation systems offer promise in this regard, and allow for fine scale modeling of wildfire severity and probability. Several new applications have resulted from the development of a minimum travel time (MTT) fire spread algorithm (Finney, 2002), which models fire growth by searching for the minimum time for fire to travel among nodes in a 2D network. The MTT approach makes it computationally feasible to simulate thousands of fires and generate burn probability and fire severity maps over large areas. The MTT algorithm is embedded in a number of research and fire modeling applications. High performance computers are typically used for MTT simulations, although the algorithm is also implemented in the FlamMap program (www.fire.org). In this work, we describe the application of the MTT algorithm to estimate spatial patterns of burn probability and to analyze wildfire severity in three fire prone areas of the Mediterranean Basin, specifically the islands of Sardinia (Italy), Sicily (Italy) and Corsica (France). We assembled fuels and topographic data for the simulations in 500 x 500 m grids for the study areas. The simulations were run using 100,000 ignitions under weather conditions that replicated severe and moderate weather conditions (97th and 70th percentile, July and August weather, 1995-2007). We used both random ignition locations and ignition probability grids (1000 x 1000 m) built from historical fire data (1995-2007). The simulation outputs were then examined to understand relationships between burn probability and specific vegetation types and ignition sources. Wildfire threats to specific values of human interest were quantified to map landscape patterns of wildfire risk. The simulation outputs also allowed us to differentiate between areas of the landscape that were progenitors of fires versus "victims" of large fires. The results provided spatially explicit data on wildfire likelihood and intensity that can be used in a variety of strategic and tactical planning forums to mitigate wildfire threats to human and other values in the Mediterranean Basin.
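
    At its core, a minimum-travel-time calculation is a shortest-path search over a network whose edge weights are fire travel times (distance divided by the local spread rate). The toy Dijkstra sketch below illustrates that idea on an isotropic grid; it is not FlamMap or the Finney (2002) algorithm, which use anisotropic elliptical spread and many ignitions.

      import heapq
      import numpy as np

      def min_travel_time(rate, ignition):
          """Dijkstra over a grid of spread rates (cells/min): returns fire arrival times."""
          rows, cols = rate.shape
          arrival = np.full((rows, cols), np.inf)
          arrival[ignition] = 0.0
          heap = [(0.0, ignition)]
          while heap:
              t, (r, c) = heapq.heappop(heap)
              if t > arrival[r, c]:
                  continue
              for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols and rate[nr, nc] > 0:
                      # travel time for one cell ~ distance / mean spread rate along the edge
                      nt = t + 1.0 / (0.5 * (rate[r, c] + rate[nr, nc]))
                      if nt < arrival[nr, nc]:
                          arrival[nr, nc] = nt
                          heapq.heappush(heap, (nt, (nr, nc)))
          return arrival

      rate = np.ones((50, 50))        # uniform fuel
      rate[20:30, 20:30] = 0.2        # a patch of slow-burning fuel
      print(min_travel_time(rate, (0, 0))[-1, -1])   # arrival time at the far corner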

  8. Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model Algorithms

    ERIC Educational Resources Information Center

    Anderson, John R.

    2012-01-01

    Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application…

  9. Artificial intelligence in diagnosis of obstructive lung disease: current status and future potential.

    PubMed

    Das, Nilakash; Topalovic, Marko; Janssens, Wim

    2018-03-01

    The application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases. Machine learning has been successfully used in automated interpretation of pulmonary function tests for differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural network are state-of-the art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches such as forced oscillation test, breath analysis, lung sound analysis and telemedicine with promising results in small-scale studies. Overall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost its adoption by the medical community.

  10. Comparison of Algorithm-based Estimates of Occupational Diesel Exhaust Exposure to Those of Multiple Independent Raters in a Population-based Case–Control Study

    PubMed Central

    Friesen, Melissa C.

    2013-01-01

    Objectives: Algorithm-based exposure assessments based on patterns in questionnaire responses and professional judgment can readily apply transparent exposure decision rules to thousands of jobs quickly. However, we need to better understand how algorithms compare to a one-by-one job review by an exposure assessor. We compared algorithm-based estimates of diesel exhaust exposure to those of three independent raters within the New England Bladder Cancer Study, a population-based case–control study, and identified conditions under which disparities occurred in the assessments of the algorithm and the raters. Methods: Occupational diesel exhaust exposure was assessed previously using an algorithm and a single rater for all 14 983 jobs reported by 2631 study participants during personal interviews conducted from 2001 to 2004. Two additional raters independently assessed a random subset of 324 jobs that were selected based on strata defined by the cross-tabulations of the algorithm and the first rater’s probability assessments for each job, oversampling their disagreements. The algorithm and each rater assessed the probability, intensity and frequency of occupational diesel exhaust exposure, as well as a confidence rating for each metric. Agreement among the raters, their aggregate rating (average of the three raters’ ratings) and the algorithm were evaluated using proportion of agreement, kappa and weighted kappa (κw). Agreement analyses on the subset used inverse probability weighting to extrapolate the subset to estimate agreement for all jobs. Classification and Regression Tree (CART) models were used to identify patterns in questionnaire responses that predicted disparities in exposure status (i.e., unexposed versus exposed) between the first rater and the algorithm-based estimates. Results: For the probability, intensity and frequency exposure metrics, moderate to moderately high agreement was observed among raters (κw = 0.50–0.76) and between the algorithm and the individual raters (κw = 0.58–0.81). For these metrics, the algorithm estimates had consistently higher agreement with the aggregate rating (κw = 0.82) than with the individual raters. For all metrics, the agreement between the algorithm and the aggregate ratings was highest for the unexposed category (90–93%) and was poor to moderate for the exposed categories (9–64%). Lower agreement was observed for jobs with a start year <1965 versus ≥1965. For the confidence metrics, the agreement was poor to moderate among raters (κw = 0.17–0.45) and between the algorithm and the individual raters (κw = 0.24–0.61). CART models identified patterns in the questionnaire responses that predicted a fair-to-moderate (33–89%) proportion of the disagreements between the raters’ and the algorithm estimates. Discussion: The agreement between any two raters was similar to the agreement between an algorithm-based approach and individual raters, providing additional support for using the more efficient and transparent algorithm-based approach. CART models identified some patterns in disagreements between the first rater and the algorithm. Given the absence of a gold standard for estimating exposure, these patterns can be reviewed by a team of exposure assessors to determine whether the algorithm should be revised for future studies. PMID:23184256
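
    The weighted kappa statistics reported above can be computed directly with scikit-learn; the ordinal ratings below are fabricated for illustration and do not correspond to the study's data.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      # Hypothetical ordinal probability ratings (0 = unexposed ... 3 = high) for 12 jobs.
      algorithm = np.array([0, 0, 1, 2, 3, 0, 1, 1, 2, 0, 3, 2])
      rater_1   = np.array([0, 0, 1, 2, 2, 0, 1, 2, 2, 0, 3, 1])

      # Linearly weighted kappa penalizes large disagreements more than adjacent ones.
      print(cohen_kappa_score(algorithm, rater_1, weights="linear"))
      print(cohen_kappa_score(algorithm, rater_1))          # unweighted, for comparison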

  11. A Segment-Based Trajectory Similarity Measure in the Urban Transportation Systems.

    PubMed

    Mao, Yingchi; Zhong, Haishi; Xiao, Xianjian; Li, Xiaofang

    2017-03-06

    With the rapid spread of built-in GPS handheld smart devices, the trajectory data from GPS sensors has grown explosively. Trajectory data has spatio-temporal characteristics and rich information. Trajectory data processing techniques can mine the patterns of human activities and the moving patterns of vehicles in intelligent transportation systems. A trajectory similarity measure is one of the most important issues in trajectory data mining (clustering, classification, frequent pattern mining, etc.). Unfortunately, the main similarity measure algorithms for trajectory data have been found to be inaccurate, highly sensitive to sampling methods, and to have low robustness to noisy data. To solve the above problems, three distances and their corresponding computation methods are proposed in this paper. The point-segment distance can decrease the sensitivity to the point sampling method. The prediction distance optimizes the temporal distance with the features of trajectory data. The segment-segment distance introduces the trajectory shape factor into the similarity measurement to improve the accuracy. The three kinds of distance are integrated with the traditional dynamic time warping (DTW) algorithm to propose a new segment-based dynamic time warping algorithm (SDTW). The experimental results show that the SDTW algorithm can exhibit about 57%, 86%, and 31% better accuracy than the longest common subsequence algorithm (LCSS), the edit distance on real sequence algorithm (EDR), and DTW, respectively, and that its sensitivity to noisy data is lower than that of those algorithms.
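
    For reference, the classic DTW recurrence that SDTW extends is shown below as an O(nm) dynamic program over point-to-point distances; the segment-based and prediction distances introduced in the paper are not reproduced, and the two toy trajectories are invented.

      import numpy as np

      def dtw_distance(a, b):
          """Classic dynamic time warping between two 2-D trajectories (lists of points).
          SDTW replaces the point-to-point cost with segment-based terms."""
          a, b = np.asarray(a, float), np.asarray(b, float)
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      traj_a = [(0, 0), (1, 0.1), (2, 0.2), (3, 0.2)]
      traj_b = [(0, 0), (0.9, 0.0), (2.1, 0.3), (3, 0.1), (3.2, 0.1)]
      print(dtw_distance(traj_a, traj_b))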

  12. An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies.

    PubMed

    Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia

    2015-01-01

    Differential evolution algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, there is a shortcoming of premature convergence in standard DE, especially in DE/best/1/bin. In order to take advantage of direction guidance information of the best individual of DE/best/1/bin and avoid getting into local trap, based on multiple mutation strategies, an enhanced differential evolution algorithm, named EDE, is proposed in this paper. In the EDE algorithm, an initialization technique, opposition-based learning initialization for improving the initial solution quality, and a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/bin/1 for the sake of accelerating standard DE and preventing DE from clustering around the global best individual, as well as a perturbation scheme for further avoiding premature convergence, are integrated. In addition, we also introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the phases of mutation and perturbation, respectively. Experimental results tested on twenty-five benchmark functions show that EDE is far better than the standard DE. In further comparisons, EDE is compared with other five state-of-the-art approaches and related results show that EDE is still superior to or at least equal to these methods on most of benchmark functions.
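
    The opposition-based learning initialization mentioned above can be sketched in a few lines: generate a random population, form the opposite of each candidate (lb + ub - x), and keep the better half of the union. The objective function and bounds below are illustrative, and the rest of EDE (combined mutation strategies, perturbation, time-varying controls) is not reproduced.

      import numpy as np

      def obl_init(fobj, lb, ub, pop_size, seed=0):
          """Opposition-based learning initialization: evaluate each random candidate and
          its opposite (lb + ub - x), then keep the best pop_size of the combined set."""
          rng = np.random.default_rng(seed)
          lb, ub = np.asarray(lb, float), np.asarray(ub, float)
          pop = lb + rng.random((pop_size, lb.size)) * (ub - lb)
          opposite = lb + ub - pop
          union = np.vstack([pop, opposite])
          fitness = np.array([fobj(x) for x in union])
          return union[np.argsort(fitness)[:pop_size]]

      shifted_sphere = lambda x: float(np.sum((np.asarray(x) - 1.5) ** 2))
      init_pop = obl_init(shifted_sphere, lb=[-5, -5, -5], ub=[5, 5, 5], pop_size=20)
      print(init_pop.shape, shifted_sphere(init_pop[0]))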

  13. Synchronous versus asynchronous modeling of gene regulatory networks.

    PubMed

    Garg, Abhishek; Di Cara, Alessandro; Xenarios, Ioannis; Mendoza, Luis; De Micheli, Giovanni

    2008-09-01

    In silico modeling of gene regulatory networks has gained some momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although a lot of work has been done on the identification of steady states, not much work has been reported on in silico modeling of cellular differentiation processes. In this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models have been proposed and their corresponding computational properties have been analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible using existing software. Hereby we provide a framework to analyze the effect of multiple gene perturbation protocols, and their effect on cell differentiation processes. These algorithms were validated on the T-helper model showing the correct steady state identification and Th1-Th2 cellular differentiation process. The software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
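
    The ROBDD machinery of the paper is beyond a short example, but the synchronous update scheme it accelerates can be shown by brute force on a tiny Boolean network: iterate the update map from every state until a state repeats, and collect steady states and cyclic attractors. The three-gene network below is purely illustrative, not the T-helper model.

      from itertools import product

      # Tiny illustrative Boolean network (not the T-helper model): three genes A, B, C.
      def update(state):
          a, b, c = state
          return (b and not c,   # A' = B AND NOT C
                  a,             # B' = A
                  not a)         # C' = NOT A

      def attractor_from(state):
          """Iterate synchronous updates until a state repeats; return the cycle."""
          seen = []
          while state not in seen:
              seen.append(state)
              state = update(state)
          cycle = seen[seen.index(state):]
          i = cycle.index(min(cycle))
          return tuple(cycle[i:] + cycle[:i])   # canonical rotation for deduplication

      attractors = {attractor_from(s) for s in product([False, True], repeat=3)}
      for att in attractors:
          kind = "steady state" if len(att) == 1 else f"cycle of length {len(att)}"
          print(kind, [tuple(int(v) for v in s) for s in att])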

  14. Symmetry dependence of holograms for optical trapping

    NASA Astrophysics Data System (ADS)

    Curtis, Jennifer E.; Schmitz, Christian H. J.; Spatz, Joachim P.

    2005-08-01

    No iterative algorithm is necessary to calculate holograms for most holographic optical trapping patterns. Instead, holograms may be produced by a simple extension of the prisms-and-lenses method. This formulaic approach yields the same diffraction efficiency as iterative algorithms for any asymmetric or symmetric but nonperiodic pattern of points while requiring less calculation time. A slight spatial disordering of periodic patterns significantly reduces intensity variations between the different traps without extra calculation costs. Eliminating laborious hologram calculations should greatly facilitate interactive holographic trapping.

  15. Metacarpophalangeal pattern profile analysis: useful diagnostic tool for differentiating between dyschondrosteosis, Turner syndrome, and hypochondroplasia.

    PubMed

    Laurencikas, E; Sävendahl, L; Jorulf, H

    2006-06-01

    To assess the value of the metacarpophalangeal pattern profile (MCPP) analysis as a diagnostic tool for differentiating between patients with dyschondrosteosis, Turner syndrome, and hypochondroplasia. Radiographic and clinical data from 135 patients between 1 and 51 years of age were collected and analyzed. The study included 25 patients with hypochondroplasia (HCP), 39 with dyschondrosteosis (LWD), and 71 with Turner syndrome (TS). Hand pattern profiles were calculated and compared with those of 110 normal individuals. Pearson correlation coefficient (r) and multivariate discriminant analysis were used for pattern profile analysis. Pattern variability index, a measure of dysmorphogenesis, was calculated for LWD, TS, HCP, and normal controls. Our results demonstrate that patients with LWD, TS, or HCP have distinct pattern profiles that are significantly different from each other and from those of normal controls. Discriminant analysis yielded correct classification of normal versus abnormal individuals in 84% of cases. Classification of the patients into LWD, TS, and HCP groups was successful in 75%. The correct classification rate was higher (85%) when differentiating two pathological groups at a time. Pattern variability index was not helpful for differential diagnosis of LWD, TS, and HCP. Patients with LWD, TS, or HCP have distinct MCPPs and can be successfully differentiated from each other using advanced MCPP analysis. Discriminant analysis is to be preferred over Pearson correlation coefficient because it is a more sensitive and specific technique. MCPP analysis is a helpful tool for differentiating between syndromes with similar clinical and radiological abnormalities.

  16. A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation

    NASA Astrophysics Data System (ADS)

    Li, Yue-e.; Wang, Qiang

    2011-11-01

    This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight different direction search patterns are designed for various situations. The strategy of local sampling search is employed for the search of big-motion vector. In order to further speed up the search, early termination strategy is adopted in procedure of DASS. Compared to conventional fast algorithms, the proposed method has the most satisfactory PSNR values for all test sequences.

  17. Consistency functional map propagation for repetitive patterns

    NASA Astrophysics Data System (ADS)

    Wang, Hao

    2017-09-01

    Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns from an image is a challenging problem. We study repetitive pattern alignment by embedding segmentation cue with a functional map model. However, this model cannot tackle the repetitive patterns directly due to the large photometric and geometric variations. Thus, a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation is proposed to address this issue. This propagation model is acquired in two steps. The first one aligns the patterns from a local region, transferring segmentation functions among patterns. It can be cast as an L norm optimization problem. The latter step updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses have demonstrated an encouraging performance of the proposed algorithm in detection and segmentation of repetitive patterns.

  18. Integrability of systems of two second-order ordinary differential equations admitting four-dimensional Lie algebras

    PubMed Central

    Gazizov, R. K.

    2017-01-01

    We suggest an algorithm for integrating systems of two second-order ordinary differential equations with four symmetries. In particular, if the admitted transformation group has two second-order differential invariants, the corresponding system can be integrated by quadratures using invariant representation and the operator of invariant differentiation. Otherwise, the systems reduce to partially uncoupled forms and can also be integrated by quadratures. PMID:28265184

  19. Simplifying Differential Equations for Multiscale Feynman Integrals beyond Multiple Polylogarithms.

    PubMed

    Adams, Luise; Chaubey, Ekta; Weinzierl, Stefan

    2017-04-07

    In this Letter we exploit factorization properties of Picard-Fuchs operators to decouple differential equations for multiscale Feynman integrals. The algorithm reduces the differential equations to blocks of the size of the order of the irreducible factors of the Picard-Fuchs operator. As a side product, our method can be used to easily convert the differential equations for Feynman integrals which evaluate to multiple polylogarithms to an ϵ form.

  20. Integrability of systems of two second-order ordinary differential equations admitting four-dimensional Lie algebras.

    PubMed

    Gainetdinova, A A; Gazizov, R K

    2017-01-01

    We suggest an algorithm for integrating systems of two second-order ordinary differential equations with four symmetries. In particular, if the admitted transformation group has two second-order differential invariants, the corresponding system can be integrated by quadratures using invariant representation and the operator of invariant differentiation. Otherwise, the systems reduce to partially uncoupled forms and can also be integrated by quadratures.

  1. Implementing a self-structuring data learning algorithm

    NASA Astrophysics Data System (ADS)

    Graham, James; Carson, Daniel; Ternovskiy, Igor

    2016-05-01

    In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal-driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the considerations and shortcuts we needed to take to create it. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early-stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.

  2. An efficient algorithm for pairwise local alignment of protein interaction networks

    DOE PAGES

    Chen, Wenbin; Schmidt, Matthew; Tian, Wenhong; ...

    2015-04-01

    Recently, researchers seeking to understand, modify, and create beneficial traits in organisms have looked for evolutionarily conserved patterns of protein interactions. Their conservation likely means that the proteins of these conserved functional modules are important to the trait's expression. In this paper, we formulate the problem of identifying these conserved patterns as a graph optimization problem, and develop a fast heuristic algorithm for this problem. We compare the performance of our network alignment algorithm to that of the MaWISh algorithm [Koyuturk M, Kim Y, Topkara U, Subramaniam S, Szpankowski W, Grama A, Pairwise alignment of protein interaction networks, J Comput Biol 13(2): 182-199, 2006], which bases its search algorithm on a related decision problem formulation. We find that our algorithm discovers conserved modules with a larger number of proteins in an order of magnitude less time. In conclusion, the protein sets found by our algorithm correspond to known conserved functional modules at comparable precision and recall rates as those produced by the MaWISh algorithm.

  3. Computational Algorithms or Identification of Distributed Parameter Systems

    DTIC Science & Technology

    1993-04-24

    delay-differential equations, Volterra integral equations, and partial differential equations with memory terms. In particular we investigated a ... tested for estimating parameters in a Volterra integral equation arising from a viscoelastic model of a flexible structure with Boltzmann damping. In ... particular, one of the parameters identified was the order of the derivative in Volterra integro-differential equations containing fractional

  4. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single-real valued filters. This introduces increased computation burden but also introduces a higher level of parallelism, that common computing platforms fail to identify. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as, vector inner product operators that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation and they map readily onto parallel digital architectures, which imply new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as with the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible and they have the advantage of time averaging the entire filter bank at real time rates. Time-averaged optical filtering is computational comparable to billions of digital operations-per-second. For this reason, we believe future trends in high speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  5. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
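
    A schematic way to write such a second-order in time dissipative gradient-like flow for a Tikhonov-type functional is sketched below; the damping coefficient, the regularization parameter, and the functional symbols are our own notation and assumptions, not the authors' exact formulation:

    ```latex
    % Second-order damped gradient flow for a regularized least-squares functional
    % (schematic form; \eta, \alpha(t), F, g are assumed notation)
    \ddot{u}(t) + \eta\,\dot{u}(t) = -\nabla J_{\alpha(t)}\bigl(u(t)\bigr),
    \qquad
    J_{\alpha}(u) = \tfrac{1}{2}\,\lVert F(u) - g \rVert^{2} + \tfrac{\alpha}{2}\,\lVert u \rVert^{2}.
    ```

    A damped symplectic time discretization of this second-order system then plays the role of the iterative solver, with the dynamically selected regularization parameter entering through the time-dependent alpha.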

  6. Research on numerical algorithms for large space structures

    NASA Technical Reports Server (NTRS)

    Denman, E. D.

    1981-01-01

    Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.

  7. AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.

    PubMed

    Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou

    2017-01-01

    In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.

  8. On several aspects and applications of the multigrid method for solving partial differential equations

    NASA Technical Reports Server (NTRS)

    Dinar, N.

    1978-01-01

    Several aspects of multigrid methods are briefly described. The main subjects include the development of very efficient multigrid algorithms for systems of elliptic equations (Cauchy-Riemann, Stokes, Navier-Stokes), as well as the development of control and prediction tools (based on local mode Fourier analysis), used to analyze, check and improve these algorithms. Preliminary research on multigrid algorithms for time dependent parabolic equations is also described. Improvements in existing multigrid processes and algorithms for elliptic equations were studied.
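
    To make the ingredients of such methods concrete, here is a minimal, generic V-cycle for the 1-D Poisson problem; the weighted-Jacobi smoother, full-weighting restriction, linear interpolation, and the test problem are standard textbook choices of ours, not the specific Cauchy-Riemann/Stokes solvers of the report:

    ```python
    # Sketch: multigrid V-cycle for -u'' = f on [0, 1] with zero boundary values.
    # Generic textbook scheme: weighted Jacobi, full weighting, linear interpolation.
    import numpy as np

    def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        n = u.size - 1
        if n <= 2:                                   # coarsest grid: solve directly
            u[1] = 0.5 * h * h * f[1] if n == 2 else u[1]
            return u
        u = smooth(u, f, h)                          # pre-smoothing
        r = residual(u, f, h)
        rc = np.zeros(n // 2 + 1)                    # full-weighting restriction
        rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
        ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
        e = np.zeros_like(u)                         # interpolate correction to fine grid
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f, h)                   # post-smoothing

    n = 64
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * x)               # exact solution: sin(pi x)
    u = np.zeros(n + 1)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(np.abs(u - np.sin(np.pi * x)).max())       # error at the discretization level
    ```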

  9. Early In Vitro Differentiation of Mouse Definitive Endoderm Is Not Correlated with Progressive Maturation of Nuclear DNA Methylation Patterns

    PubMed Central

    Tajbakhsh, Jian; Gertych, Arkadiusz; Fagg, W. Samuel; Hatada, Seigo; Fair, Jeffrey H.

    2011-01-01

    The genome organization in pluripotent cells undergoing the first steps of differentiation is highly relevant to the reprogramming process in differentiation. Considering this fact, chromatin texture patterns that identify cells at the very early stage of lineage commitment could serve as valuable tools in the selection of optimal cell phenotypes for regenerative medicine applications. Here we report on the first-time use of high-resolution three-dimensional fluorescence imaging and comprehensive topological cell-by-cell analyses with a novel image-cytometrical approach towards the identification of in situ global nuclear DNA methylation patterns in early endodermal differentiation of mouse ES cells (up to day 6), and the correlations of these patterns with a set of putative markers for pluripotency and endodermal commitment, and the epithelial and mesenchymal character of cells. Utilizing this in vitro cell system as a model for assessing the relationship between differentiation and nuclear DNA methylation patterns, we found that differentiating cell populations display an increasing number of cells with a gain in DNA methylation load: first within their euchromatin, then extending into heterochromatic areas of the nucleus, which also results in significant changes of methylcytosine/global DNA codistribution patterns. We were also able to co-visualize and quantify the concomitant stochastic marker expression on a per-cell basis, for which we did not measure any correlation to methylcytosine loads or distribution patterns. We observe that the progression of global DNA methylation is not correlated with the standard transcription factors associated with endodermal development. Further studies are needed to determine whether the progression of global methylation could represent a useful signature of cellular differentiation. This concept of tracking epigenetic progression may prove useful in the selection of cell phenotypes for future regenerative medicine applications. PMID:21779341

  10. Exponential integration algorithms applied to viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order, ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicits) give outstanding results, even for very large time steps.
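
    For context, the generic one-step form of a linear exponential integrator for a stiff system split into a linear part A and a nonlinear remainder f is shown below; this is the standard textbook form in our notation, not the specific algorithms of the report:

    ```latex
    % One-step exponential (exponential-Euler type) integrator for \dot{y} = A y + f(y, t)
    % (generic form; A, f, \Delta t are assumed notation)
    y_{n+1} = e^{A\,\Delta t}\, y_n
            + \int_{0}^{\Delta t} e^{A(\Delta t - s)}\, f\bigl(y(t_n + s),\, t_n + s\bigr)\, ds
    \;\approx\; e^{A\,\Delta t}\, y_n + A^{-1}\bigl(e^{A\,\Delta t} - I\bigr) f(y_n, t_n).
    ```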

  11. FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector.

    PubMed

    Schäfer, Dirk; Grass, Michael; van de Haar, Peter

    2011-07-01

    Circular scanning with an off-center planar detector is an acquisition scheme that makes it possible to save detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared-deviations (RMSDs) to the voxelized phantom for different detector overlap settings and by investigating the noise-resolution trade-off with a wire phantom in the full detector and off-center scenario. The noise-resolution behavior of all off-center reconstruction methods corresponds to their full detector performance with the best resolution for the FDK based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results outperforming the other three methods. The clinical case with a detector overlap of about 17 mm confirms these results. The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.

  12. Computational discovery of pathway-level genetic vulnerabilities in non-small-cell lung cancer

    PubMed Central

    Young, Jonathan H.; Peyton, Michael; Seok Kim, Hyun; McMillan, Elizabeth; Minna, John D.; White, Michael A.; Marcotte, Edward M.

    2016-01-01

    Motivation: Novel approaches are needed for discovery of targeted therapies for non-small-cell lung cancer (NSCLC) that are specific to certain patients. Whole genome RNAi screening of lung cancer cell lines provides an ideal source for determining candidate drug targets. Results: Unsupervised learning algorithms uncovered patterns of differential vulnerability across lung cancer cell lines to loss of functionally related genes. Such genetic vulnerabilities represent candidate targets for therapy and are found to be involved in splicing, translation and protein folding. In particular, many NSCLC cell lines were especially sensitive to the loss of components of the LSm2-8 protein complex or the CCT/TRiC chaperonin. Different vulnerabilities were also found for different cell line subgroups. Furthermore, the predicted vulnerability of a single adenocarcinoma cell line to loss of the Wnt pathway was experimentally validated with screening of small-molecule Wnt inhibitors against an extensive cell line panel. Availability and implementation: The clustering algorithm is implemented in Python and is freely available at https://bitbucket.org/youngjh/nsclc_paper. Contact: marcotte@icmb.utexas.edu or jon.young@utexas.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26755624

  13. Computational discovery of pathway-level genetic vulnerabilities in non-small-cell lung cancer.

    PubMed

    Young, Jonathan H; Peyton, Michael; Seok Kim, Hyun; McMillan, Elizabeth; Minna, John D; White, Michael A; Marcotte, Edward M

    2016-05-01

    Novel approaches are needed for discovery of targeted therapies for non-small-cell lung cancer (NSCLC) that are specific to certain patients. Whole genome RNAi screening of lung cancer cell lines provides an ideal source for determining candidate drug targets. Unsupervised learning algorithms uncovered patterns of differential vulnerability across lung cancer cell lines to loss of functionally related genes. Such genetic vulnerabilities represent candidate targets for therapy and are found to be involved in splicing, translation and protein folding. In particular, many NSCLC cell lines were especially sensitive to the loss of components of the LSm2-8 protein complex or the CCT/TRiC chaperonin. Different vulnerabilities were also found for different cell line subgroups. Furthermore, the predicted vulnerability of a single adenocarcinoma cell line to loss of the Wnt pathway was experimentally validated with screening of small-molecule Wnt inhibitors against an extensive cell line panel. The clustering algorithm is implemented in Python and is freely available at https://bitbucket.org/youngjh/nsclc_paper. Contact: marcotte@icmb.utexas.edu or jon.young@utexas.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  14. Sensitivity analysis of dynamic biological systems with time-delays.

    PubMed

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2010-10-15

    Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as the gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of sensitivity equations. The computation of partial derivatives of complex equations either by the analytic method or by symbolic manipulation is time consuming, inconvenient, and prone to introduce human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with an adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to solve the solution and dynamic sensitivities of time-delay systems described by DDEs. To save the human effort and avoid the human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. By comparing with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without programming background to do dynamic sensitivity analysis on complex biological systems with time-delays.
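
    As an illustration of how automatic differentiation can supply exact Jacobian entries without symbolic manipulation or finite differencing, here is a minimal forward-mode (dual-number) sketch in Python; the Dual class and the example delayed right-hand side are ours, not part of the published tool:

    ```python
    # Minimal forward-mode automatic differentiation with dual numbers.
    # Illustrative only; the published tool embeds AD inside a DDE sensitivity solver.
    import math

    class Dual:
        """Number of the form a + b*eps with eps**2 == 0; b carries the derivative."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

        def __neg__(self):
            return Dual(-self.val, -self.der)

    def exp(x):
        if isinstance(x, Dual):
            return Dual(math.exp(x.val), math.exp(x.val) * x.der)
        return math.exp(x)

    # Example right-hand side of a hypothetical delayed reaction: f(y, y_delayed).
    def f(y, y_lag):
        return -2.0 * y + exp(-y_lag)

    # Jacobian entries df/dy and df/dy_lag at (y, y_lag) = (1.0, 0.5),
    # obtained by seeding the corresponding dual component with 1.
    df_dy    = f(Dual(1.0, 1.0), Dual(0.5, 0.0)).der
    df_dylag = f(Dual(1.0, 0.0), Dual(0.5, 1.0)).der
    print(df_dy, df_dylag)   # exact partial derivatives, no finite differencing
    ```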

  15. Efficiently computing exact geodesic loops within finite steps.

    PubMed

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph and so the resultant loops are restricted only to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted on a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm takes only an O(k) space complexity and an O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to the existing geodesic curvature flow methods which compute an approximate geodesic loop within a predefined threshold, our method is exact and can apply directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., in the order of milliseconds, for a mesh with around 50K vertices, and hence, significantly outperforms existing algorithms. Actually, our algorithm could run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape could also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  16. Discovering biclusters in gene expression data based on high-dimensional linear geometries

    PubMed Central

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2008-01-01

    Background In DNA microarray experiments, discovering groups of genes that share similar transcriptional characteristics is instrumental in functional annotation, tissue classification and motif identification. However, in many situations a subset of genes only exhibits consistent pattern over a subset of conditions. Conventional clustering algorithms that deal with the entire row or column in an expression matrix would therefore fail to detect these useful patterns in the data. Recently, biclustering has been proposed to detect a subset of genes exhibiting consistent pattern over a subset of conditions. However, most existing biclustering algorithms are based on searching for sub-matrices within a data matrix by optimizing certain heuristically defined merit functions. Moreover, most of these algorithms can only detect a restricted set of bicluster patterns. Results In this paper, we present a novel geometric perspective for the biclustering problem. The biclustering process is interpreted as the detection of linear geometries in a high dimensional data space. Such a new perspective views biclusters with different patterns as hyperplanes in a high dimensional space, and allows us to handle different types of linear patterns simultaneously by matching a specific set of linear geometries. This geometric viewpoint also inspires us to propose a generic bicluster pattern, i.e. the linear coherent model that unifies the seemingly incompatible additive and multiplicative bicluster models. As a particular realization of our framework, we have implemented a Hough transform-based hyperplane detection algorithm. The experimental results on human lymphoma gene expression dataset show that our algorithm can find biologically significant subsets of genes. Conclusion We have proposed a novel geometric interpretation of the biclustering problem. We have shown that many common types of bicluster are just different spatial arrangements of hyperplanes in a high dimensional data space. An implementation of the geometric framework using the Fast Hough transform for hyperplane detection can be used to discover biologically significant subsets of genes under subsets of conditions for microarray data analysis. PMID:18433477
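
    One common way to write a linear coherent bicluster model of the kind described, in our own notation rather than the paper's exact formulation, is the following; the additive and multiplicative patterns fall out as special cases:

    ```latex
    % Linear coherent bicluster model (notation ours): within the bicluster (I, J),
    % each column j is an affine function of a common row profile r_i.
    a_{ij} = c_j\, r_i + d_j, \qquad i \in I,\; j \in J.
    % additive pattern:        c_j = 1  (columns differ only by offsets d_j)
    % multiplicative pattern:  d_j = 0  (columns differ only by scales c_j)
    ```

    Geometrically, the points formed by any pair of bicluster columns then lie on a line (more generally, bicluster columns lie on a hyperplane), which is what the Hough-transform-based detection step searches for.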

  17. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, following the "one-time pad" idea. A random parameter is introduced to ensure a different keystream for each encryption, which has the characteristics of a "one-time pad", substantially improving the security of the algorithm without a significant increase in algorithm complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids negotiation for its transport and makes the application of the algorithm easier. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.

  18. Association between ICP pulse waveform morphology and ICP B waves.

    PubMed

    Kasprowicz, Magdalena; Bergsneider, Marvin; Czosnyka, Marek; Hu, Xiao

    2012-01-01

    The study aimed to investigate changes in the shape of ICP pulses associated with different patterns of the ICP slow waves (0.5-2.0 cycles/min) during ICP overnight monitoring in hydrocephalus. Four patterns of ICP slow waves were characterized in 44 overnight ICP recordings (no waves - NW, slow symmetrical waves - SW, slow asymmetrical waves - AS, slow waves with plateau phase - PW). The morphological clustering and analysis of ICP pulse (MOCAIP) algorithm was utilized to calculate a set of metrics describing ICP pulse morphology based on the location of three sub-peaks in an ICP pulse: systolic peak (P(1)), tidal peak (P(2)) and dicrotic peak (P(3)). Step-wise discriminant analysis was applied to select the most characteristic morphological features to distinguish between different ICP slow waves. Based on relative changes in variability of amplitudes of P(2) and P(3) we were able to distinguish between the combined groups NW + SW and AS + PW (p < 0.000001). The AS pattern can be differentiated from PW based on respective changes in the mean curvature of P(2) and P(3) (p < 0.000001); however, none of the MOCAIP feature separates between NW and SW. The investigation of ICP pulse morphology associated with different ICP B waves may provide additional information for analysing recordings of overnight ICP.

  19. A conserved pattern of differential expansion of cortical areas in simian primates.

    PubMed

    Chaplin, Tristan A; Yu, Hsin-Hao; Soares, Juliana G M; Gattass, Ricardo; Rosa, Marcello G P

    2013-09-18

    The layout of areas in the cerebral cortex of different primates is quite similar, despite significant variations in brain size. However, it is clear that larger brains are not simply scaled up versions of smaller brains: some regions of the cortex are disproportionately large in larger species. It is currently debated whether these expanded areas arise through natural selection pressures for increased cognitive capacity or as a result of the application of a common developmental sequence on different scales. Here, we used computational methods to map and quantify the expansion of the cortex in simian primates of different sizes to investigate whether there is any common pattern of cortical expansion. Surface models of the marmoset, capuchin, and macaque monkey cortex were registered using the software package CARET and the spherical landmark vector difference algorithm. The registration was constrained by the location of identified homologous cortical areas. When comparing marmosets with both capuchins and macaques, we found a high degree of expansion in the temporal parietal junction, the ventrolateral prefrontal cortex, and the dorsal anterior cingulate cortex, all of which are high-level association areas typically involved in complex cognitive and behavioral functions. These expanded maps correlated well with previously published macaque to human registrations, suggesting that there is a general pattern of primate cortical scaling.

  20. Novel flowcytometry-based approach of malignant cell detection in body fluids using an automated hematology analyzer

    PubMed Central

    Tabe, Yoko; Takemura, Hiroyuki; Kimura, Konobu; Takahashi, Toshihiro; Yang, Haeun; Tsuchiya, Koji; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Ohsaka, Akimichi

    2018-01-01

    Morphological microscopic examinations of nucleated cells in body fluid (BF) samples are performed to screen for malignancy. However, the morphological differentiation is time-consuming and labor-intensive. This study aimed to develop a new flowcytometry-based gating analysis mode, the "XN-BF gating algorithm", to detect malignant cells using an automated hematology analyzer, Sysmex XN-1000. The XN-BF mode was equipped with the WDF white blood cell (WBC) differential channel. We added two algorithms to the WDF channel: Rule 1 detects larger and clumped cell signals compared to the leukocytes, targeting the clustered malignant cells; Rule 2 detects middle sized mononuclear cells containing fewer granules than neutrophils with a fluorescence signal similar to monocytes, targeting hematological malignant cells and solid tumor cells. BF samples that met at least one rule were classified as malignant. To evaluate this novel gating algorithm, 92 various BF samples were collected. Manual microscopic differentiation with the May-Grunwald Giemsa stain and WBC count with a hemocytometer were also performed. The performance of these three methods was evaluated by comparison with the cytological diagnosis. The XN-BF gating algorithm achieved a sensitivity of 63.0% and specificity of 87.8%, with 68.0% for positive predictive value and 85.1% for negative predictive value in detecting malignant-cell positive samples. Manual microscopic WBC differentiation and WBC count demonstrated 70.4% and 66.7% sensitivities, and 96.9% and 92.3% specificities, respectively. The XN-BF gating algorithm can be a feasible tool in hematology laboratories for prompt screening of malignant cells in various BF samples. PMID:29425230
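
    A toy sketch of the described two-rule gating logic is given below; the feature names and every threshold are hypothetical placeholders of ours, not the instrument's actual channels or parameters:

    ```python
    # Toy sketch of a two-rule body-fluid gating decision (all thresholds hypothetical).
    from dataclasses import dataclass

    @dataclass
    class CellEvent:
        size: float          # forward-scatter-like size signal
        granularity: float   # side-scatter-like granularity signal
        fluorescence: float  # nucleic-acid fluorescence signal

    def rule1_clustered_large(event, size_cutoff=2.0):
        """Rule 1 (sketch): signals larger/more clumped than typical leukocytes."""
        return event.size > size_cutoff

    def rule2_atypical_mononuclear(event, size_range=(0.8, 1.5),
                                   granularity_cutoff=0.5, fluo_range=(0.9, 1.6)):
        """Rule 2 (sketch): mid-sized mononuclear events with few granules and
        monocyte-like fluorescence."""
        return (size_range[0] <= event.size <= size_range[1]
                and event.granularity < granularity_cutoff
                and fluo_range[0] <= event.fluorescence <= fluo_range[1])

    def sample_flagged_malignant(events):
        """Flag a sample if at least one event satisfies either rule."""
        return any(rule1_clustered_large(e) or rule2_atypical_mononuclear(e)
                   for e in events)

    events = [CellEvent(1.1, 0.3, 1.2), CellEvent(0.9, 0.9, 0.4)]
    print(sample_flagged_malignant(events))  # True: the first event satisfies Rule 2
    ```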

  1. Strategies for cloud-top phase determination: differentiation between thin cirrus clouds and snow in manual (ground truth) analyses

    NASA Astrophysics Data System (ADS)

    Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.

    1996-12-01

    Quantitative assessments of the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud, no cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that the 1.6 micron channel remains essential to differentiate between thin ice clouds and snow; however, this capability may be lost if the 3.7 micron data switches to a nighttime-only transmission with the launch of future NOAA satellites.

  2. An efficient Cellular Potts Model algorithm that forbids cell fragmentation

    NASA Astrophysics Data System (ADS)

    Durand, Marc; Guesnet, Etienne

    2016-11-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves connectivity of cells on a limited range of simulation temperature only. We present a new algorithm in which cell fragmentation is forbidden for all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases the computational efficiency compared with the standard CPM algorithm even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term stage is independent of the chosen acceptance rate and chosen path in the temperature space.
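
    A stripped-down sketch of the kind of move being discussed, a Metropolis copy attempt that is rejected outright if it would disconnect the cell losing the pixel, is shown below; the lattice setup, the adhesion-only energy, and the BFS connectivity test are simplifying assumptions of ours, not the authors' algorithm:

    ```python
    # Sketch of a Cellular Potts Monte Carlo step that forbids cell fragmentation.
    # Simplified assumptions: adhesion-only energy, 4-neighbourhood, BFS connectivity test.
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(0)
    NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def boundary_energy(lattice, i, j, label):
        """Number of unlike-neighbour bonds site (i, j) would have with the given label."""
        n, m = lattice.shape
        return sum(1 for di, dj in NBRS
                   if 0 <= i + di < n and 0 <= j + dj < m
                   and lattice[i + di, j + dj] != label)

    def stays_connected(lattice, i, j, label):
        """True if cell `label` remains 4-connected after losing pixel (i, j)."""
        pixels = {(a, b) for a, b in zip(*np.where(lattice == label))} - {(i, j)}
        if not pixels:
            return False          # also forbid deleting the cell's last pixel
        start = next(iter(pixels))
        seen, queue = {start}, deque([start])
        while queue:
            a, b = queue.popleft()
            for di, dj in NBRS:
                p = (a + di, b + dj)
                if p in pixels and p not in seen:
                    seen.add(p)
                    queue.append(p)
        return len(seen) == len(pixels)

    def cpm_step(lattice, temperature=1.0):
        n, m = lattice.shape
        i, j = rng.integers(n), rng.integers(m)
        di, dj = NBRS[rng.integers(4)]
        si, sj = i + di, j + dj
        if not (0 <= si < n and 0 <= sj < m):
            return
        new, old = lattice[si, sj], lattice[i, j]
        if new == old:
            return
        # hard constraint: never fragment the cell that loses the pixel
        if old != 0 and not stays_connected(lattice, i, j, old):
            return
        dE = boundary_energy(lattice, i, j, new) - boundary_energy(lattice, i, j, old)
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            lattice[i, j] = new

    # Two square cells (labels 1 and 2) on a small lattice; 0 is the medium.
    lattice = np.zeros((20, 20), dtype=int)
    lattice[4:10, 4:10], lattice[10:16, 10:16] = 1, 2
    for _ in range(20000):
        cpm_step(lattice)
    ```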

  3. Advanced methods in NDE using machine learning approaches

    NASA Astrophysics Data System (ADS)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have been applied recently with great success in quality control and predictive maintenance. The goal of ML, to build new and/or leverage existing algorithms that learn from training data and give accurate predictions, or that find patterns, particularly with new and unseen similar data, fits perfectly with Non-Destructive Evaluation. The advantages of ML in NDE are obvious in such tasks as pattern recognition in acoustic signals or automated processing of images from X-ray, Ultrasonics or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis. The approach has been applied to a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Later, the sensor signals from unknown samples can be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). Still, algorithms will be trained on an ordinary PC; however, trained algorithms run on the Digital Signal Processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however. A sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components are required. The automated testing can then be done by the machine. By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
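
    A minimal sketch of the generic two-stage flow described above (unsupervised dimensionality reduction followed by a supervised classifier), using scikit-learn on synthetic feature vectors rather than the IKTS signal-processing software, could look like this:

    ```python
    # Sketch of the generic flow: feature extraction -> PCA -> supervised classifier.
    # Synthetic data stands in for the acoustic/OCT features; not the IKTS software.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n_samples, n_features = 400, 64
    X = rng.normal(size=(n_samples, n_features))          # stand-in feature vectors
    y = (X[:, :4].sum(axis=1) > 0).astype(int)            # stand-in "good/defect" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(
        StandardScaler(),          # normalise features
        PCA(n_components=10),      # unsupervised simplification of the data structure
        SVC(kernel="rbf"),         # supervised classifier trained on the reduced data
    )
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    ```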

  4. Spatial Classification of Orchards and Vineyards with High Spatial Resolution Panchromatic Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner, Timothy; Steinmaus, Karen L.

    2005-02-01

    New high resolution single spectral band imagery offers the capability to conduct image classifications based on spatial patterns in imagery. A classification algorithm based on autocorrelation patterns was developed to automatically extract orchards and vineyards from satellite imagery. The algorithm was tested on IKONOS imagery over Granger, WA, which resulted in a classification accuracy of 95%.

  5. EBIC: an evolutionary-based parallel biclustering algorithm for pattern discovery.

    PubMed

    Orzechowski, Patryk; Sipper, Moshe; Huang, Xiuzhen; Moore, Jason H

    2018-05-22

    Biclustering algorithms are commonly used for gene expression data analysis. However, accurate identification of meaningful structures is very challenging and state-of-the-art methods are incapable of discovering with high accuracy different patterns of high biological relevance. In this paper a novel biclustering algorithm based on evolutionary computation, a subfield of artificial intelligence (AI), is introduced. The method called EBIC aims to detect order-preserving patterns in complex data. EBIC is capable of discovering multiple complex patterns with unprecedented accuracy in real gene expression datasets. It is also one of the very few biclustering methods designed for parallel environments with multiple graphics processing units (GPUs). We demonstrate that EBIC greatly outperforms state-of-the-art biclustering methods, in terms of recovery and relevance, on both synthetic and genetic datasets. EBIC also yields results over 12 times faster than the most accurate reference algorithms. EBIC source code is available on GitHub at https://github.com/EpistasisLab/ebic. Correspondence and requests for materials should be addressed to P.O. (email: patryk.orzechowski@gmail.com) and J.H.M. (email: jhmoore@upenn.edu). Supplementary Data with results of analyses and additional information on the method is available at Bioinformatics online.

  6. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm

    PubMed Central

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-01-01

    This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had somewhat limited performance because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside the user's personal space by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home. PMID:26007738

  7. Pattern-Recognition Algorithm for Locking Laser Frequency

    NASA Technical Reports Server (NTRS)

    Karayan, Vahag; Klipstein, William; Enzer, Daphna; Yates, Philip; Thompson, Robert; Wells, George

    2006-01-01

    A computer program serves as part of a feedback control system that locks the frequency of a laser to one of the spectral peaks of cesium atoms in an optical absorption cell. The system analyzes a saturation absorption spectrum to find a target peak and commands a laser-frequency-control circuit to minimize an error signal representing the difference between the laser frequency and the target peak. The program implements an algorithm consisting of the following steps: Acquire a saturation absorption signal while scanning the laser through the frequency range of interest. Condition the signal by use of convolution filtering. Detect peaks. Match the peaks in the signal to a pattern of known spectral peaks by use of a pattern-recognition algorithm. Add missing peaks. Tune the laser to the desired peak and thereafter lock onto this peak. Finding and locking onto the desired peak is a challenging problem, given that the saturation absorption signal includes noise and other spurious signal components; the problem is further complicated by nonlinearity and shifting of the voltage-to-frequency correspondence. The pattern-recognition algorithm, which is based on Hausdorff distance, is what enables the program to meet these challenges.
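
    For reference, the directed and symmetric Hausdorff distances between two finite point sets, the quantity underlying such a peak-matching step, can be computed in a few lines of Python; the peak lists below are made-up positions, not cesium line frequencies, and the sketch ignores the translation search a real matcher would add:

    ```python
    # Hausdorff distance between two finite sets of detected peak positions (1-D).
    def directed_hausdorff(a, b):
        """Max over points in a of the distance to the nearest point in b."""
        return max(min(abs(x - y) for y in b) for x in a)

    def hausdorff(a, b):
        return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

    reference_peaks = [0.0, 151.2, 201.3, 251.0]       # made-up pattern of known peaks
    detected_peaks  = [2.1, 149.8, 199.9, 252.4]       # made-up peaks found in the scan
    print(hausdorff(reference_peaks, detected_peaks))  # small value: the patterns match
    ```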

  8. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
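
    A compact sketch of that combination (kernel density estimation over the failed runs, plus sequential feature selection wrapped around a k-nearest-neighbor classifier) is shown below with synthetic Monte Carlo variables; the variable names, thresholds, and failure rule are our own placeholders:

    ```python
    # Sketch: KDE over failed Monte Carlo runs, then sequential feature selection
    # with a k-NN classifier to rank which design parameters drive failures.
    import numpy as np
    from sklearn.neighbors import KernelDensity, KNeighborsClassifier
    from sklearn.feature_selection import SequentialFeatureSelector

    rng = np.random.default_rng(1)
    n_runs, n_params = 500, 8
    X = rng.normal(size=(n_runs, n_params))                # dispersed design parameters
    failed = (X[:, 2] + 0.8 * X[:, 5] > 1.5).astype(int)   # synthetic failure flag

    # 1) density estimate over the parameter space of failed runs
    kde = KernelDensity(bandwidth=0.8).fit(X[failed == 1])
    log_density = kde.score_samples(X)
    in_dense_failure_region = log_density > np.quantile(log_density, 0.9)
    print("runs lying in dense failure regions:", int(in_dense_failure_region.sum()))

    # 2) wrapper feature selection around a k-NN classifier
    knn = KNeighborsClassifier(n_neighbors=5)
    selector = SequentialFeatureSelector(knn, n_features_to_select=2, direction="forward")
    selector.fit(X, failed)
    print("parameters most associated with failure:", np.flatnonzero(selector.get_support()))
    ```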

  9. Embedded 32-bit Differential Pulse Voltammetry (DPV) Technique for 3-electrode Cell Sensing

    NASA Astrophysics Data System (ADS)

    N, Aqmar N. Z.; Abdullah, W. F. H.; Zain, Z. M.; Rani, S.

    2018-03-01

    This paper addresses the development of a differential pulse voltammetry (DPV) embedded algorithm using an ARM Cortex processor with a newly developed potentiostat circuit design for in-situ 3-electrode cell sensing. The main aim of this project is to design a low-cost potentiostat for researchers in laboratories. An embedded algorithm for the analytical technique is required for use with the designed potentiostat. DPV is one of the most familiar pulse techniques used with 3-electrode cell sensing in chemical studies. An experiment was conducted on a 10 mM solution of ferricyanide using the designed potentiostat and the developed DPV algorithm. As a result, the device can generate an excitation signal of DPV from 0.4 V to 1.2 V and produced a peaked voltammogram with a relatively small error compared to a commercial potentiostat, differing by only 6.25% in the peak potential reading. The design of the potentiostat device and its DPV algorithm is thereby verified.
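
    A simple way to generate a DPV excitation waveform of the kind described (a staircase ramp from 0.4 V to 1.2 V with a fixed pulse superimposed on each step) is sketched below; the step height, pulse amplitude, and timing values are illustrative placeholders, not the parameters of the reported device:

    ```python
    # Sketch: differential pulse voltammetry (DPV) excitation waveform generator.
    # Step, pulse and timing values are illustrative placeholders.
    import numpy as np

    def dpv_waveform(e_start=0.4, e_end=1.2, e_step=0.01, e_pulse=0.05,
                     t_step=0.1, t_pulse=0.02, dt=0.001):
        """Return time and potential arrays for a DPV staircase with superimposed pulses."""
        times, potentials = [], []
        t, base = 0.0, e_start
        while base <= e_end + 1e-9:
            # baseline part of the step
            for _ in range(int(round((t_step - t_pulse) / dt))):
                times.append(t); potentials.append(base); t += dt
            # pulse part of the step
            for _ in range(int(round(t_pulse / dt))):
                times.append(t); potentials.append(base + e_pulse); t += dt
            base += e_step
        return np.array(times), np.array(potentials)

    t, e = dpv_waveform()
    # the current would be sampled just before and at the end of each pulse, and the
    # difference plotted against the staircase potential to form the voltammogram
    print(e.min(), e.max())   # roughly 0.4 V up to about 1.25 V (last pulse)
    ```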

  10. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation, when the simultaneous delineation of multiple objects can save considerable user's time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT stands behind efficient implementations of some state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes linear time with the image's size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then it applies diffusion filtering to smooth boundaries between objects (and background) and, finally, it corrects possible objects' disconnection occurrences with respect to their seeds. We evaluate DRIFT in 3D CT-images of the thorax for segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.

  11. A Differential Evolution-Based Routing Algorithm for Environmental Monitoring Wireless Sensor Networks

    PubMed Central

    Li, Xiaofang; Xu, Lizhong; Wang, Huibin; Song, Jie; Yang, Simon X.

    2010-01-01

    The traditional Low Energy Adaptive Cluster Hierarchy (LEACH) routing protocol is a clustering-based protocol. The uneven selection of cluster heads results in premature death of cluster heads and premature blind nodes inside the clusters, thus reducing the overall lifetime of the network. With a full consideration of information on energy and distance distribution of neighboring nodes inside the clusters, this paper proposes a new routing algorithm based on differential evolution (DE) to improve the LEACH routing protocol. To meet the requirements of monitoring applications in outdoor environments such as the meteorological, hydrological and wetland ecological environments, the proposed algorithm uses the simple and fast search features of DE to optimize the multi-objective selection of cluster heads and prevent blind nodes for improved energy efficiency and system stability. Simulation results show that the proposed new LEACH routing algorithm has better performance, effectively extends the working lifetime of the system, and improves the quality of the wireless sensor networks. PMID:22219670
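
    The core mutation and crossover step of differential evolution that such a protocol builds on can be sketched in a few lines; the toy cost function standing in for the energy-and-distance objective is our own placeholder, not the paper's fitness model:

    ```python
    # Sketch of a classic DE/rand/1/bin step applied to a placeholder
    # energy-plus-distance cost for candidate cluster-head configurations.
    import numpy as np

    rng = np.random.default_rng(3)

    def cost(x):
        """Placeholder objective: lower values stand in for a better cluster-head layout."""
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.abs(x))

    def de_optimize(dim=10, pop_size=20, F=0.5, CR=0.9, generations=200):
        pop = rng.random((pop_size, dim))
        fitness = np.array([cost(ind) for ind in pop])
        for _ in range(generations):
            for i in range(pop_size):
                others = [k for k in range(pop_size) if k != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                mutant = a + F * (b - c)                        # differential mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True                 # keep at least one mutant gene
                trial = np.where(cross, mutant, pop[i])         # binomial crossover
                if (f_trial := cost(trial)) < fitness[i]:       # greedy selection
                    pop[i], fitness[i] = trial, f_trial
        return pop[np.argmin(fitness)], fitness.min()

    best, best_cost = de_optimize()
    print(best_cost)
    ```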

  12. Field programmable gate array based fuzzy neural signal processing system for differential diagnosis of QRS complex tachycardia and tachyarrhythmia in noisy ECG signals.

    PubMed

    Chowdhury, Shubhajit Roy

    2012-04-01

    The paper reports of a Field Programmable Gate Array (FPGA) based embedded system for detection of QRS complex in a noisy electrocardiogram (ECG) signal and thereafter differential diagnosis of tachycardia and tachyarrhythmia. The QRS complex has been detected after application of entropy measure of fuzziness to build a detection function of ECG signal, which has been previously filtered to remove power line interference and base line wander. Using the detected QRS complexes, differential diagnosis of tachycardia and tachyarrhythmia has been performed. The entire algorithm has been realized in hardware on an FPGA. Using the standard CSE ECG database, the algorithm performed highly effectively. The performance of the algorithm in respect of QRS detection with sensitivity (Se) of 99.74% and accuracy of 99.5% is achieved when tested using single channel ECG with entropy criteria. The performance of the QRS detection system has been compared and found to be better than most of the QRS detection systems available in literature. Using the system, 200 patients have been diagnosed with an accuracy of 98.5%.

  13. An interactive approach based on a discrete differential evolution algorithm for a class of integer bilevel programming problems

    NASA Astrophysics Data System (ADS)

    Li, Hong; Zhang, Li; Jiao, Yong-Chang

    2016-07-01

    This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-value or continuous decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with the complementarity constraints, and then the smoothing technique is applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained, and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem only with inequality constraints is handled by using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
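
    In generic form, replacing the lower level by its optimality conditions and smoothing the complementarity constraints looks like the following; this is our notation and one common smoothing choice, not necessarily the paper's exact formulation:

    ```latex
    % Lower level: min_y f(x, y)  s.t.  g(x, y) <= 0, replaced by its KKT system
    \nabla_y f(x, y) + \lambda^{\top} \nabla_y g(x, y) = 0, \qquad
    g(x, y) \le 0, \quad \lambda \ge 0, \quad \lambda_i\, g_i(x, y) = 0 .
    % Smoothing of each complementarity pair (a, b) = (\lambda_i, -g_i(x, y)) with \mu > 0,
    % e.g. via a perturbed Fischer-Burmeister function:
    \phi_{\mu}(a, b) = a + b - \sqrt{a^{2} + b^{2} + 2\mu} = 0 .
    ```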

  14. Implementation of pattern generation algorithm in forming Gilmore and Gomory model for two dimensional cutting stock problem

    NASA Astrophysics Data System (ADS)

    Octarina, Sisca; Radiana, Mutia; Bangun, Putra B. J.

    2018-01-01

    Two dimensional cutting stock problem (CSP) is a problem of determining the cutting pattern from a set of stock with standard length and width to fulfill the demand of items. Cutting patterns were determined in order to minimize the usage of stock. This research implemented a pattern generation algorithm to formulate the Gilmore and Gomory model of two dimensional CSP. The constraints of the Gilmore and Gomory model ensure that the strips cut in the first stage will be used in the second stage. The Branch and Cut method was used to obtain the optimal solution. Based on the results, many pattern combinations were found when the optimal cutting patterns corresponding to the first stage were combined with those of the second stage.

  15. Dynamic changes in gene expression during human trophoblast differentiation.

    PubMed

    Handwerger, Stuart; Aronow, Bruce

    2003-01-01

    The genetic program that directs human placental differentiation is poorly understood. In a recent study, we used DNA microarray analyses to determine genes that are dynamically regulated during human placental development in an in vitro model system in which highly purified cytotrophoblast cells aggregate spontaneously and fuse to form a multinucleated syncytium that expresses placental lactogen, human chorionic gonadotropin, and other proteins normally expressed by fully differentiated syncytiotrophoblast cells. Of the 6918 genes present on the Incyte Human GEM V microarray that we analyzed over a 9-day period, 141 were induced and 256 were downregulated by more than 2-fold. The dynamically regulated genes fell into nine distinct kinetic patterns of induction or repression, as detected by the K-means algorithm. Classifying the genes according to functional characteristics, the regulated genes could be divided into six overall categories: cell and tissue structural dynamics, cell cycle and apoptosis, intercellular communication, metabolism, regulation of gene expression, and expressed sequence tags and function unknown. Gene expression changes within key functional categories were tightly coupled to the morphological changes that occurred during trophoblast differentiation. Within several key gene categories (e.g., cell and tissue structure), many genes were strongly activated, while others with related function were strongly repressed. These findings suggest that trophoblast differentiation is augmented by "categorical reprogramming" in which the ability of induced genes to function is enhanced by diminished synthesis of other genes within the same category. We also observed categorical reprogramming in human decidual fibroblasts decidualized in vitro in response to progesterone, estradiol, and cyclic AMP. While there was little overlap between genes that are dynamically regulated during trophoblast differentiation versus decidualization, many of the categories in which genes were strongly activated also contained genes whose expression was strongly diminished. Taken together, these findings point to a fundamental role for simultaneous induction and repression of mRNAs that encode functionally related proteins during the differentiation process.

  16. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (INSAR) is a new way of performing topography mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate the fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images with a short baseline (75 m) and a long baseline (500 m) are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality allowing very accurate registration.

  17. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    DOT National Transportation Integrated Search

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on times series and pattern recognition techniques. Attention was given to the effects o...

  18. GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm

    NASA Technical Reports Server (NTRS)

    Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.

    2003-01-01

    The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS Algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS Algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS Algorithm to GOME. Using spectral discrimination at near ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength dependent calibration drift is estimated and then checked using pair justification. In addition, the day one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.

  19. Eliminating the zero spectrum in Fourier transform profilometry using empirical mode decomposition.

    PubMed

    Li, Sikun; Su, Xianyu; Chen, Wenjing; Xiang, Liqun

    2009-05-01

    Empirical mode decomposition is introduced into Fourier transform profilometry to extract the zero spectrum included in the deformed fringe pattern without the need for capturing two fringe patterns with pi phase difference. The fringe pattern is subsequently demodulated using a standard Fourier transform profilometry algorithm. With this method, the deformed fringe pattern is adaptively decomposed into a finite number of intrinsic mode functions that vary from high frequency to low frequency by means of an algorithm referred to as a sifting process. Then the zero spectrum is separated from the high-frequency components effectively. Experiments validate the feasibility of this method.
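
    A rough sketch of that idea (decompose a fringe-pattern line into intrinsic mode functions, drop the low-frequency modes that carry the zero spectrum, then apply the usual Fourier-transform demodulation) is given below; it assumes the third-party PyEMD package and a synthetic 1-D fringe signal, neither of which is part of the original work, and the choice of how many low-frequency modes to discard is a placeholder:

    ```python
    # Sketch: remove the zero spectrum from a 1-D fringe signal with EMD, then demodulate.
    # Assumes the PyEMD package (pip install EMD-signal); the fringe signal is synthetic.
    import numpy as np
    from PyEMD import EMD

    x = np.linspace(0, 1, 1024)
    background = 2.0 + 0.5 * x                                    # slowly varying background
    phase = 2 * np.pi * 60 * x + 3 * np.sin(2 * np.pi * 2 * x)    # carrier + deformation phase
    fringe = background + np.cos(phase)

    imfs = EMD().emd(fringe, x)
    # keep the high-frequency IMFs; the last modes/residue hold the zero-spectrum background
    carrier_part = imfs[:-2].sum(axis=0) if imfs.shape[0] > 2 else imfs[0]

    spectrum = np.fft.fft(carrier_part)
    freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
    keep = (freqs > 30) & (freqs < 90)                 # band-pass around the +60 cycles carrier
    analytic = np.fft.ifft(np.where(keep, spectrum, 0.0))
    wrapped_phase = np.angle(analytic)                 # demodulated (wrapped) phase, as in FTP
    ```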

  20. Antagonism between the transcription factors NANOG and OTX2 specifies rostral or caudal cell fate during neural patterning transition.

    PubMed

    Su, Zhenghui; Zhang, Yanqi; Liao, Baojian; Zhong, Xiaofen; Chen, Xin; Wang, Haitao; Guo, Yiping; Shan, Yongli; Wang, Lihui; Pan, Guangjin

    2018-03-23

    During neurogenesis, neural patterning is a critical step during which neural progenitor cells differentiate into neurons with distinct functions. However, the molecular determinants that regulate neural patterning remain poorly understood. Here we optimized the "dual SMAD inhibition" method to specifically promote differentiation of human pluripotent stem cells (hPSCs) into forebrain and hindbrain neural progenitor cells along the rostral-caudal axis. We report that neural patterning determination occurs at the very early stage in this differentiation. Undifferentiated hPSCs expressed basal levels of the transcription factor orthodenticle homeobox 2 (OTX2) that dominantly drove hPSCs into the "default" rostral fate at the beginning of differentiation. Inhibition of glycogen synthase kinase 3β (GSK3β) through CHIR99021 application sustained transient expression of the transcription factor NANOG at early differentiation stages through Wnt signaling. Wnt signaling and NANOG antagonized OTX2 and, in the later stages of differentiation, switched the default rostral cell fate to the caudal one. Our findings have uncovered a mutual antagonism between NANOG and OTX2 underlying cell fate decisions during neural patterning, critical for the regulation of early neural development in humans. © 2018 by The American Society for Biochemistry and Molecular Biology, Inc.

  1. Further development of image processing algorithms to improve detectability of defects in Sonic IR NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2017-02-01

    Sonic Infrared imaging (SIR) technology is a relatively new NDE technique that has received significant acceptance in the NDE community. SIR NDE is a super-fast, wide range NDE method. The technology uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is highly affected by noise levels as well as mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures. Mode patterns can either mimic real defects in the specimen, or alternatively, hide defects if they overlap. At last year's QNDE, we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect-extraction algorithms that specifically target mode patterns in SIR images.

  2. A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid

    NASA Astrophysics Data System (ADS)

    Sulaimanov, Z. M.; Shumilov, B. M.

    2017-10-01

    For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution to a three-diagonal system of linear algebraic equations for the coefficients. Computations by hand are used to investigate the application of this algorithm for numerical differentiation. The results are illustrated by solving a prediction problem.
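
    The three-diagonal (tridiagonal) systems that arise in such a splitting can be solved in linear time with the Thomas algorithm; the sketch below is generic and not tied to the spline wavelet coefficients of the paper, and the test coefficients are arbitrary values of ours:

    ```python
    # Thomas algorithm: O(n) solver for a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i.
    # Generic sketch; the coefficients below are arbitrary test values.
    import numpy as np

    def solve_tridiagonal(a, b, c, d):
        """a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused)."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    a = np.array([0.0, 1.0, 1.0, 1.0])
    b = np.array([4.0, 4.0, 4.0, 4.0])
    c = np.array([1.0, 1.0, 1.0, 0.0])
    d = np.array([5.0, 6.0, 6.0, 5.0])
    print(solve_tridiagonal(a, b, c, d))   # should be close to [1, 1, 1, 1]
    ```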

  3. An effective hybrid self-adapting differential evolution algorithm for the joint replenishment and location-inventory problem in a three-level supply chain.

    PubMed

    Wang, Lin; Qu, Hui; Chen, Tao; Yan, Fang-Ping

    2013-01-01

    The integration with different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The problem of the JR-LIP is to determine the reasonable number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have been proven to be effective for similar problems, namely the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results of benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially for the large-scale problem.

  4. An Effective Hybrid Self-Adapting Differential Evolution Algorithm for the Joint Replenishment and Location-Inventory Problem in a Three-Level Supply Chain

    PubMed Central

    Chen, Tao; Yan, Fang-Ping

    2013-01-01

    The integration with different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The problem of the JR-LIP is to determine the reasonable number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have been proven to be effective for similar problems, namely the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results of benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially for the large-scale problem. PMID:24453822

  5. Research reactor loading pattern optimization using estimation of distribution algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S.; Ziver, K.; AMCG Group, RM Consultants, Abingdon

    2006-07-01

    A new evolutionary-search-based approach for solving nuclear reactor loading pattern optimization problems is presented, based on Estimation of Distribution Algorithms (EDAs). The optimization technique developed is then applied to the maximization of the effective multiplication factor (K_eff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence, together with some problem-dependent information based on stand-alone K_eff with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
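
    The record does not describe the EDA variant used, so the sketch below is a generic univariate EDA (PBIL-style) over a binary-encoded placement, with a toy merit function standing in for the stand-alone K_eff evaluation; the elitism-guided strategy and fuel-coupling corrections are not modeled:

```python
import numpy as np

def univariate_eda(objective, n_bits, pop_size=60, n_select=15,
                   generations=100, learn_rate=0.3, seed=0):
    """Minimal univariate EDA (PBIL-like): sample, select, update a probability vector."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)              # probability of a 1 at each position
    best_x, best_f = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        scores = np.array([objective(x) for x in pop])
        elite = pop[np.argsort(scores)[-n_select:]]        # best candidates
        p = (1 - learn_rate) * p + learn_rate * elite.mean(axis=0)
        p = np.clip(p, 0.02, 0.98)         # keep some exploration alive
        if scores.max() > best_f:
            best_f = float(scores.max())
            best_x = pop[np.argmax(scores)].copy()
    return best_x, best_f

# Toy merit function (not a reactor model): favour loading central positions
# of a 1-D "core" while penalising the total number of loaded positions.
def toy_merit(pattern):
    idx = np.arange(len(pattern))
    centre = (len(pattern) - 1) / 2
    weight = 1.0 - np.abs(idx - centre) / centre
    return float(np.dot(pattern, weight) - 0.6 * pattern.sum())

best_pattern, best_score = univariate_eda(toy_merit, n_bits=20)
```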

  6. Data Mining Citizen Science Results

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2012-12-01

    Scientific discovery from big data is enabled through multiple channels, including data mining (through the application of machine learning algorithms) and human computation (commonly implemented through citizen science tasks). We will describe the results of new data mining experiments on the results from citizen science activities. Discovering patterns, trends, and anomalies in data are among the powerful contributions of citizen science. Establishing scientific algorithms that can subsequently re-discover the same types of patterns, trends, and anomalies in automatic data processing pipelines will ultimately result from the transformation of those human algorithms into computer algorithms, which can then be applied to much larger data collections. Scientific discovery from big data is thus greatly amplified through the marriage of data mining with citizen science.

  7. Far-field radiation patterns of aperture antennas by the Winograd Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1978-01-01

    A more time-efficient algorithm for computing the discrete Fourier transform, the Winograd Fourier transform (WFT), is described. The WFT algorithm is compared with other transform algorithms. Results indicate that the WFT algorithm appears to be a very successful application in antenna analysis. Significant savings in CPU time will improve computer turnaround time and circumvent the need to resort to weekend runs.

  8. Real-time estimation of ionospheric delay using GPS measurements

    NASA Astrophysics Data System (ADS)

    Lin, Lao-Sheng

    1997-12-01

    When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual- frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. A 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
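
    As background to the dual-frequency principle mentioned in the record, the geometry-free (P2 − P1) combination yields the first-order slant TEC and hence the ionospheric delay; the sketch below ignores satellite/receiver differential code biases and multipath, which are precisely what the thesis's L1/L2 differential delay and multipath-template algorithms address (the pseudorange values in the example are hypothetical):

```python
# Geometry-free dual-frequency combination: first-order ionospheric delay and
# slant TEC from P1/P2 pseudoranges.
F_L1 = 1575.42e6   # Hz
F_L2 = 1227.60e6   # Hz
K = 40.3           # m^3/s^2, first-order ionospheric term I = K * TEC / f^2

def slant_tec(p1, p2, f1=F_L1, f2=F_L2):
    """Slant total electron content (electrons/m^2) from pseudoranges in metres."""
    return (p2 - p1) / (K * (1.0 / f2**2 - 1.0 / f1**2))

def iono_delay_l1(p1, p2, f1=F_L1, f2=F_L2):
    """First-order ionospheric group delay on L1, in metres."""
    return K * slant_tec(p1, p2, f1, f2) / f1**2

p1, p2 = 22000003.2, 22000006.2            # 3 m geometry-free difference
tec_units = slant_tec(p1, p2) / 1.0e16     # ~29 TECU
delay_m = iono_delay_l1(p1, p2)            # ~4.6 m of L1 group delay
print(f"TEC = {tec_units:.1f} TECU, L1 delay = {delay_m:.2f} m")
```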

  9. Discovering high-resolution patterns of differential DNA methylation that correlate with gene expression changes

    PubMed Central

    VanderKraats, Nathan D.; Hiken, Jeffrey F.; Decker, Keith F.; Edwards, John R.

    2013-01-01

    Methylation of the CpG-rich region (CpG island) overlapping a gene’s promoter is a generally accepted mechanism for silencing expression. While recent technological advances have enabled measurement of DNA methylation and expression changes genome-wide, only modest correlations between differential methylation at gene promoters and expression have been found. We hypothesize that stronger associations are not observed because existing analysis methods oversimplify their representation of the data and do not capture the diversity of existing methylation patterns. Recently, other patterns such as CpG island shore methylation and long partially hypomethylated domains have also been linked with gene silencing. Here, we detail a new approach for discovering differential methylation patterns associated with expression change using genome-wide high-resolution methylation data: we represent differential methylation as an interpolated curve, or signature, and then identify groups of genes with similarly shaped signatures and corresponding expression changes. Our technique uncovers a diverse set of patterns that are conserved across embryonic stem cell and cancer data sets. Overall, we find strong associations between these methylation patterns and expression. We further show that an extension of our method also outperforms other approaches by generating a longer list of genes with higher quality associations between differential methylation and expression. PMID:23748561
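
    The authors' pipeline is not reproduced in the record; the following sketch only illustrates the general idea on synthetic data: interpolate each gene's sparse differential-methylation measurements onto a common window around the TSS, then group genes whose curves have similar shapes (k-means here, purely for illustration):

```python
import numpy as np
from scipy.interpolate import interp1d
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
grid = np.linspace(-5000, 5000, 101)   # common window around each gene's TSS (bp)

def signature(cpg_positions, delta_methylation):
    """Interpolate sparse CpG methylation differences onto the common grid."""
    f = interp1d(cpg_positions, delta_methylation,
                 bounds_error=False, fill_value=0.0)
    return f(grid)

# Synthetic example: 200 "genes" with irregularly spaced CpG measurements and
# two underlying signature shapes (gain vs. loss of methylation near the TSS).
curves = []
for g in range(200):
    pos = np.sort(rng.uniform(-5000, 5000, rng.integers(10, 30)))
    shape = -1.0 if g % 2 else 1.0
    delta = shape * np.exp(-(pos / 2000.0) ** 2) + 0.1 * rng.standard_normal(len(pos))
    curves.append(signature(pos, delta))
X = np.vstack(curves)

# Group genes whose differential-methylation curves have similar shapes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```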

  10. Optimization of algorithm of coding of genetic information of Chlamydia

    NASA Astrophysics Data System (ADS)

    Feodorova, Valentina A.; Ulyanov, Sergey S.; Zaytsev, Sergey S.; Saltykov, Yury V.; Ulianova, Onega V.

    2018-04-01

    A new method of coding genetic information using coherent optical fields is developed. A universal technique for transforming the nucleotide sequence of a bacterial gene into a laser speckle pattern is suggested. Reference speckle patterns are generated for the nucleotide sequences of the omp1 gene of typical wild strains of Chlamydia trachomatis genovars D, E, F, G, J and K, as well as of Chlamydia psittaci serovar I. The algorithm for coding gene information into a speckle pattern is optimized, using fully developed speckles with Gaussian statistics as the optimization criterion.

  11. The Seasat scanning multichannel microwave radiometer /SMMR/: Antenna pattern corrections - Development and implementation

    NASA Technical Reports Server (NTRS)

    Njoku, E. G.; Christensen, E. J.; Cofield, R. E.

    1980-01-01

    The antenna temperatures measured by the Seasat scanning multichannel microwave radiometer (SMMR) differ from the true brightness temperatures of the observed scene due to antenna pattern effects, principally from antenna sidelobe contributions and cross-polarization coupling. To provide accurate brightness temperatures convenient for geophysical parameter retrievals the antenna temperatures are processed through a series of stages, collectively known as the antenna pattern correction (APC) algorithm. A description of the development and implementation of the APC algorithm is given, along with an error analysis of the resulting brightness temperatures.

  12. Third-dimension information retrieval from a single convergent-beam transmission electron diffraction pattern using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Pennington, Robert S.; Van den Broek, Wouter; Koch, Christoph T.

    2014-05-01

    We have reconstructed third-dimension specimen information from convergent-beam electron diffraction (CBED) patterns simulated using the stacked-Bloch-wave method. By reformulating the stacked-Bloch-wave formalism as an artificial neural network and optimizing with resilient back propagation, we demonstrate specimen orientation reconstructions with depth resolutions down to 5 nm. To show our algorithm's ability to analyze realistic data, we also discuss and demonstrate our algorithm reconstructing from noisy data and using a limited number of CBED disks. Applicability of this reconstruction algorithm to other specimen parameters is discussed.

  13. Negative Selection Algorithm for Aircraft Fault Detection

    NASA Technical Reports Server (NTRS)

    Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.

    2004-01-01

    We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
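
    A minimal real-valued negative selection sketch, under assumed parameters (unit-hypercube features, Euclidean matching, fixed self and detector radii); the probabilistic detector generation used in the study is simplified here to plain rejection sampling:

```python
import numpy as np

def train_detectors(normal, n_detectors=200, self_radius=0.1,
                    max_attempts=100000, seed=0):
    """Real-valued negative selection: keep random detectors far from 'self' data.

    `normal` is an (n_samples, n_features) array of normalized sensor data
    from nominal flight; detectors live in the same unit hypercube.
    """
    rng = np.random.default_rng(seed)
    detectors = []
    attempts = 0
    while len(detectors) < n_detectors and attempts < max_attempts:
        attempts += 1
        candidate = rng.random(normal.shape[1])
        # reject any candidate that matches normal (self) behaviour
        if np.min(np.linalg.norm(normal - candidate, axis=1)) > self_radius:
            detectors.append(candidate)
    return np.array(detectors)

def detect(samples, detectors, detector_radius=0.1):
    """Flag samples that fall inside any detector's hypersphere as anomalous."""
    d = np.linalg.norm(samples[:, None, :] - detectors[None, :, :], axis=2)
    return d.min(axis=1) < detector_radius

# Hypothetical usage with normalized body-axis angular-rate features.
rng = np.random.default_rng(1)
normal = np.clip(0.5 + 0.05 * rng.standard_normal((500, 3)), 0, 1)
detectors = train_detectors(normal)
test = np.vstack([0.5 + 0.05 * rng.standard_normal((5, 3)),  # nominal-like
                  rng.random((5, 3))])                        # off-nominal
flags = detect(test, detectors)
```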

  14. Tracking Small Artists

    NASA Astrophysics Data System (ADS)

    Russell, James C.; Klette, Reinhard; Chen, Chia-Yen

    Tracks of small animals are important in environmental surveillance, where pattern recognition algorithms allow species identification of the individuals creating tracks. These individuals can also be seen as artists, presented in their natural environments with a canvas upon which they can make prints. We present tracks of small mammals and reptiles which have been collected for identification purposes, and re-interpret them from an esthetic point of view. We re-classify these tracks not by their geometric qualities as pattern recognition algorithms would, but through interpreting the 'artist', their brush strokes and intensity. We describe the algorithms used to enhance and present the work of the 'artists'.

  15. Blind color isolation for color-channel-based fringe pattern profilometry using digital projection

    NASA Astrophysics Data System (ADS)

    Hu, Yingsong; Xi, Jiangtao; Chicharo, Joe; Yang, Zongkai

    2007-08-01

    We present an algorithm for estimating the color demixing matrix based on the color fringe patterns captured from the reference plane or the surface of the object. The advantage of this algorithm is that it is a blind approach to calculating the demixing matrix in the sense that no extra images are required for color calibration before performing profile measurement. Simulation and experimental results convince us that the proposed algorithm can significantly reduce the influence of the color cross talk and at the same time improve the measurement accuracy of the color-channel-based phase-shifting profilometry.

  16. Study on store-space assignment based on logistic AGV in e-commerce goods to person picking pattern

    NASA Astrophysics Data System (ADS)

    Xu, Lijuan; Zhu, Jie

    2017-10-01

    This paper studies store-space assignment based on logistics AGVs in the e-commerce goods-to-person picking pattern. A store-space assignment model minimizing picking cost is established, and a store-space assignment algorithm is designed following a cluster analysis based on similarity coefficients. An example analysis then compares the picking cost of the proposed store-space assignment algorithm with assignment by item number and with storage allocation by ABC classification, verifying the effectiveness of the designed store-space assignment algorithm.

  17. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns

    PubMed Central

    Matsubara, Takashi

    2017-01-01

    Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning. PMID:29209191

  18. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns.

    PubMed

    Matsubara, Takashi

    2017-01-01

    Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.

  19. Seasonal and Inter-Annual Patterns of Chlorophyll and Phytoplankton Community Structure in Monterey Bay, CA Derived from AVIRIS Data During the 2013-2015 HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.

    2016-02-01

    There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, track energy flow through ecosystems, and identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable the use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. Consequently, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. The coastal marine environment has special atmospheric correction needs due to error introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals to estimate chlorophyll (OC3) and phytoplankton functional type (PHYDOTax) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons in 2013 and 2014. These two periods are dominated by either diatom blooms or red tides. Results to be presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during the two seasons.

  20. 3D patterned stem cell differentiation using thermo-responsive methylcellulose hydrogel molds.

    PubMed

    Lee, Wonjae; Park, Jon

    2016-07-06

    Tissue-specific patterned stem cell differentiation serves as the basis for the development, remodeling, and regeneration of the multicellular structure of the native tissues. We herein proposed a cytocompatible 3D casting process to recapitulate this patterned stem cell differentiation for reconstructing multicellular tissues in vitro. We first reconstituted the 2D culture conditions for stem cell fate control within 3D hydrogel by incorporating the sets of the diffusible signal molecules delivered through drug-releasing microparticles. Then, utilizing thermo-responsivity of methylcellulose (MC), we developed a cytocompatible casting process to mold these hydrogels into specific 3D configurations, generating the targeted spatial gradients of diffusible signal molecules. The liquid phase of the MC solution was viscous enough to adopt the shapes of 3D impression patterns, while the gelated MC served as a reliable mold for patterning the hydrogel prepolymers. When these patterned hydrogels were integrated together, the stem cells in each hydrogel distinctly differentiated toward individually defined fates, resulting in the formation of the multicellular tissue structure bearing the very structural integrity and characteristics as seen in vascularized bones and osteochondral tissues.

  1. 3D patterned stem cell differentiation using thermo-responsive methylcellulose hydrogel molds

    NASA Astrophysics Data System (ADS)

    Lee, Wonjae; Park, Jon

    2016-07-01

    Tissue-specific patterned stem cell differentiation serves as the basis for the development, remodeling, and regeneration of the multicellular structure of the native tissues. We herein proposed a cytocompatible 3D casting process to recapitulate this patterned stem cell differentiation for reconstructing multicellular tissues in vitro. We first reconstituted the 2D culture conditions for stem cell fate control within 3D hydrogel by incorporating the sets of the diffusible signal molecules delivered through drug-releasing microparticles. Then, utilizing thermo-responsivity of methylcellulose (MC), we developed a cytocompatible casting process to mold these hydrogels into specific 3D configurations, generating the targeted spatial gradients of diffusible signal molecules. The liquid phase of the MC solution was viscous enough to adopt the shapes of 3D impression patterns, while the gelated MC served as a reliable mold for patterning the hydrogel prepolymers. When these patterned hydrogels were integrated together, the stem cells in each hydrogel distinctly differentiated toward individually defined fates, resulting in the formation of the multicellular tissue structure bearing the very structural integrity and characteristics as seen in vascularized bones and osteochondral tissues.

  2. 3D patterned stem cell differentiation using thermo-responsive methylcellulose hydrogel molds

    PubMed Central

    Lee, Wonjae; Park, Jon

    2016-01-01

    Tissue-specific patterned stem cell differentiation serves as the basis for the development, remodeling, and regeneration of the multicellular structure of the native tissues. We herein proposed a cytocompatible 3D casting process to recapitulate this patterned stem cell differentiation for reconstructing multicellular tissues in vitro. We first reconstituted the 2D culture conditions for stem cell fate control within 3D hydrogel by incorporating the sets of the diffusible signal molecules delivered through drug-releasing microparticles. Then, utilizing thermo-responsivity of methylcellulose (MC), we developed a cytocompatible casting process to mold these hydrogels into specific 3D configurations, generating the targeted spatial gradients of diffusible signal molecules. The liquid phase of the MC solution was viscous enough to adopt the shapes of 3D impression patterns, while the gelated MC served as a reliable mold for patterning the hydrogel prepolymers. When these patterned hydrogels were integrated together, the stem cells in each hydrogel distinctly differentiated toward individually defined fates, resulting in the formation of the multicellular tissue structure bearing the very structural integrity and characteristics as seen in vascularized bones and osteochondral tissues. PMID:27381562

  3. PWFQ: a priority-based weighted fair queueing algorithm for the downstream transmission of EPON

    NASA Astrophysics Data System (ADS)

    Xu, Sunjuan; Ye, Jiajun; Zou, Junni

    2005-11-01

    In the downstream direction of EPON, all Ethernet frames share one downlink channel from the OLT to the destination ONUs. To guarantee differentiated services, a scheduling algorithm is needed to solve this link-sharing issue. In this paper, we first review the classical WFQ algorithm and point out the shortcomings of its fair-queueing principle for EPON. We then propose a novel scheduling algorithm, Priority-based WFQ (PWFQ), which distributes bandwidth based on priority. The PWFQ algorithm can guarantee the quality of real-time services under both light and heavy load. Simulation results also show that the PWFQ algorithm not only improves the delay performance of real-time services but also meets worst-case delay bound requirements.
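
    The PWFQ scheduling rules are not given in the record; the sketch below is ordinary WFQ virtual-finish-time scheduling in which each queue's weight is derived from its priority class, which is one plausible reading of a priority-based WFQ; the paper's actual algorithm may differ:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    finish_time: float
    queue_id: int = field(compare=False)
    size_bytes: int = field(compare=False)

class PriorityWFQ:
    """WFQ scheduler whose per-queue weights are derived from priority classes."""

    def __init__(self, priority_weights):
        self.weights = dict(priority_weights)     # {queue_id: weight}
        self.last_finish = {q: 0.0 for q in self.weights}
        self.virtual_time = 0.0
        self.heap = []

    def enqueue(self, queue_id, size_bytes):
        start = max(self.virtual_time, self.last_finish[queue_id])
        finish = start + size_bytes / self.weights[queue_id]
        self.last_finish[queue_id] = finish
        heapq.heappush(self.heap, Packet(finish, queue_id, size_bytes))

    def dequeue(self):
        pkt = heapq.heappop(self.heap)       # smallest virtual finish time first
        self.virtual_time = pkt.finish_time  # simplified virtual-clock update
        return pkt

# Three downstream queues: real-time traffic (queue 0) gets the largest weight.
sched = PriorityWFQ({0: 8.0, 1: 2.0, 2: 1.0})
for q, size in [(2, 1500), (0, 1500), (1, 500), (0, 500)]:
    sched.enqueue(q, size)
service_order = [sched.dequeue().queue_id for _ in range(4)]  # queue 0 served first
```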

  4. Reflectivity retrieval in a networked radar environment

    NASA Astrophysics Data System (ADS)

    Lim, Sanghun

    Monitoring of precipitation using a high-frequency radar system such as X-band is becoming increasingly popular due to its lower cost compared to its counterpart at S-band. Networks of meteorological radar systems at higher frequencies are being pursued for targeted applications such as coverage over a city or a small basin. However, at higher frequencies, the impact of attenuation due to precipitation needs to be resolved for successful implementation. In this research, new attenuation correction algorithms are introduced to compensate the attenuation impact due to rain medium. In order to design X-band radar systems as well as evaluate algorithm development, it is useful to have simultaneous X-band observation with and without the impact of path attenuation. One way to obtain that data set is through theoretical models. Methodologies for generating realistic range profiles of radar variables at attenuating frequencies such as X-band for rain medium are presented here. Fundamental microphysical properties of precipitation, namely size and shape distribution information, are used to generate realistic profiles of X-band starting with S-band observations. Conditioning the simulation from S-band radar measurements maintains the natural distribution of microphysical parameters associated with rainfall. In this research, data taken by the CSU-CHILL radar and the National Center for Atmospheric Research S-POL radar are used to simulate X-band radar variables. Three procedures to simulate the radar variables at X-band and sample applications are presented. A new attenuation correction algorithm based on profiles of reflectivity, differential reflectivity, and differential propagation phase shift is presented. A solution for specific attenuation retrieval in rain medium is proposed that solves the integral equations for reflectivity and differential reflectivity with cumulative differential propagation phase shift constraint. The conventional rain profiling algorithms that connect reflectivity and specific attenuation can retrieve specific attenuation values along the radar path assuming a constant intercept parameter of the normalized drop size distribution. However, in convective storms, the drop size distribution parameters can have significant variation along the path. In this research, a dual-polarization rain profiling algorithm for horizontal-looking radars incorporating reflectivity as well as differential reflectivity profiles is developed. The dual-polarization rain profiling algorithm has been evaluated with X-band radar observations simulated from drop size distribution derived from high-resolution S-band measurements collected by the CSU-CHILL radar. The analysis shows that the dual-polarization rain profiling algorithm provides significant improvement over the current algorithms. A methodology for reflectivity and attenuation retrieval for rain medium in a networked radar environment is described. Electromagnetic waves backscattered from a common volume in networked radar systems are attenuated differently along the different paths. A solution for the specific attenuation distribution is proposed by solving the integral equation for reflectivity. The set of governing integral equations describing the backscatter and propagation of common resolution volume are solved simultaneously with constraints on total path attenuation. The proposed algorithm is evaluated based on simulated X-band radar observations synthesized from S-band measurements collected by the CSU-CHILL radar. 
Retrieved reflectivity and specific attenuation using the proposed method show good agreement with simulated reflectivity and specific attenuation.

  5. Identifying Patterns in the Weather of Europe for Source Term Estimation

    NASA Astrophysics Data System (ADS)

    Klampanos, Iraklis; Pappas, Charalambos; Andronopoulos, Spyros; Davvetas, Athanasios; Ikonomopoulos, Andreas; Karkaletsis, Vangelis

    2017-04-01

    During emergencies that involve the release of hazardous substances into the atmosphere the potential health effects on the human population and the environment are of primary concern. Such events have occurred in the past, most notably involving radioactive and toxic substances. Examples of radioactive release events include the Chernobyl accident in 1986, as well as the more recent Fukushima Daiichi accident in 2011. Often, the release of dangerous substances in the atmosphere is detected at locations different from the release origin. The objective of this work is the rapid estimation of such unknown sources shortly after the detection of dangerous substances in the atmosphere, with an initial focus on nuclear or radiological releases. Typically, after the detection of a radioactive substance in the atmosphere indicating the occurrence of an unknown release, the source location is estimated via inverse modelling. However, depending on factors such as the spatial resolution desired, traditional inverse modelling can be computationally time-consuming. This is especially true for cases where complex topography and weather conditions are involved and can therefore be problematic when timing is critical. Making use of machine learning techniques and the Big Data Europe platform, our approach moves the bulk of the computation to before any such event takes place, therefore allowing for rapid initial, albeit rougher, estimations regarding the source location. Our proposed approach is based on the automatic identification of weather patterns within the European continent. Identifying weather patterns has long been an active research field. Our case is differentiated by the fact that it focuses on plume dispersion patterns and those meteorological variables that affect dispersion the most. For a small set of recurrent weather patterns, we simulate hypothetical radioactive releases from a pre-known set of nuclear reactor locations and for different substance and temporal parameters, using the Java flavour of the Euratom-supported RODOS (Real-time On-line DecisiOn Support) system for off-site emergency management after nuclear accidents. Once dispersions have been pre-computed, and immediately after a detected release, the currently observed weather can be matched to the derived weather classes. Since each weather class corresponds to a different plume dispersion pattern, the closest classes to an unseen weather sample, say the current weather, are the most likely to lead us to the release origin. In addressing the above problem, we make use of multiple years of weather reanalysis data from NCAR's version of ECMWF's ERA-Interim. To derive useful weather classes, we evaluate several algorithms, ranging from straightforward unsupervised clustering to more complex methods, including relevant neural-network algorithms, on multiple variables. Variables and feature sets, clustering algorithms and evaluation approaches are all dealt with and presented experimentally. The Big Data Europe platform allows for the implementation and execution of the above tasks in the cloud, in a scalable, robust and efficient way.

  6. Genetic algorithm for investigating flight MH370 in Indian Ocean using remotely sensed data

    NASA Astrophysics Data System (ADS)

    Marghany, Maged; Mansor, Shattri; Shariff, Abdul Rashid Bin Mohamed

    2016-06-01

    This study utilized a genetic algorithm (GA) for automatic detection and for simulating the trajectory of flight MH370 debris. Data from the Ocean Surface Topography Mission (OSTM) on the Jason-2 satellite covering one and a half years were used to simulate the pattern of flight MH370 debris movement across the southern Indian Ocean. A multi-objective evolutionary algorithm was also used to discriminate the uncertainty of flight MH370 imaging and detection. The study shows an ocean surface current speed of 0.5 m/s; these current patterns developed a large anticlockwise gyre over a water depth of 8,000 m. The multi-objective evolutionary algorithm suggests that objects visible in the satellite data are not flight MH370 debris, and that the exact location of flight MH370 is difficult to acquire owing to the complicated hydrodynamic movements across the southern Indian Ocean.

  7. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to the potential of compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach considers the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies a derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates in iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.

  8. Performance evaluation of the multiple root node approach to the Rete pattern matcher for production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Gaudiot, J.-L.

    1991-12-31

    Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern matching step of production systems. In this paper, the authors investigate possible improvements to the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm have been identified, based on which they introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple-root-node-based production system interpreter is presented to investigate its relative algorithmic behavior against the Rete-based OPS5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential machine (Sun 4/490) using both interpreters, and various experimental results are presented. The investigation indicates that the multiple-root-node-based production system interpreter gives a maximum of up to a 6-fold improvement over the Lisp implementation of the Rete-based OPS5 for the match step.

  9. Moving Object Detection Using a Parallax Shift Vector Algorithm

    NASA Astrophysics Data System (ADS)

    Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.

    2018-07-01

    There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid search and the more sensitive matched filtering and synthetic tracking techniques, can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness to asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.

  10. Indexing amyloid peptide diffraction from serial femtosecond crystallography: New algorithms for sparse patterns

    DOE PAGES

    Brewster, Aaron S.; Sawaya, Michael R.; Rodriguez, Jose; ...

    2015-01-23

    Still diffraction patterns from peptide nanocrystals with small unit cells are challenging to index using conventional methods owing to the limited number of spots and the lack of crystal orientation information for individual images. New indexing algorithms have been developed as part of the Computational Crystallography Toolbox (cctbx) to overcome these challenges. Accurate unit-cell information derived from an aggregate data set from thousands of diffraction patterns can be used to determine a crystal orientation matrix for individual images with as few as five reflections. These algorithms are potentially applicable not only to amyloid peptides but also to any set of diffraction patterns with sparse properties, such as low-resolution virus structures or high-throughput screening of still images captured by raster-scanning at synchrotron sources. As a proof of concept for this technique, successful integration of X-ray free-electron laser (XFEL) data to 2.5 Å resolution for the amyloid segment GNNQQNY from the Sup35 yeast prion is presented.

  11. An absolute interval scale of order for point patterns

    PubMed Central

    Protonotarios, Emmanouil D.; Baum, Buzz; Johnston, Alan; Hunter, Ginger L.; Griffin, Lewis D.

    2014-01-01

    Human observers readily make judgements about the degree of order in planar arrangements of points (point patterns). Here, based on pairwise ranking of 20 point patterns by degree of order, we have been able to show that judgements of order are highly consistent across individuals and the dimension of order has an interval scale structure spanning roughly 10 just-noticeable differences (jnd) between disorder and order. We describe a geometric algorithm that estimates order to an accuracy of half a jnd by quantifying the variability of the size and shape of spaces between points. The algorithm is 70% more accurate than the best available measures. By anchoring the output of the algorithm so that Poisson point processes score on average 0, perfect lattices score 10 and unit steps correspond closely to jnds, we construct an absolute interval scale of order. We demonstrate its utility in biology by using this scale to quantify order during the development of the pattern of bristles on the dorsal thorax of the fruit fly. PMID:25079866
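
    The authors' calibrated 0-10 scale is not reproduced here; as a crude, uncalibrated proxy for "variability of the size and shape of spaces between points", the sketch below scores a point pattern by the coefficient of variation of its bounded Voronoi cell areas (shape variability is ignored):

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_area_cv(points):
    """Coefficient of variation of bounded Voronoi cell areas (lower = more ordered)."""
    vor = Voronoi(points)
    areas = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:   # skip unbounded border cells
            continue
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        # shoelace formula for the polygon area
        areas.append(0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))
    areas = np.array(areas)
    return float(areas.std() / areas.mean())

rng = np.random.default_rng(0)
poisson = rng.random((400, 2))                                   # disordered pattern
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
lattice = np.c_[gx.ravel(), gy.ravel()] + 0.002 * rng.standard_normal((400, 2))
# The near-lattice pattern should score far lower (i.e. appear more ordered).
print(voronoi_area_cv(poisson), voronoi_area_cv(lattice))
```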

  12. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.

  13. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  14. Continuous Adaptive Population Reduction (CAPR) for Differential Evolution Optimization.

    PubMed

    Wong, Ieong; Liu, Wenjia; Ho, Chih-Ming; Ding, Xianting

    2017-06-01

    Differential evolution (DE) has been applied extensively in drug combination optimization studies in the past decade. It allows for identification of desired drug combinations with minimal experimental effort. This article proposes an adaptive population-sizing method for the DE algorithm. Our new method presents improvements in terms of efficiency and convergence over the original DE algorithm and constant stepwise population reduction-based DE algorithm, which would lead to a reduced number of cells and animals required to identify an optimal drug combination. The method continuously adjusts the reduction of the population size in accordance with the stage of the optimization process. Our adaptive scheme limits the population reduction to occur only at the exploitation stage. We believe that continuously adjusting for a more effective population size during the evolutionary process is the major reason for the significant improvement in the convergence speed of the DE algorithm. The performance of the method is evaluated through a set of unimodal and multimodal benchmark functions. In combining with self-adaptive schemes for mutation and crossover constants, this adaptive population reduction method can help shed light on the future direction of a completely parameter tune-free self-adaptive DE algorithm.
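
    The record gives no formulas for the reduction schedule, so the sketch below only illustrates the gating idea: keep the population fixed while a diversity measure indicates exploration, then shrink it smoothly (dropping the worst individuals) once the search has entered the exploitation stage; the thresholds and decay are illustrative assumptions:

```python
import numpy as np

def target_population_size(gen, max_gen, pop_init, pop_min,
                           diversity, diversity_threshold):
    """Continuous schedule: no reduction until low diversity signals exploitation."""
    if diversity > diversity_threshold:   # still exploring: keep the full population
        return pop_init
    frac = gen / max_gen                  # smooth decay during exploitation
    return max(pop_min, int(round(pop_init - (pop_init - pop_min) * frac)))

def reduce_population(pop, fitness, new_size):
    """Drop the worst-ranked individuals (minimization) down to the target size."""
    keep = np.argsort(fitness)[:new_size]
    return pop[keep], fitness[keep]

# Inside a DE generation loop one might call (names are illustrative):
#   diversity = float(np.mean(np.std(pop, axis=0)))
#   n_target = target_population_size(gen, max_gen, 100, 20, diversity, 0.05)
#   if n_target < len(pop):
#       pop, fitness = reduce_population(pop, fitness, n_target)
```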

  15. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  16. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods : algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  17. Finite time convergent learning law for continuous neural networks.

    PubMed

    Chairez, Isaac

    2014-02-01

    This paper addresses the design of a discontinuous finite time convergent learning law for neural networks with continuous dynamics. The neural network was used here to obtain a non-parametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties was the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on discontinuous algorithms was used to adjust the weights of the neural network. The adaptive algorithm was derived by means of a non-standard Lyapunov function that is lower semi-continuous and differentiable in almost the whole space. A compensator term was included in the identifier to reject some specific perturbations using a nonlinear robust algorithm. Two numerical examples demonstrated the improvements achieved by the learning algorithm introduced in this paper compared to classical schemes with continuous learning methods. The first one dealt with a benchmark problem used in the paper to explain how the discontinuous learning law works. The second one used the methane production model to show the benefits in engineering applications of the learning law proposed in this paper. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Shear wave speed recovery using moving interference patterns obtained in sonoelastography experiments.

    PubMed

    McLaughlin, Joyce; Renzi, Daniel; Parker, Kevin; Wu, Zhe

    2007-04-01

    Two new experiments were created to characterize the elasticity of soft tissue using sonoelastography. In both experiments the spectral variance image displayed on a GE LOGIC 700 ultrasound machine shows a moving interference pattern that travels at a very small fraction of the shear wave speed. The goal of this paper is to devise and test algorithms to calculate the speed of the moving interference pattern using the arrival times of these same patterns. A geometric optics expansion is used to obtain Eikonal equations relating the moving interference pattern arrival times to the moving interference pattern speed and then to the shear wave speed. A cross-correlation procedure is employed to find the arrival times; and an inverse Eikonal solver called the level curve method computes the speed of the interference pattern. The algorithm is tested on data from a phantom experiment performed at the University of Rochester Center for Biomedical Ultrasound.

  19. Employing canopy hyperspectral narrowband data and random forest algorithm to differentiate palmer amaranth from colored cotton

    USDA-ARS?s Scientific Manuscript database

    Palmer amaranth (Amaranthus palmeri S. Wats.) invasion negatively impacts cotton (Gossypium hirsutum L.) production systems throughout the United States. The objective of this study was to evaluate canopy hyperspectral narrowband data as input into the random forest machine learning algorithm to dis...

  20. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations: High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  1. Seismic Investigation of Magmatic Unrest Beneath Mammoth Mountain, California Using Waveform Cross-Correlation

    NASA Astrophysics Data System (ADS)

    Lin, G.

    2012-12-01

    We investigate the seismic and magmatic activity during an 11-month-long seismic swarm between 1989 and 1990 beneath Mammoth Mountain (MM) at the southwest rim of Long Valley caldera in eastern California. This swarm is believed to be results of a shallow intrusion of magma beneath MM. It was followed by the emissions of carbon dioxide (CO2) gas, which caused tree-killings in 1990 and posed a significant human health risk around MM. In this study, we develop a new three-dimensional (3-D) P-wave velocity model using first-arrival picks by applying the simul2000 tomographic algorithm. The resulting 3-D model is correlated with the surface geological features at shallow depths and is used to constrain absolute earthquake locations for all local events in our study. We compute both P- and S-wave differential times using a time-domain waveform cross-correlation method. We then apply similar event cluster analysis and differential time location approach to further improve relative event location accuracy. A dramatic sharpening of seismicity pattern is obtained after these processes. The estimated uncertainties are a few meters in relative location and ~100 meters in absolute location. We also apply a high-resolution approach to estimate in situ near-source Vp/Vs ratios using differential times from waveform cross-correlation. This method provides highly precise results because cross-correlation can measure differential times to within a few milliseconds and can achieve a precision of 0.001 in estimated Vp/Vs ratio. Our results show a circular ring-like seismicity pattern with a diameter of 2 km between 3 and 8 km depth. These events are distributed in an anomalous body with low Vp and high Vp/Vs, which may be caused by over-pressured magmatically derived fluids. At shallower depths, we observe very low Vp/Vs anomalies beneath MM from the surface to 1 km below sea level whose locations agree with the proposed CO2 reservoir in previous studies. The systematic spatial and temporal migration of seismicity suggests fluid involvement in the seismic swarm. Our results will provide more robust constraints on the crustal structure and volcanic processes beneath Mammoth Mountain.
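
    The study's processing chain is not reproduced here; the following sketch shows the core primitive it relies on, a cross-correlation differential-time measurement with parabolic sub-sample refinement, on a synthetic delayed pulse:

```python
import numpy as np

def differential_time(wave_a, wave_b, dt):
    """Sub-sample time shift of wave_b relative to wave_a via cross-correlation.

    A positive result means wave_b arrives later than wave_a.
    """
    a = (wave_a - wave_a.mean()) / wave_a.std()
    b = (wave_b - wave_b.mean()) / wave_b.std()
    cc = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(b))
    k = int(np.argmax(cc))
    shift = 0.0
    if 0 < k < len(cc) - 1:                 # parabolic peak interpolation
        denom = cc[k - 1] - 2.0 * cc[k] + cc[k + 1]
        if denom != 0.0:
            shift = 0.5 * (cc[k - 1] - cc[k + 1]) / denom
    return (lags[k] + shift) * dt

# Synthetic check: a Ricker-like pulse delayed by 23.4 ms, sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 4.0, dt)
pulse = lambda t0: (1 - 2 * (np.pi * 5 * (t - t0)) ** 2) * np.exp(-(np.pi * 5 * (t - t0)) ** 2)
tau = differential_time(pulse(1.0), pulse(1.0234), dt)   # ~0.023 s
```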

  2. Rash Decisions: An Approach to Dangerous Rashes Based on Morphology.

    PubMed

    Santistevan, Jamie; Long, Brit; Koyfman, Alex

    2017-04-01

    Rash is a common complaint in the emergency department. Many causes of rash are benign; however, some patients may have a life-threatening diagnosis. This review will present an algorithmic approach to rashes, focusing on life-threatening causes of rash in each category. Rash is common, with a wide range of etiologies. The differential is broad, consisting of many conditions that are self-resolving. However, several conditions associated with rash are life threatening. Several keys can be utilized to rapidly diagnose and manage these deadly rashes. Thorough history and physical examination, followed by consideration of red flags, are essential. This review focuses on four broad categories based on visual and tactile characteristic patterns of rashes: petechial/purpuric, erythematous, maculopapular, and vesiculobullous. Rashes in each morphologic group will be further categorized based on clinical features such as the presence or absence of fever and distribution of skin lesions. Rashes can be divided into petechial/purpuric, erythematous, maculopapular, and vesiculobullous. After this differentiation, the presence of fever and systemic signs of illness should be assessed. Through the breakdown of rashes into these classes, emergency providers can ensure deadly conditions are considered. Published by Elsevier Inc.

  3. The genetic structure of a relict population of wood frogs

    USGS Publications Warehouse

    Scherer, Rick; Muths, Erin; Noon, Barry; Oyler-McCance, Sara

    2012-01-01

    Habitat fragmentation and the associated reduction in connectivity between habitat patches are commonly cited causes of genetic differentiation and reduced genetic variation in animal populations. We used eight microsatellite markers to investigate genetic structure and levels of genetic diversity in a relict population of wood frogs (Lithobates sylvatica) in Rocky Mountain National Park, Colorado, where recent disturbances have altered hydrologic processes and fragmented amphibian habitat. We also estimated migration rates among subpopulations, tested for a pattern of isolation-by-distance, and looked for evidence of a recent population bottleneck. The results from the clustering algorithm in Program STRUCTURE indicated the population is partitioned into two genetic clusters (subpopulations), and this result was further supported by factorial component analysis. In addition, an estimate of F_ST (F_ST = 0.0675, P value < 0.0001) supported the genetic differentiation of the two clusters. Estimates of migration rates among the two subpopulations were low, as were estimates of genetic variability. Conservation of the population of wood frogs may be improved by increasing the spatial distribution of the population and improving gene flow between the subpopulations. Construction or restoration of wetlands in the landscape between the clusters has the potential to address each of these objectives.

  4. Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI

    2006-07-01

    This paper presents the results from the application of genetic algorithms to a refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at making a mathematical model of the refueling optimization problem including the objective function and constraints and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor which adopts an on-power refueling operation. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded by a one-dimensional binary chromosome, which is a string of binary numbers 0 and 1. A computer program was developed in FORTRAN 90 running on an HP 9000 workstation to conduct the search for the optimal refueling patterns for a CANDU reactor at the equilibrium state. The results showed that it was possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program and the results were consistent with each other. (authors)
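
    The FORTRAN 90 implementation and the core-physics evaluation are not available in the record; the sketch below is only a generic binary-chromosome GA with elitism, with a toy merit function in place of the burn-up/channel-power objectives and constraints:

```python
import numpy as np

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=150,
                      p_cross=0.8, p_mut=0.02, n_elite=2, seed=0):
    """Binary-chromosome GA with elitism: the best individuals survive unchanged."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))

    def tournament(scores):
        i, j = rng.integers(pop_size, size=2)
        return i if scores[i] >= scores[j] else j

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-n_elite:]].copy()       # elitism
        children = []
        while len(children) < pop_size - n_elite:
            a, b = pop[tournament(scores)], pop[tournament(scores)]
            if rng.random() < p_cross:                           # one-point crossover
                cut = rng.integers(1, n_bits)
                a = np.r_[a[:cut], b[cut:]]
            a = np.where(rng.random(n_bits) < p_mut, 1 - a, a)   # bit-flip mutation
            children.append(a)
        pop = np.vstack([elite, *children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], float(scores.max())

# Toy merit function (not a CANDU core model): reward selecting exactly eight
# channels while favouring a hypothetical preferred zone of the core.
prefer = np.r_[np.zeros(12), np.ones(8)]
merit = lambda x: float(np.dot(x, prefer) - 2.0 * abs(int(x.sum()) - 8))
best_pattern, best_merit = genetic_algorithm(merit, n_bits=20)
```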

  5. rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison.

    PubMed

    Hahn, Lars; Leimeister, Chris-André; Ounit, Rachid; Lonardi, Stefano; Morgenstern, Burkhard

    2016-10-01

    Many algorithms for sequence analysis rely on word matching or word statistics. Often, these approaches can be improved if binary patterns representing match and don't-care positions are used as a filter, such that only those positions of words are considered that correspond to the match positions of the patterns. The performance of these approaches, however, depends on the underlying patterns. Herein, we show that the overlap complexity of a pattern set, a measure introduced by Ilie and Ilie, is closely related to the variance of the number of matches between two evolutionarily related sequences with respect to this pattern set. We propose a modified hill-climbing algorithm to optimize pattern sets for database searching, read mapping and alignment-free sequence comparison of nucleic-acid sequences; our implementation of this algorithm is called rasbhari. Depending on the application at hand, rasbhari can either minimize the overlap complexity of pattern sets, maximize their sensitivity in database searching or minimize the variance of the number of pattern-based matches in alignment-free sequence comparison. We show that, for database searching, rasbhari generates pattern sets with slightly higher sensitivity than existing approaches. In our Spaced Words approach to alignment-free sequence comparison, pattern sets calculated with rasbhari led to more accurate estimates of phylogenetic distances than the randomly generated pattern sets that we previously used. Finally, we used rasbhari to generate patterns for short read classification with CLARK-S. Here too, the sensitivity of the results could be improved, compared to the default patterns of the program. We integrated rasbhari into Spaced Words; the source code of rasbhari is freely available at http://rasbhari.gobics.de/.
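
    The following sketch illustrates spaced-word matching under a single binary pattern (1 = match position, 0 = don't-care), the kind of object that rasbhari optimizes; the pattern and sequences are made up, and the hill-climbing optimizer itself is not reproduced.

```python
# Hedged sketch of spaced-word matching with a binary pattern.
# This is not rasbhari's optimizer, just an illustration of how a pattern
# filters word comparisons between two sequences.

from collections import Counter

def spaced_words(seq, pattern):
    """Yield the spaced word at every position where the pattern fits."""
    match_pos = [i for i, c in enumerate(pattern) if c == "1"]
    for start in range(len(seq) - len(pattern) + 1):
        yield "".join(seq[start + i] for i in match_pos)

def spaced_word_matches(s1, s2, pattern):
    """Number of spaced-word matches between two sequences under one pattern."""
    counts1 = Counter(spaced_words(s1, pattern))
    counts2 = Counter(spaced_words(s2, pattern))
    return sum(counts1[w] * counts2[w] for w in counts1 if w in counts2)

pattern = "1100101"   # hypothetical pattern; rasbhari would optimize a whole set
s1 = "ACGTACGGTACGTTACGA"
s2 = "ACGTTCGGTACGATACGA"
print("spaced-word matches:", spaced_word_matches(s1, s2, pattern))
```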

  6. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
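
    A hedged sketch of the identification objective: simulate the Bouc-Wen hysteretic force for a candidate parameter vector and score it against measured data. A common Bouc-Wen formulation and synthetic data are assumed; the paper's exact model variant, data and the Modified Firefly optimizer are not reproduced.

```python
import numpy as np

# Hedged sketch of the error objective that a metaheuristic (firefly, GA, DE)
# would minimize when identifying Bouc-Wen parameters. A common formulation is
# assumed: dz/dt = A*dx/dt - beta*|dx/dt|*|z|^(n-1)*z - gamma*(dx/dt)*|z|^n,
# with restoring force F = alpha*k*x + (1 - alpha)*k*z.

def bouc_wen_force(params, x, dt):
    """Restoring force for a displacement history x sampled at step dt."""
    alpha, k, A, beta, gamma, n = params
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        dx = (x[i] - x[i - 1]) / dt
        dz = A * dx - beta * abs(dx) * abs(z[i - 1]) ** (n - 1) * z[i - 1] \
             - gamma * dx * abs(z[i - 1]) ** n
        z[i] = z[i - 1] + dz * dt                 # explicit Euler step
    return alpha * k * x + (1 - alpha) * k * z

def objective(params, x, f_measured, dt):
    """Sum of squared errors between model force and measured force."""
    f_model = bouc_wen_force(params, x, dt)
    return float(np.sum((f_model - f_measured) ** 2))

# Synthetic demonstration: generate "measured" data from known parameters,
# then evaluate the objective for a perturbed guess.
dt = 0.01
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 0.5 * t)
true_params = np.array([0.5, 2.0, 1.0, 0.5, 0.5, 1.5])
f_measured = bouc_wen_force(true_params, x, dt)
guess = np.array([0.4, 1.8, 1.1, 0.6, 0.4, 1.4])
print("objective at guess:", objective(guess, x, f_measured, dt))
```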

  7. The Mucciardi-Gose Clustering Algorithm and Its Applications in Automatic Pattern Recognition.

    DTIC Science & Technology

    A procedure known as the Mucciardi-Gose clustering algorithm, CLUSTR, for determining the geometrical or statistical relationships among groups of N...discussion of clustering algorithms is given; the particular advantages of the Mucciardi-Gose procedure are described. The mathematical basis for, and the

  8. SciCADE 95: International conference on scientific computation and differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-12-31

    This report consists of abstracts from the conference. Topics include algorithms, computer codes, and numerical solutions for differential equations. Linear and nonlinear as well as boundary-value and initial-value problems are covered. Various applications of these problems are also included.

  9. Application of differential evolution algorithm on self-potential data.

    PubMed

    Li, Xiangtao; Yin, Minghao

    2012-01-01

    Differential evolution (DE) is a population-based evolutionary algorithm widely used for solving multidimensional global optimization problems over continuous spaces, and it has been successfully applied to several kinds of problems. In this paper, differential evolution is used for quantitative interpretation of self-potential data in geophysics. Six parameters are estimated, including the electrical dipole moment, the depth of the source, the distance from the origin, the polarization angle and the regional coefficients. This study considers three kinds of data from Turkey: noise-free data, contaminated synthetic data, and a field example. The evolution of the model parameters is tracked as a function of the number of generations. We then show the variation of the parameters in the vicinity of the low-misfit area. Moreover, we show how the frequency distribution of each parameter is related to the number of DE iterations. Experimental results show that DE can be used to solve the quantitative interpretation of self-potential data efficiently compared with previous methods.
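
    A minimal sketch of the DE/rand/1/bin scheme the abstract refers to, using a placeholder misfit function; in a real inversion, the self-potential forward model (dipole moment, depth, origin distance, polarization angle, regional coefficients) would replace the placeholder.

```python
import numpy as np

# Hedged sketch of DE/rand/1/bin on a placeholder objective; the actual study
# minimizes the misfit between observed self-potential data and a forward model.

rng = np.random.default_rng(0)

def misfit(theta):
    # Placeholder misfit (sphere function); swap in the self-potential forward
    # model residual for a real inversion.
    return float(np.sum(theta ** 2))

dim, pop_size, F, CR, generations = 6, 30, 0.7, 0.9, 200
lower, upper = -10.0, 10.0
pop = rng.uniform(lower, upper, size=(pop_size, dim))
cost = np.array([misfit(ind) for ind in pop])

for _ in range(generations):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), lower, upper)   # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                   # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])           # binomial crossover
        trial_cost = misfit(trial)
        if trial_cost <= cost[i]:                         # greedy selection
            pop[i], cost[i] = trial, trial_cost

print("best misfit:", cost.min(), "best parameters:", pop[cost.argmin()].round(3))
```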

  10. Application of Differential Evolution Algorithm on Self-Potential Data

    PubMed Central

    Li, Xiangtao; Yin, Minghao

    2012-01-01

    Differential evolution (DE) is a population-based evolutionary algorithm widely used for solving multidimensional global optimization problems over continuous spaces, and it has been successfully applied to several kinds of problems. In this paper, differential evolution is used for quantitative interpretation of self-potential data in geophysics. Six parameters are estimated, including the electrical dipole moment, the depth of the source, the distance from the origin, the polarization angle and the regional coefficients. This study considers three kinds of data from Turkey: noise-free data, contaminated synthetic data, and a field example. The evolution of the model parameters is tracked as a function of the number of generations. We then show the variation of the parameters in the vicinity of the low-misfit area. Moreover, we show how the frequency distribution of each parameter is related to the number of DE iterations. Experimental results show that DE can be used to solve the quantitative interpretation of self-potential data efficiently compared with previous methods. PMID:23240004

  11. Image recombination transform algorithm for superresolution structured illumination microscopy

    PubMed Central

    Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.

    2016-01-01

    Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. The generation of structured illumination patterns by interference of laser beams is broadly employed to obtain a high modulation depth of the patterns, but the polarizations of the laser beams must be elaborately controlled to guarantee high contrast of the interference intensity, which requires a more complex configuration for polarization control. The emerging pattern-projection strategy is much more compact, but the modulation depth of the patterns is deteriorated by the optical transfer function of the optical system, especially at high spatial frequencies near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM will suffer from many artifacts in the case of projection-based SIM, which possesses a low modulation depth. Here, we propose an alternative reconstruction algorithm based on an image recombination transform, which addresses this problem even at a weak modulation depth. We demonstrated the effectiveness of this algorithm in the multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our projection-based SIM system, which applies a computer-controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The system, together with the proposed algorithm, allows for fluorescence imaging at excitation intensities below 1 W/cm2, which is beneficial for long-term, in vivo superresolved imaging of live cells and tissues. PMID:27653935

  12. Fringe pattern demodulation using the one-dimensional continuous wavelet transform: field-programmable gate array implementation.

    PubMed

    Abid, Abdulbasit

    2013-03-01

    This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm. This algorithm is also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed using the C programming language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison purposes, the 1D-CWT algorithm is programmed in C and compiled using the Intel compiler version 13.0. The compiled code is run on a Dell Precision state-of-the-art workstation, where the time required to process the fringe pattern image is approximately 1 s. To further reduce the execution time, the 1D-CWT is reprogrammed using the Intel Integrated Performance Primitives (IPP) library version 7.1, which reduces the execution time to approximately 650 ms. This confirms that at least a sixfold speedup was gained using the FPGA implementation over a state-of-the-art workstation executing a heavily optimized implementation of the 1D-CWT algorithm.
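
    A hedged NumPy sketch of the per-row processing such a pipeline performs: a 1D complex-Morlet CWT, ridge detection by maximum modulus, and extraction of the wrapped phase. The wavelet choice, scale range and synthetic fringe row are assumptions; this is a reference illustration, not the paper's VHDL/ImpulseC implementation.

```python
import numpy as np

# Hedged sketch of 1D-CWT fringe demodulation (wavelet transform profilometry):
# per-row CWT, ridge detection, wrapped phase. Reference illustration only.

def morlet(t, scale, omega0=6.0):
    """Complex Morlet wavelet dilated by `scale` (normalized up to a constant)."""
    u = t / scale
    return (np.pi ** -0.25) * np.exp(1j * omega0 * u) * np.exp(-0.5 * u ** 2) / np.sqrt(scale)

def demodulate_row(row, scales, support=80):
    """Return the wrapped phase of one fringe-image row via the CWT ridge."""
    t = np.arange(-support, support + 1, dtype=float)
    coeffs = np.array([np.convolve(row, morlet(t, s), mode="same") for s in scales])
    ridge = np.abs(coeffs).argmax(axis=0)          # scale of maximum modulus per pixel
    return np.angle(coeffs[ridge, np.arange(row.size)])

# Synthetic 512-pixel fringe row: carrier plus a smooth phase bump.
x = np.arange(512)
phase_true = 0.6 * x + 3.0 * np.exp(-((x - 256) / 80.0) ** 2)
row = 128 + 100 * np.cos(phase_true)
scales = np.linspace(4, 20, 40)
wrapped = demodulate_row(row - row.mean(), scales)  # remove the DC bias first
print("wrapped phase range:", wrapped.min(), wrapped.max())
```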

  13. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least-squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least-squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least-squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…). 1. BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feedforward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first-derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second-derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases, block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
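
    A minimal sketch of the RLS recursion on a plain linear model; in the paper the regressor would be the backpropagated sensitivity of the network output with respect to its parameters, and the matrix P plays the role of the N x N inverse-Hessian estimate responsible for the O(N^2) update cost. The data and settings below are illustrative.

```python
import numpy as np

# Hedged sketch of the recursive least-squares (RLS) recursion applied to a
# linear model; the paper applies the same recursion to linearized training
# pattern errors of a multilayer perceptron.

rng = np.random.default_rng(1)
N = 4                                     # number of parameters
w_true = rng.normal(size=N)
w = np.zeros(N)                           # parameter estimate
P = 1e3 * np.eye(N)                       # inverse "Hessian" estimate
lam = 0.99                                # forgetting factor

for _ in range(500):
    x = rng.normal(size=N)                # regressor (linearized sensitivity vector)
    d = w_true @ x + 0.01 * rng.normal()  # desired output for this training pattern
    e = d - w @ x                         # a priori error
    k = P @ x / (lam + x @ P @ x)         # gain vector
    w = w + k * e                         # parameter update
    P = (P - np.outer(k, x @ P)) / lam    # covariance / inverse-Hessian update

print("estimation error:", np.linalg.norm(w - w_true))
```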

  14. Recent Development of Multigrid Algorithms for Mixed and Nonconforming Methods for Second Order Elliptic Problems

    NASA Technical Reports Server (NTRS)

    Chen, Zhangxin; Ewing, Richard E.

    1996-01-01

    Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.
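
    To illustrate the role of the coarse-to-fine transfer operators discussed above, here is a hedged two-grid sketch for the 1D Poisson model problem with a standard finite-difference operator, weighted-Jacobi smoothing, full-weighting restriction and linear-interpolation prolongation; the paper's operators for nonconforming and mixed elements are considerably more involved.

```python
import numpy as np

# Hedged two-grid sketch for -u'' = f on (0, 1) with zero boundary values.

def poisson_matrix(n, h):
    """Standard operator (1/h^2) * tridiag(-1, 2, -1) on n interior nodes."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0 / h ** 2)
    np.fill_diagonal(A[1:], -1.0 / h ** 2)
    np.fill_diagonal(A[:, 1:], -1.0 / h ** 2)
    return A

def weighted_jacobi(A, u, f, sweeps=3, omega=2.0 / 3.0):
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / D
    return u

def restrict(r):
    """Full-weighting restriction: fine-grid residual -> coarse grid."""
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e, n_fine):
    """Linear-interpolation prolongation: coarse correction -> fine grid."""
    out = np.zeros(n_fine)
    out[1:-1:2] = e
    out[0:-2:2] += 0.5 * e
    out[2::2] += 0.5 * e
    return out

def two_grid(A_f, A_c, f, u):
    u = weighted_jacobi(A_f, u, f)                         # pre-smoothing
    e_c = np.linalg.solve(A_c, restrict(f - A_f @ u))      # coarse-grid correction
    u = u + prolong(e_c, f.size)                           # transfer back to fine grid
    return weighted_jacobi(A_f, u, f)                      # post-smoothing

m = 31                                                     # interior coarse nodes
n_f = 2 * m + 1                                            # interior fine nodes
h_f, h_c = 1.0 / (n_f + 1), 1.0 / (m + 1)
A_f, A_c = poisson_matrix(n_f, h_f), poisson_matrix(m, h_c)
x = np.linspace(h_f, 1.0 - h_f, n_f)
f = np.pi ** 2 * np.sin(np.pi * x)                         # exact solution is sin(pi x)

u = np.zeros(n_f)
for _ in range(10):
    u = two_grid(A_f, A_c, f, u)
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```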

  15. Transcriptome analysis of the painted lady butterfly, Vanessa cardui during wing color pattern development.

    PubMed

    Connahs, Heidi; Rhen, Turk; Simmons, Rebecca B

    2016-03-31

    Butterfly wing color patterns are an important model system for understanding the evolution and development of morphological diversity and animal pigmentation. Wing color patterns develop from a complex network composed of highly conserved patterning genes and pigmentation pathways. Patterning genes are involved in regulating pigment synthesis; however, the temporal expression dynamics of these interacting networks are poorly understood. Here, we employ next generation sequencing to examine expression patterns of the gene network underlying wing development in the nymphalid butterfly, Vanessa cardui. We identified 9,376 differentially expressed transcripts during wing color pattern development, including genes involved in patterning, pigmentation and gene regulation. Differential expression of these genes was highest at the pre-ommochrome stage compared to the early pupal and late melanin stages. Overall, an increasing number of genes were down-regulated during the progression of wing development. We observed dynamic expression patterns of a large number of pigment genes from the ommochrome, melanin and also pteridine pathways, including contrasting patterns of expression for paralogs of the yellow gene family. Surprisingly, many patterning genes previously associated with butterfly pattern elements were not significantly up-regulated at any time during pupation, although many other transcription factors were differentially expressed. Several genes involved in Notch signaling were significantly up-regulated during the pre-ommochrome stage, including slow border cells, bunched and pebbles; the function of these genes in the development of butterfly wings is currently unknown. Many genes involved in ecdysone signaling were also significantly up-regulated during the early pupal and late melanin stages and exhibited opposing patterns of expression relative to the ecdysone receptor. Finally, a comparison across four butterfly transcriptomes revealed 28 transcripts common to all four species that have no known homologs in other metazoans. This study provides a comprehensive list of differentially expressed transcripts during wing development, revealing potential candidate genes that may be involved in regulating butterfly wing patterns. Some differentially expressed genes have no known homologs, possibly representing genes unique to butterflies. Results from this study also indicate that development of nymphalid wing patterns may arise not only from melanin and ommochrome pigments but also from the pteridine pigment pathway.

  16. On the systematic approach to the classification of differential equations by group theoretical methods

    NASA Astrophysics Data System (ADS)

    Andriopoulos, K.; Dimas, S.; Leach, P. G. L.; Tsoubelis, D.

    2009-08-01

    Complete symmetry groups enable one to characterise fully a given differential equation. By considering the reversal of an approach based upon complete symmetry groups we construct new classes of differential equations which have the equations of Bateman, Monge-Ampère and Born-Infeld as special cases. We develop a symbolic algorithm to decrease the complexity of the calculations involved.

  17. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty of not knowing the exact distribution type beforehand, as encountered in many real circumstances. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested on actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking nearly the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy, in terms of reaching a better balance of estimation accuracy and computational effort, increases its potential as an inversion technique for reliable and efficient measurement of PSDs.
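
    A hedged sketch of the pattern-search (compass search) refinement stage that such a hybrid would apply to the best candidate returned by the ABC stage. The objective is a placeholder; a real inversion would minimize the misfit between measured and modelled spectral extinction for a Johnson-type size distribution.

```python
import numpy as np

# Hedged sketch of a compass (pattern) search refinement step. The objective
# stands in for the extinction-spectrum misfit; the ABC stage is not reproduced.

def pattern_search(objective, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float).copy()
    fx = objective(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):               # poll along +/- each coordinate direction
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = objective(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step *= shrink                    # contract the mesh when no poll succeeds
            if step < tol:
                break
    return x, fx

# Placeholder objective standing in for the extinction-spectrum misfit.
misfit = lambda p: float(np.sum((p - np.array([1.5, -0.7, 2.2])) ** 2))
x_abc = np.array([1.0, 0.0, 2.0])             # pretend this came from the ABC stage
x_ref, f_ref = pattern_search(misfit, x_abc)
print("refined parameters:", x_ref.round(4), "misfit:", round(f_ref, 8))
```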

  18. Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling

    NASA Astrophysics Data System (ADS)

    Meldgaard, A.; Nielsen, L.; Iaffaldano, G.

    2017-12-01

    The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D viscoelastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural workflow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time-consuming steps associated with authoring efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of dedicated commercial finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models and may require the inclusion of local topographic features. We use the presented algorithm to study a near-field area where field observations are abundant, namely Disko Bay in West Greenland, with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local topographic features may influence the differential isostatic uplift in the area.

  19. TH-E-BRE-05: Analysis of Dosimetric Characteristics in Two Leaf Motion Calculator Algorithms for Sliding Window IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, L; Huang, B; Rowedder, B

    Purpose: The Smart leaf motion calculator (SLMC) in the Eclipse treatment planning system is an advanced fluence-delivery modeling algorithm, as it takes into account fine MLC features including inter-leaf leakage, rounded leaf tips, non-uniform leaf thickness, and the spindle cavity. In this study, the SLMC and the traditional Varian LMC (VLMC) algorithms were investigated, for the first time, in terms of dosimetric characteristics and delivery accuracy of sliding window (SW) IMRT. Methods: SW IMRT plans of 51 cancer cases were included to evaluate the dosimetric characteristics and dose delivery accuracy of leaf motion calculated by SLMC and VLMC, respectively. All plans were delivered using a Varian TrueBeam Linac. The DVHs and MUs of the plans were analyzed. Three patient-specific QA tools - the independent dose calculation software IMSure, the Delta4 phantom, and EPID portal dosimetry - were also used to measure the delivered dose distribution. Results: Significant differences in the MUs were observed between the two LMCs (p ≤ 0.001). Gamma analysis shows an excellent agreement between the planned dose distribution calculated by both LMC algorithms and the delivered dose distribution measured by the three QA tools in all plans at 3%/3 mm, leading to a mean pass rate exceeding 97%. The mean fraction of pixels with gamma < 1 for SLMC is slightly lower than that for VLMC in the IMSure and Delta4 results, but higher in portal dosimetry (the highest spatial resolution), especially in complex cases such as nasopharynx. Conclusion: The study suggests that the two LMCs generate similar target coverage and sparing patterns for critical structures. However, SLMC is modestly more accurate than VLMC in modeling advanced MLC features, which may lead to more accurate dose delivery in SW IMRT. Current clinical QA tools might not be specific enough to differentiate the millimeter-level dosimetric discrepancies calculated by these two LMC algorithms. NIH/NIGMS grant U54 GM104944, Lincy Endowed Assistant Professorship.
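
    A minimal 1D sketch of the gamma-index criterion (3%/3 mm, global normalization) that underlies the pass rates quoted above; the dose profiles are synthetic, and this is not the vendors' Delta4 or portal-dosimetry implementation.

```python
import numpy as np

# Hedged 1D sketch of the gamma-index comparison between a planned (reference)
# and delivered (evaluated) dose profile at a 3%/3 mm criterion.

def gamma_1d(dose_ref, dose_eval, positions, dose_crit=0.03, dist_crit=3.0):
    """Per-point gamma of `dose_eval` against `dose_ref` on the same 1D grid (mm)."""
    d_norm = dose_crit * dose_ref.max()          # global normalization (% of max reference dose)
    gammas = np.empty_like(dose_eval)
    for i, (x_e, d_e) in enumerate(zip(positions, dose_eval)):
        dd = (dose_ref - d_e) / d_norm           # dose-difference term
        dx = (positions - x_e) / dist_crit       # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas

x = np.arange(0, 100, 1.0)                       # positions in mm
planned = np.exp(-((x - 50) / 20.0) ** 2)        # synthetic planned profile
delivered = np.exp(-((x - 51) / 20.0) ** 2) * 1.02   # shifted, slightly scaled
g = gamma_1d(planned, delivered, x)
print("gamma pass rate (gamma <= 1): %.1f%%" % (100.0 * np.mean(g <= 1.0)))
```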

  20. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
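
    For context, a minimal sketch of the underlying problem: computing the stationary distribution pi with pi P = pi by simple fixed-point (power) iteration on a small, hypothetical transition matrix. The multi-level algorithm accelerates this kind of solve with recursively coarsened chains; that machinery, and the Gauss-Seidel/SOR baselines, are not reproduced here.

```python
import numpy as np

# Hedged sketch: stationary distribution of a discrete-time Markov chain by
# power iteration (pi P = pi, sum(pi) = 1). Illustrative transition matrix only.

P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.04, 0.06, 0.90]])               # row-stochastic transition matrix

pi = np.full(P.shape[0], 1.0 / P.shape[0])       # start from the uniform distribution
for it in range(10000):
    pi_next = pi @ P                             # one step of the fixed-point iteration
    if np.abs(pi_next - pi).max() < 1e-12:
        break
    pi = pi_next

print("iterations:", it + 1)
print("stationary distribution:", pi.round(6))
print("residual max|pi P - pi|:", np.abs(pi @ P - pi).max())
```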
