Sample records for programming cellular algorithms

  1. Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.

    PubMed

    Skoneczny, Szymon

    2015-01-01

    The article presents a mathematical model of biofilm growth for the aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and in the mathematical modelling of bioreactors. A process following double-substrate kinetics with substrate inhibition proceeding in a biofilm had not previously been modelled by means of cellular automata. Each process in the proposed model, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms, and biofilm detachment, is simulated in a discrete manner. It was shown that, for a flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was a mathematical model of biofilm growth; however, considerable focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were written in C++ using the OpenMP Application Programming Interface. Simulations of biofilm growth were performed on three high-performance computers. Speed-up coefficients of the computer programs were compared. Both algorithms enabled a significant reduction of computation time, which is important, inter alia, in the modelling and simulation of bioreactor dynamics.
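
    A minimal sketch of the kind of discrete update cycle the article describes: substrate diffusion, substrate-inhibition (Haldane-type) uptake, and biomass growth and decay on a 2D grid. This is not the authors' model; the grid, rate names and values are illustrative, and the serial NumPy formulation stands in for the paper's parallel C++/OpenMP programs.

    ```python
    import numpy as np

    def laplacian(f):
        """Five-point discrete Laplacian with periodic wrap (illustrative)."""
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

    def step(S, B, D=1.0, mu_max=0.5, Ks=0.2, Ki=5.0, Y=0.4, kd=0.01, dt=0.01):
        mu = mu_max * S / (Ks + S + S * S / Ki)   # substrate-inhibition kinetics
        uptake = mu * B / Y                       # substrate consumed by growth
        S = S + dt * (D * laplacian(S) - uptake)  # diffusion minus uptake
        B = B + dt * (mu - kd) * B                # growth minus decay
        return np.clip(S, 0.0, None), B

    S = np.ones((64, 64))                    # substrate field
    B = np.zeros((64, 64)); B[0, :] = 0.1    # biofilm seeded on one wall
    for _ in range(1000):
        S, B = step(S, B)
    ```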

  2. Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems.

    PubMed

    D'Onofrio, David J; Abel, David L; Johnson, Donald E

    2012-03-14

    The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationships found in cellular genomics and the concepts of information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or the actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition, suggesting that both prescribed data and prescribed algorithms are constituents of PI. This paper examines this dichotomy as expressed in both the genetic code and the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

  3. Algorithm for repairing the damaged images of grain structures obtained from the cellular automata and measurement of grain size

    NASA Astrophysics Data System (ADS)

    Ramírez-López, A.; Romero-Romo, M. A.; Muñoz-Negron, D.; López-Ramírez, S.; Escarela-Pérez, R.; Duran-Valencia, C.

    2012-10-01

    Computational models are developed to create grain structures using mathematical algorithms based on chaos theory, such as cellular automata, geometrical models, fractals, and stochastic methods. Because of the chaotic nature of grain structures, some of the most popular routines are based on the Monte Carlo method, statistical distributions, and random walk methods, which can be easily programmed and included in nested loops. Nevertheless, grain structures are often not well defined as a result of computational errors and numerical inconsistencies in the mathematical methods. Owing to the finite representation of numbers and numerical restrictions during the simulation of solidification, damaged images appear on the screen. These images must be repaired to obtain good measurements of grain geometrical properties. In the present work, mathematical algorithms were developed to repair, measure, and characterize grain structures obtained from cellular automata. Appropriate measurement of grain size and correct identification of interfaces and their lengths are very important topics in materials science because they allow mathematical models to be represented by and validated against real samples. The developed algorithms were tested and proved appropriate and efficient for eliminating the errors and characterizing the grain structures.
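
    A hedged sketch of the repair-and-measure step: morphological closing patches pinhole damage of the kind caused by numerical artifacts, and connected-component labeling then measures grain sizes. The record does not disclose the authors' actual routines; scipy.ndimage and the synthetic image below are illustrative stand-ins.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    img = rng.random((128, 128)) > 0.55     # stand-in for a damaged CA grain image

    # Repair: closing fills isolated holes left by numerical inconsistencies.
    repaired = ndimage.binary_closing(img, structure=np.ones((3, 3)))

    # Measure: label grains, then compute per-grain areas and a mean diameter.
    labels, n = ndimage.label(repaired)
    sizes = np.bincount(labels.ravel())[1:]          # pixel count per grain
    mean_diam = 2.0 * np.sqrt(sizes.mean() / np.pi)  # equivalent-circle diameter
    print(n, mean_diam)
    ```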

  4. Isolating specific cell and tissue compartments from 3D images for quantitative regional distribution analysis using novel computer algorithms.

    PubMed

    Fenrich, Keith K; Zhao, Ethan Y; Wei, Yuan; Garg, Anirudh; Rose, P Ken

    2014-04-15

    Isolating specific cellular and tissue compartments from 3D image stacks for quantitative distribution analysis is crucial for understanding cellular and tissue physiology under normal and pathological conditions. Current approaches are limited because they are designed to map the distributions of synapses onto the dendrites of stained neurons and/or require specific proprietary software packages for their implementation. To overcome these obstacles, we developed algorithms to Grow and Shrink Volumes of Interest (GSVI) to isolate specific cellular and tissue compartments from 3D image stacks for quantitative analysis and incorporated these algorithms into a user-friendly computer program that is open source and downloadable at no cost. The GSVI algorithm was used to isolate perivascular regions in the cortex of live animals and cell membrane regions of stained spinal motoneurons in histological sections. We tracked the real-time, intravital biodistribution of injected fluorophores with sub-cellular resolution from the vascular lumen to the perivascular and parenchymal space following a vascular microlesion, and mapped the precise distributions of membrane-associated KCC2 and gephyrin immunolabeling in dendritic and somatic regions of spinal motoneurons. Compared to existing approaches, the GSVI approach is specifically designed for isolating perivascular regions and membrane-associated regions for quantitative analysis, is user-friendly, and free. The GSVI algorithm is useful to quantify regional differences of stained biomarkers (e.g., cell membrane-associated channels) in relation to cell functions, and the effects of therapeutic strategies on the redistributions of biomolecules, drugs, and cells in diseased or injured tissues. Copyright © 2014 Elsevier B.V. All rights reserved.
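
    The grow-and-shrink idea can be approximated with 3D morphological dilation and erosion: the region of interest is the shell between a grown and a shrunk version of a mask. This sketch only illustrates the concept, not the GSVI implementation; the vessel mask and signal stack are synthetic.

    ```python
    import numpy as np
    from scipy import ndimage

    def shell_region(mask, grow=3, shrink=0):
        """Ring between a dilated and an eroded copy of a 3D binary mask."""
        grown = ndimage.binary_dilation(mask, iterations=grow) if grow else mask
        shrunk = ndimage.binary_erosion(mask, iterations=shrink) if shrink else mask
        return grown & ~shrunk

    vessel = np.zeros((32, 64, 64), bool)
    vessel[:, 28:36, 28:36] = True                  # toy vascular lumen
    perivascular = shell_region(vessel, grow=4)     # voxels just outside the lumen

    signal = np.random.rand(32, 64, 64)             # stand-in fluorescence stack
    print(signal[perivascular].mean())              # regional quantification
    ```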

  5. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm to solve the robot path finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consisting of relaxation oscillators, has been used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, starting from an initial point. The proposed algorithm works by observing the motion of the wave-front of the active waves. The host program first loads the arena model onto the active wave generator network and commands it to start the generation. It then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm is completed, a vectorial data image is generated; the path from any pixel of this image to the active-wave-generating pixel is given by the vectors on this image. The robot arena may be a complicated labyrinth or may have a simple geometry, but the arena surface must always be flat. Our Autowave Generator CNN implementation, hosted on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16,384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor on which the network image is displayed simultaneously.
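
    The wave-front observation step is essentially a breadth-first expansion that leaves at each pixel a vector pointing one step back toward the wave source; following those vectors from any pixel reproduces the path. A hedged serial sketch of that idea (the paper implements it with relaxation-oscillator CNN cells on an FPGA):

    ```python
    from collections import deque

    def wavefront(grid, src):
        """BFS 'active wave' from src; parent[p] points one step toward src."""
        rows, cols = len(grid), len(grid[0])
        parent = {src: src}
        q = deque([src])
        while q:
            r, c = q.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in parent):
                    parent[(nr, nc)] = (r, c)
                    q.append((nr, nc))
        return parent

    def path(parent, goal):
        p, node = [], goal
        while parent[node] != node:
            p.append(node)
            node = parent[node]
        return list(reversed(p))

    maze = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    par = wavefront(maze, (0, 0))
    print(path(par, (2, 0)))   # route from the wave source to pixel (2, 0)
    ```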

  6. Stimfit: quantifying electrophysiological data with Python

    PubMed Central

    Guzman, Segundo J.; Schlögl, Alois; Schmidt-Hieber, Christoph

    2013-01-01

    Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals. PMID:24600389
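
    As an illustration of the event-detection task, here is a generic sliding-template detector for spontaneous synaptic events. It is a common textbook approach, not necessarily Stimfit's exact criterion; the template shape, threshold and trace are invented.

    ```python
    import numpy as np

    def detect_events(trace, template, thresh=4.0):
        """Flag local maxima where the local correlation of trace with
        template exceeds `thresh` standard scores."""
        n = len(template)
        t = (template - template.mean()) / template.std()
        score = np.correlate(trace - trace.mean(), t, mode="valid") / n
        score = (score - score.mean()) / score.std()
        peaks = ((score[1:-1] > thresh) &
                 (score[1:-1] >= score[:-2]) &
                 (score[1:-1] >= score[2:]))
        return np.flatnonzero(peaks) + 1

    dt = 0.1                                    # ms per sample (illustrative)
    tmpl = np.exp(-np.arange(0, 5, dt) / 1.5)   # decaying-exponential PSC shape
    trace = np.random.randn(5000) * 0.2
    trace[1200:1200 + len(tmpl)] += tmpl        # embed one synthetic event
    print(detect_events(trace, tmpl))           # should report an index near 1200
    ```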

  7. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition-rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or a better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral-resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
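
    A toy version of the calibration loop: a genetic algorithm searches two transition-rule probabilities of a probabilistic CA so that the simulated activity matches an "observed" summary statistic. Everything here (the CA rule, the fitness, the GA operators) is illustrative, not the USGS model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(p_spread, p_spont, steps=20, n=64):
        """Toy probabilistic CA: cells activate near active neighbors with
        probability p_spread, or spontaneously with probability p_spont."""
        g = np.zeros((n, n), bool); g[n // 2, n // 2] = True
        for _ in range(steps):
            nb = sum(np.roll(g, s, a) for s in (-1, 1) for a in (0, 1))
            g |= (rng.random((n, n)) < p_spread) & (nb > 0)
            g |= rng.random((n, n)) < p_spont
        return g.mean()                      # fraction of "active" cells

    observed = simulate(0.30, 0.002)         # stand-in for observed permit data

    def fitness(ind):                        # stochastic, good enough for a demo
        return -abs(simulate(*ind) - observed)

    pop = rng.random((30, 2)) * [1.0, 0.01]  # candidate (p_spread, p_spont) pairs
    for gen in range(25):
        scores = np.array([fitness(i) for i in pop])
        parents = pop[np.argsort(scores)][-10:]                 # keep best 10
        kids = parents[rng.integers(10, size=(30, 2)), [0, 1]]  # uniform crossover
        kids += rng.normal(0, 0.02, kids.shape) * [1.0, 0.01]   # mutation
        pop = np.clip(kids, 0, [1.0, 0.01])
    print(pop[np.argmax([fitness(i) for i in pop])])
    ```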

  8. Modeling 2D and 3D diffusion.

    PubMed

    Saxton, Michael J

    2007-01-01

    Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
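
    A minimal obstructed random walk of the kind reviewed: a tracer on a 2D lattice with randomly placed immobile obstacles, with blocked moves rejected. At high obstacle concentrations the mean-square displacement grows sublinearly (anomalous diffusion). Parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, conc = 256, 0.3                       # lattice size, obstacle concentration
    obstacle = rng.random((n, n)) < conc
    obstacle[n // 2, n // 2] = False         # keep the start site free
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

    def walk(steps):
        pos = np.array([n // 2, n // 2])     # true (unwrapped) position
        r2 = np.empty(steps)
        for t in range(steps):
            trial = pos + moves[rng.integers(4)]
            if not obstacle[trial[0] % n, trial[1] % n]:   # reject blocked moves
                pos = trial
            r2[t] = ((pos - n // 2) ** 2).sum()
        return r2

    msd = np.mean([walk(1000) for _ in range(100)], axis=0)   # <r^2>(t)
    print(msd[99], msd[999])   # grows sublinearly when obstacles are dense
    ```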

  9. Single-cell topological RNA-Seq analysis reveals insights into cellular differentiation and development

    PubMed Central

    Rizvi, Abbas H.; Camara, Pablo G.; Kandror, Elena K.; Roberts, Thomas J.; Schieren, Ira; Maniatis, Tom; Rabadan, Raul

    2017-01-01

    Transcriptional programs control cellular lineage commitment and differentiation during development. Understanding cell fate has been advanced by studying single-cell RNA-seq, but is limited by the assumptions of current analytic methods regarding the structure of data. We present single-cell topological data analysis (scTDA), an algorithm for topology-based computational analyses to study temporal, unbiased transcriptional regulation. Compared to other methods, scTDA is a non-linear, model-independent, unsupervised statistical framework that can characterize transient cellular states. We applied scTDA to the analysis of murine embryonic stem cell (mESC) differentiation in vitro in response to inducers of motor neuron differentiation. scTDA resolved asynchrony and continuity in cellular identity over time, and identified four transient states (pluripotent, precursor, progenitor, and fully differentiated cells) based on changes in stage-dependent combinations of transcription factors, RNA-binding proteins and long non-coding RNAs. scTDA can be applied to study asynchronous cellular responses to either developmental cues or environmental perturbations. PMID:28459448

  10. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.

  11. Generic framework for mining cellular automata models on protein-folding simulations.

    PubMed

    Diaz, N; Tischer, I

    2016-05-13

    Cellular automata model identification is an important way of building simplified simulation models. In this study, we describe a generic architectural framework to ease the development process of new metaheuristic-based algorithms for cellular automata model identification in protein-folding trajectories. Our framework was developed by a methodology based on design patterns that allow an improved experience for new algorithms development. The usefulness of the proposed framework is demonstrated by the implementation of four algorithms, able to obtain extremely precise cellular automata models of the protein-folding process with a protein contact map representation. Dynamic rules obtained by the proposed approach are discussed, and future use for the new tool is outlined.

  12. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    PubMed Central

    Zhu, Hao; Sun, Yan; Rajagopal, Gunaretnam; Mondry, Adrian; Dhar, Pawan

    2004-01-01

    Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described. PMID:15339335
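
    The flavor of excitable-media simulation with cellular automata can be conveyed by the classic Greenberg-Hastings rule, in which a broken wave end curls into a re-entrant spiral, the CA analogue of an arrhythmic rotor. This textbook toy is far simpler than the paper's quantitative extended automaton and is included only for orientation.

    ```python
    import numpy as np

    R = 5   # states: 0 = resting, 1 = excited, 2..R = refractory

    def gh_step(g):
        """Greenberg-Hastings update: a resting cell fires if any of its four
        neighbors is excited; excited/refractory cells advance through the cycle."""
        excited_nb = np.zeros(g.shape, bool)
        for s in (-1, 1):
            for a in (0, 1):
                excited_nb |= np.roll(g, s, a) == 1
        new = np.where((g == 0) & excited_nb, 1, 0)
        return np.where(g > 0, (g + 1) % (R + 1), new)   # 1 -> 2 -> ... -> R -> 0

    g = np.zeros((100, 100), int)
    g[50, 10:20] = 1                 # a short excited segment...
    g[51, 10:20] = 2                 # ...with refractory tissue behind it
    for _ in range(200):             # the broken wave ends curl into spirals
        g = gh_step(g)
    ```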

  13. A novel image encryption algorithm using chaos and reversible cellular automata

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luan, Dapeng

    2013-11-01

    In this paper, a novel image encryption scheme based on reversible cellular automata (RCA) combined with chaos is proposed. The algorithm uses an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata. We split each pixel of the image into units of 4 bits, then use a pseudorandom key stream generated by the intertwining logistic map to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve diffusion on the bit level; only the higher 4 bits of each pixel are considered because they carry almost all of the information in an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. The algorithm belongs to the class of symmetric systems.
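
    A sketch of the confusion stage only, with a plain logistic map standing in for the paper's intertwining logistic map: each pixel is split into 4-bit units, and the units are permuted by the sort order of the chaotic key stream. Key values are arbitrary and this is not a secure implementation.

    ```python
    import numpy as np

    def logistic_keystream(x0, r, n, burn=100):
        """Plain logistic map x -> r*x*(1-x); a stand-in for the paper's
        intertwining logistic map."""
        x, xs = x0, np.empty(n)
        for i in range(n + burn):
            x = r * x * (1.0 - x)
            if i >= burn:
                xs[i - burn] = x
        return xs

    img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 "image"
    units = np.concatenate([img.ravel() >> 4, img.ravel() & 0x0F])  # 4-bit units

    perm = np.argsort(logistic_keystream(0.3141, 3.9999, units.size))
    scrambled = units[perm]                             # confusion stage

    inv = np.empty_like(perm); inv[perm] = np.arange(perm.size)
    assert np.array_equal(scrambled[inv], units)        # decryption inverts it
    ```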

  14. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular responses. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate a cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to a lack of data relative to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239

  15. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular responses. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate a cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to a lack of data relative to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
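
    The reformulation can be pictured as ordinary nonlinear least-squares fitting of a logic-style transfer function. The toy below fits the parameters of a single two-input node with scipy.optimize; the actual constrained-fuzzy-logic formulation and the network pre/post-processing are far richer, so treat this only as the optimization pattern.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def model(w, X):
        """Hypothetical node: weighted inputs through a Hill-type response."""
        s = X @ np.abs(w[:2])
        return s**2 / (np.abs(w[2]) + s**2)

    rng = np.random.default_rng(3)
    X = rng.random((50, 2))                              # experimental conditions
    y = model(np.array([0.7, 0.2, 0.1]), X) + rng.normal(0, 0.02, 50)

    res = minimize(lambda w: np.sum((model(w, X) - y) ** 2),
                   x0=np.array([0.5, 0.5, 0.5]), method="L-BFGS-B")
    print(res.x, res.fun)                                # fitted edge parameters
    ```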

  16. Application of cellular automatons and ant algorithms in avionics

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Selvesiuk, N. I.; Platoshin, G. A.; Semenova, E. V.

    2018-03-01

    The paper considers two algorithms for finding quasi-optimal solutions of discrete optimization problems arising in avionics placement. The first solves the problem of optimally placing devices among installation locations; the second finds the shortest route between devices. Solutions are constructed using a cellular automaton and the ant colony algorithm.
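
    A compact ant-colony sketch for the second task, routing between devices, on a small weighted graph. The pheromone rule and all parameters are illustrative.

    ```python
    import random

    graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4},
             "C": {"D": 1}, "D": {}}
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}    # pheromone levels

    def ant(src, dst, alpha=1.0, beta=2.0):
        path, node = [src], src
        while node != dst:
            nxt = list(graph[node])
            if not nxt:                                     # dead end
                return None, float("inf")
            w = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                 for v in nxt]
            node = random.choices(nxt, weights=w)[0]
            path.append(node)
        return path, sum(graph[u][v] for u, v in zip(path, path[1:]))

    best = (None, float("inf"))
    for _ in range(200):
        p, c = ant("A", "D")
        if p and c < best[1]:
            best = (p, c)
        if p:
            for e in zip(p, p[1:]):
                tau[e] = 0.9 * tau[e] + 1.0 / c             # evaporate + deposit
    print(best)   # expected: (['A', 'B', 'C', 'D'], 4)
    ```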

  17. An efficient Cellular Potts Model algorithm that forbids cell fragmentation

    NASA Astrophysics Data System (ADS)

    Durand, Marc; Guesnet, Etienne

    2016-11-01

    The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells only over a limited range of simulation temperatures. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time spared by not performing unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term state is independent of the chosen acceptance rate and the chosen path in temperature space.
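
    For orientation, below is a plain CPM Metropolis step augmented with a simple local-connectivity guard that rejects copies which would break the target cell's neighborhood ring into more than one arc. The authors' algorithm is stronger (it forbids fragmentation at all temperatures while restoring detailed balance); this sketch, which also omits the volume-constraint term, only illustrates the general idea.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 50
    sigma = np.zeros((n, n), int)
    sigma[20:30, 20:30] = 1                  # one square cell, medium elsewhere

    def locally_connected(r, c, cid):
        """Sites of cell cid on the 8-neighbor ring of (r, c) must form one
        contiguous arc; <= 2 transitions around the ring means connected."""
        ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        vals = [sigma[(r + dr) % n, (c + dc) % n] == cid for dr, dc in ring]
        return sum(vals[i] != vals[i - 1] for i in range(8)) <= 2

    def attempt_flip(T=1.0, J=1.0):
        r, c = rng.integers(n, size=2)
        dr, dc = rng.integers(-1, 2, size=2)
        src = sigma[(r + dr) % n, (c + dc) % n]      # value to copy in
        if src == sigma[r, c]:
            return
        if sigma[r, c] != 0 and not locally_connected(r, c, sigma[r, c]):
            return                                   # would fragment the cell
        nbs = [sigma[(r + a) % n, (c + b) % n]
               for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        dE = J * (sum(v != src for v in nbs) - sum(v != sigma[r, c] for v in nbs))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            sigma[r, c] = src                        # accept the boundary move

    for _ in range(20000):
        attempt_flip()
    ```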

  18. Three-dimensional labeling program for elucidation of the geometric properties of biological particles in three-dimensional space.

    PubMed

    Nomura, A; Yamazaki, Y; Tsuji, T; Kawasaki, Y; Tanaka, S

    1996-09-15

    For all biological particles such as cells or cellular organelles, there are three-dimensional coordinates representing the centroid or center of gravity. These coordinates and other numerical parameters such as volume, fluorescence intensity, surface area, and shape are referred to in this paper as geometric properties, which may provide critical information for clarifying the in situ mechanisms of molecular and cellular functions in living organisms. We have established a method for the elucidation of these properties, designated the three-dimensional labeling program (3DLP). The algorithms of 3DLP are simple enough that the method can be carried out using combinations of image-analysis software on a personal computer. To evaluate 3DLP, it was applied to a 32-cell-stage sea urchin embryo, double-stained with FITC for the cellular protein of blastomeres and propidium iodide for nuclear DNA. A stack of optical serial section images was obtained by confocal laser scanning microscopy. The method was found effective for determining geometric properties and should prove applicable to the study of many different kinds of biological particles in three-dimensional space.

  19. Mechanobiological simulations of peri-acetabular bone ingrowth: a comparative analysis of cell-phenotype specific and phenomenological algorithms.

    PubMed

    Mukherjee, Kaushik; Gupta, Sanjay

    2017-03-01

    Several mechanobiology algorithms have been employed to simulate bone ingrowth around porous coated implants. However, there is a scarcity of quantitative comparison between the efficacies of commonly used mechanoregulatory algorithms. The objectives of this study are: (1) to predict peri-acetabular bone ingrowth using cell-phenotype specific algorithm and to compare these predictions with those obtained using phenomenological algorithm and (2) to investigate the influences of cellular parameters on bone ingrowth. The variation in host bone material property and interfacial micromotion of the implanted pelvis were mapped onto the microscale model of implant-bone interface. An overall variation of 17-88 % in peri-acetabular bone ingrowth was observed. Despite differences in predicted tissue differentiation patterns during the initial period, both the algorithms predicted similar spatial distribution of neo-tissue layer, after attainment of equilibrium. Results indicated that phenomenological algorithm, being computationally faster than the cell-phenotype specific algorithm, might be used to predict peri-prosthetic bone ingrowth. The cell-phenotype specific algorithm, however, was found to be useful in numerically investigating the influence of alterations in cellular activities on bone ingrowth, owing to biologically related factors. Amongst the host of cellular activities, matrix production rate of bone tissue was found to have predominant influence on peri-acetabular bone ingrowth.

  20. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    PubMed Central

    2010-01-01

    Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a starting point for modelers to develop their own GPU implementations, and encourage others to implement their modeling methods on the GPU and to make that code available to the wider community. PMID:20696053

  1. A firefly algorithm for optimum design of new-generation beams

    NASA Astrophysics Data System (ADS)

    Erdal, F.

    2017-06-01

    This research addresses the minimum weight design of new-generation steel beams with sinusoidal openings using a metaheuristic search technique, namely the firefly method. The proposed algorithm is also used to compare the optimum design results of sinusoidal web-expanded beams with steel castellated and cellular beams. Optimum design problems of all beams are formulated according to the design limitations stipulated by the Steel Construction Institute. The design methods adopted in these publications are consistent with BS 5950 specifications. The formulation of the design problem considering the above-mentioned limitations turns out to be a discrete programming problem. The design algorithms based on the technique select the optimum universal beam sections, dimensional properties of sinusoidal, hexagonal and circular holes, and the total number of openings along the beam as design variables. Furthermore, this selection is also carried out such that the behavioural limitations are satisfied. Numerical examples are presented, where the suggested algorithm is implemented to achieve the minimum weight design of these beams subjected to loading combinations.
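
    The canonical continuous firefly update is compact; the paper applies the same move rule over a discrete space of beam sections and opening dimensions. A hedged generic version on a stand-in objective:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def weight(x):                  # stand-in for the beam-weight objective
        return np.sum(x ** 2)

    def firefly(fn, dim=2, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.1):
        X = rng.uniform(-5, 5, (n, dim))
        F = np.array([fn(x) for x in X])
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if F[j] < F[i]:          # j is brighter: move i toward j
                        beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                        X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                        F[i] = fn(X[i])
            alpha *= 0.98                    # damp the random component
        k = np.argmin(F)
        return X[k], F[k]

    print(firefly(weight))                   # converges near the origin
    ```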

  2. Automated cellular pathology in noninvasive confocal microscopy

    NASA Astrophysics Data System (ADS)

    Ting, Monica; Krueger, James; Gareau, Daniel

    2014-03-01

    A computer algorithm was developed to automatically identify and count melanocytes and keratinocytes in 3D reflectance confocal microscopy (RCM) images of the skin. Computerized pathology increases our understanding of superficial spreading melanoma (SSM) and enables its prevention. Machine learning involved measuring the size of cells in the images through a 2-D Fourier transform and developing an appropriate mask, based on the erf() function, to model the cells. Implementation involved processing the images to identify cells whose image segments gave the smallest difference when subtracted from the mask. With further simplification of the algorithm, the program may be implemented directly on RCM images to indicate the presence of keratinocytes in seconds and to quantify keratinocyte size in the en face plane as a function of depth. Using this system, the algorithm can identify irregularities in the maturation and differentiation of keratinocytes, thereby signaling the possible presence of cancer.
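
    The 2-D Fourier size measurement amounts to finding the dominant radial spatial frequency of an image patch and converting it to a period. The sketch below runs on a synthetic periodic "cell" pattern; the erf() mask-matching stage is omitted, and this is one reading of the step rather than the authors' code.

    ```python
    import numpy as np

    def dominant_period(img):
        """Dominant spatial period from the radially binned power spectrum."""
        f = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())) ** 2)
        n = img.shape[0]
        yy, xx = np.indices(f.shape)
        r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
        power = np.bincount(r.ravel(), weights=f.ravel())
        power[:2] = 0                          # suppress DC / lowest bins
        k = np.argmax(power[:n // 2])          # dominant frequency (cycles)
        return n / k                           # period in pixels

    n = 128
    y, x = np.mgrid[:n, :n]
    img = np.sin(2 * np.pi * x / 10) + np.sin(2 * np.pi * y / 10)
    print(dominant_period(img))                # close to the 10-pixel spacing
    ```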

  3. Optimizing Cellular Networks Enabled with Renewal Energy via Strategic Learning.

    PubMed

    Sohn, Insoo; Liu, Huaping; Ansari, Nirwan

    2015-01-01

    An important issue in the cellular industry is the rising energy cost and carbon footprint due to the rapid expansion of the cellular infrastructure. Greening cellular networks has thus attracted attention. Among the promising green cellular network techniques, the renewable energy-powered cellular network has drawn increasing attention as a critical element towards reducing carbon emissions due to massive energy consumption in the base stations deployed in cellular networks. Game theory is a branch of mathematics that is used to evaluate and optimize systems with multiple players with conflicting objectives and has been successfully used to solve various problems in cellular networks. In this paper, we model the green energy utilization and power consumption optimization problem of a green cellular network as a pilot power selection strategic game and propose a novel distributed algorithm based on a strategic learning method. The simulation results indicate that the proposed algorithm achieves correlated equilibrium of the pilot power selection game, resulting in optimum green energy utilization and power consumption reduction.

  4. West Virginia US Department of Energy experimental program to stimulate competitive research. Section 2: Human resource development; Section 3: Carbon-based structural materials research cluster; Section 3: Data parallel algorithms for scientific computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-02-02

    This report consists of three separate but related reports. They are (1) Human Resource Development, (2) Carbon-based Structural Materials Research Cluster, and (3) Data Parallel Algorithms for Scientific Computing. To meet the objectives of the Human Resource Development plan, the plan includes K-12 enrichment activities, undergraduate research opportunities for students at the state's two Historically Black Colleges and Universities, graduate research through cluster assistantships and through a traineeship program targeted specifically to minorities, women and the disabled, and faculty development through participation in research clusters. One research cluster is the chemistry and physics of carbon-based materials. The objective of this cluster is to develop a self-sustaining group of researchers in carbon-based materials research within the institutions of higher education in the state of West Virginia. The projects will involve analysis of cokes, graphites and other carbons in order to understand the properties that provide desirable structural characteristics, including resistance to oxidation, levels of anisotropy and the structural characteristics of the carbons themselves. In the proposed cluster on parallel algorithms, the research projects of four WVU faculty and three state liberal arts college faculty are: (1) modeling of self-organized critical systems by cellular automata; (2) multiprefix algorithms and fat-free embeddings; (3) offline and online partitioning of data computation; and (4) manipulating and rendering three-dimensional objects. This cluster furthers the state Experimental Program to Stimulate Competitive Research plan by building on existing strengths at WVU in parallel algorithms.

  5. Digital sorting of complex tissues for cell type-specific gene expression profiles.

    PubMed

    Zhong, Yi; Wan, Ying-Wooi; Pang, Kaifang; Chow, Lionel M L; Liu, Zhandong

    2013-03-07

    Cellular heterogeneity is present in almost all gene expression profiles. However, transcriptome analysis of tissue specimens often ignores the cellular heterogeneity present in these samples. Standard deconvolution algorithms require prior knowledge of the cell-type frequencies within a tissue or their in vitro expression profiles. Furthermore, these algorithms tend to report biased estimations. Here, we describe a Digital Sorting Algorithm (DSA) for extracting cell-type-specific gene expression profiles from mixed tissue samples that is unbiased and does not require prior knowledge of cell-type frequencies. The results suggest that DSA is a specific and sensitive algorithm for gene expression profile deconvolution and will be useful in studying individual cell types of complex tissues.

  6. Large-scale parallel lattice Boltzmann-cellular automaton model of two-dimensional dendritic growth

    NASA Astrophysics Data System (ADS)

    Jelinek, Bohumir; Eshraghi, Mohsen; Felicelli, Sergio; Peters, John F.

    2014-03-01

    An extremely scalable lattice Boltzmann (LB)-cellular automaton (CA) model for simulations of two-dimensional (2D) dendritic solidification under forced convection is presented. The model incorporates effects of phase change, solute diffusion, melt convection, and heat transport. The LB model represents the diffusion, convection, and heat transfer phenomena. The dendrite growth is driven by a difference between the actual and equilibrium liquid composition at the solid-liquid interface. The CA technique is deployed to track the new interface cells. The computer program was parallelized using the Message Passing Interface (MPI) technique. Parallel scaling of the algorithm was studied and major scalability bottlenecks were identified. Efficiency loss attributable to the high memory bandwidth requirement of the algorithm was observed when using multiple cores per processor. Parallel writing of the output variables of interest was implemented in the binary Hierarchical Data Format 5 (HDF5) to improve the output performance and to simplify visualization. Calculations were carried out in single precision arithmetic without significant loss in accuracy, resulting in a 50% reduction of memory and computational time requirements. The presented solidification model shows very good scalability up to centimeter-size domains, including more than ten million dendrites.
    Catalogue identifier: AEQZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQZ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, UK
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 29,767
    No. of bytes in distributed program, including test data, etc.: 3,131,367
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Linux PC and clusters
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes, the program is parallelized using MPI
    Number of processors used: 1-50,000
    RAM: memory requirements depend on the grid size
    Classification: 6.5, 7.7
    External routines: MPI (http://www.mcs.anl.gov/research/projects/mpi/), HDF5 (http://www.hdfgroup.org/HDF5/)
    Nature of problem: dendritic growth in undercooled Al-3 wt% Cu alloy melt under forced convection.
    Solution method: the lattice Boltzmann model solves the diffusion, convection, and heat transfer phenomena; the cellular automaton technique is deployed to track the solid/liquid interface.
    Restrictions: heat transfer is calculated uncoupled from the fluid flow; thermal diffusivity is constant.
    Unusual features: a novel technique, utilizing periodic duplication of a pre-grown "incubation" domain, is applied for the scale-up test.
    Running time: varies from minutes to days depending on the domain size and number of computational cores.

  7. Gaussian Mean Field Lattice Gas

    NASA Astrophysics Data System (ADS)

    Scoppola, Benedetto; Troiani, Alessio

    2018-03-01

    We study rigorously a lattice gas version of the Sherrington-Kirkpatrick spin glass model. In the discrete optimization literature this problem is known as unconstrained binary quadratic programming and belongs to the class NP-hard. We prove that the fluctuations of the ground state energy tend to vanish in the thermodynamic limit, and we give a lower bound on this ground state energy. Then we present a heuristic algorithm, based on a probabilistic cellular automaton, which seems to be able to find configurations with energy very close to the minimum, even for quite large instances.

  8. Cellular Automata Ideas in Digital Circuits and Switching Theory.

    ERIC Educational Resources Information Center

    Siwak, Pawel P.

    1985-01-01

    Presents two examples which illustrate the usefulness of ideas from cellular automata. First, Lee's algorithm is recalled and its cellular nature shown. Then a problem from digraphs, which arose from analyzing predecessor configurations in Conway's famous "game of life," is considered. (Author/JN)

  9. Computing aggregate properties of preimages for 2D cellular automata.

    PubMed

    Beer, Randall D

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm, incremental aggregation, that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
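
    To make the counted object concrete: a naive preimage count enumerates every candidate precursor neighborhood, which is exactly the exponential cost that incremental aggregation avoids. For a single Life cell the enumeration is still tractable:

    ```python
    from itertools import product

    def life_center(nbhd):
        """Next state of the center of a 3x3 Game of Life neighborhood."""
        alive = nbhd[4]
        s = sum(nbhd) - alive
        return 1 if (s == 3 or (alive and s == 2)) else 0

    counts = {0: 0, 1: 0}
    for nbhd in product((0, 1), repeat=9):    # all 2^9 possible precursors
        counts[life_center(nbhd)] += 1
    print(counts)   # {0: 372, 1: 140}; for whole patches this blows up fast
    ```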

  10. An image encryption algorithm based on 3D cellular automata and chaotic maps

    NASA Astrophysics Data System (ADS)

    Del Rey, A. Martín; Sánchez, G. Rodríguez

    2015-05-01

    A novel encryption algorithm to cipher digital images is presented in this work. The digital image is rendered into a three-dimensional (3D) lattice, and the protocol consists of two phases: a confusion phase, where 24 chaotic Cat maps are applied, and a diffusion phase, where a 3D cellular automaton is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.

  11. A Dynamic Programming Approach for Base Station Sleeping in Cellular Networks

    NASA Astrophysics Data System (ADS)

    Gong, Jie; Zhou, Sheng; Niu, Zhisheng

    The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirement and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of state space as well as that of action space. Simulations demonstrate that, with the proposed algorithm, the active BS pattern well meets the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.
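
    The backward recursion can be sketched for a single base station: each slot chooses active or sleep so as to trade energy against a blocking penalty, with an extra cost on every mode switch. All costs below are invented, and the paper's DP additionally coordinates neighboring BSs.

    ```python
    import numpy as np

    traffic = np.array([0.1, 0.2, 0.8, 0.9, 0.7, 0.3, 0.1, 0.05])  # toy load
    E_ACT, SWITCH, BLOCK = 1.0, 0.5, 5.0

    def slot_cost(mode, load):
        """Energy if active; blocking penalty proportional to load if asleep."""
        return E_ACT * mode + BLOCK * load * (1 - mode)

    T = len(traffic)
    V = np.zeros((T + 1, 2))                 # value-to-go per current mode
    arg = np.zeros((T, 2), int)
    for t in range(T - 1, -1, -1):
        for m in (0, 1):
            opts = [slot_cost(m2, traffic[t]) + SWITCH * (m != m2) + V[t + 1][m2]
                    for m2 in (0, 1)]
            arg[t][m] = int(np.argmin(opts))
            V[t][m] = min(opts)

    mode, plan = 1, []                       # assume the BS starts active
    for t in range(T):
        mode = arg[t][mode]
        plan.append(mode)
    print(plan, V[0][1])                     # sleeps in the low-traffic slots
    ```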

  12. Integrating GIS, cellular automata, and genetic algorithm in urban spatial optimization: a case study of Lanzhou

    NASA Astrophysics Data System (ADS)

    Xu, Xibao; Zhang, Jianming; Zhou, Xiaojian

    2006-10-01

    This paper presents a model integrating GIS, cellular automata (CA) and a genetic algorithm (GA) for urban spatial optimization. The model involves three objectives: maximization of land-use efficiency, maximization of urban spatial harmony, and an appropriate proportion of each land-use type. The CA submodel is designed with a standard Moore neighborhood and three transition rules to maximize land-use efficiency and urban spatial harmony, according to the land-use suitability and a spatial harmony index. The GA submodel is designed with four constraints and seven steps (encoding, initializing, calculating fitness, selection, crossover, mutation and elitism) for maximizing urban spatial harmony and achieving an appropriate proportion of each land-use type. GIS is used to prepare the input data sets for the model and to perform spatial analysis on the results, while CA and GA are integrated to optimize the urban spatial structure, programmed in Matlab 7 and loosely coupled with GIS. Lanzhou, a typical valley-basin city undergoing fast urban development, is chosen as the case study. Finally, a detailed analysis and evaluation of the spatial optimization with the model is made, and the model proves to be a powerful tool for optimizing urban spatial structure and a useful supplement for urban planning and policy-making.

  13. A cellular automata based FPGA realization of a new metaheuristic bat-inspired algorithm

    NASA Astrophysics Data System (ADS)

    Progias, Pavlos; Amanatiadis, Angelos A.; Spataro, William; Trunfio, Giuseppe A.; Sirakoulis, Georgios Ch.

    2016-10-01

    Optimization algorithms are often inspired by processes occurring in nature, such as animal behavioral patterns. The main concern with implementing such algorithms in software is the large amount of processing power they require. In contrast to software code, which can only perform calculations in a serial manner, an implementation in hardware, exploiting the inherent parallelism of single-purpose processors, can prove to be much more efficient in both speed and energy consumption. Furthermore, the use of Cellular Automata (CA) in such an implementation is efficient both as a model for natural processes and as a computational paradigm that maps well onto hardware. In this paper, we propose a VHDL implementation of a metaheuristic algorithm inspired by the echolocation behavior of bats. More specifically, the CA model is inspired by the metaheuristic algorithm proposed earlier in the literature, which can be considered at least as efficient as other existing optimization algorithms. The function of the FPGA implementation of our algorithm is explained in full detail, and results of our simulations are also demonstrated.

  14. Cell Motility Dynamics: A Novel Segmentation Algorithm to Quantify Multi-Cellular Bright Field Microscopy Images

    PubMed Central

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600

  15. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    PubMed

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications.

  16. Comparison of neural network applications for channel assignment in cellular TDMA networks and dynamically sectored PCS networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1997-04-01

    The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. Next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video, and thus require new dynamic channel assignment (DCA) algorithms. This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic, in order to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in finding global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.

  17. UltraPse: A Universal and Extensible Software Platform for Representing Biological Sequences.

    PubMed

    Du, Pu-Feng; Zhao, Wei; Miao, Yang-Yang; Wei, Le-Yi; Wang, Likun

    2017-11-14

    With the avalanche of biological sequences in public databases, one of the most challenging problems in computational biology is to predict their biological functions and cellular attributes. Most of the existing prediction algorithms can only handle fixed-length numerical vectors. Therefore, it is important to be able to represent biological sequences with various lengths using fixed-length numerical vectors. Although several algorithms, as well as software implementations, have been developed to address this problem, these existing programs can only provide a fixed number of representation modes. Every time a new sequence representation mode is developed, a new program will be needed. In this paper, we propose the UltraPse as a universal software platform for this problem. The function of the UltraPse is not only to generate various existing sequence representation modes, but also to simplify all future programming works in developing novel representation modes. The extensibility of UltraPse is particularly enhanced. It allows the users to define their own representation mode, their own physicochemical properties, or even their own types of biological sequences. Moreover, UltraPse is also the fastest software of its kind. The source code package, as well as the executables for both Linux and Windows platforms, can be downloaded from the GitHub repository.

  18. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    PubMed

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward, even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic (HP) model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only. This is proved, as the core and the folds of the protein will have two identical sides for all short sequences.
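
    The quantity every HP-model fold is scored by is easy to state in code: the energy is minus the number of H-H contacts between residues that are lattice neighbors but not chain neighbors. A small checker follows; the sequence and fold are arbitrary examples, not taken from the paper.

    ```python
    def hp_energy(seq, coords):
        """Energy of a 2D lattice fold in the HP model."""
        occupied = {c: i for i, c in enumerate(coords)}
        contacts = 0
        for i, (x, y) in enumerate(coords):
            if seq[i] != "H":
                continue
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                j = occupied.get(nb)
                if j is not None and j > i + 1 and seq[j] == "H":
                    contacts += 1       # count each H-H contact once (i < j)
        return -contacts

    seq = "HPHPPHHPHH"
    fold = [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1),
            (-1, 2), (0, 2), (1, 2), (2, 2), (2, 1)]
    assert len(set(fold)) == len(fold)      # self-avoiding
    print(hp_energy(seq, fold))             # -1 for this toy fold
    ```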

  19. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
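
    A minimal sketch of the percentile-based test described above (training details assumed): the quantization error of a state vector is its distance to the nearest prototype, and a state is flagged when its error falls outside an interval estimated on normal training data.

        import numpy as np

        def quant_errors(X, prototypes):
            d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
            return d.min(axis=1)

        rng = np.random.default_rng(1)
        train = rng.normal(size=(500, 4))                      # "normal" network states
        protos = train[rng.choice(500, 16, replace=False)]     # stand-in codebook
        lo, hi = np.percentile(quant_errors(train, protos), [2.5, 97.5])

        def is_abnormal(state):
            e = quant_errors(state[None, :], protos)[0]
            return e < lo or e > hi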

  20. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to flights such that all flights are covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
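
    A minimal sketch of the Pareto-optimality concept behind CellDE's output (all objectives assumed to be minimized): a solution is kept if no other solution dominates it.

        def dominates(a, b):
            # a dominates b: no worse in every objective, strictly better in one
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(points):
            return [p for p in points if not any(dominates(q, p) for q in points)]

        # objectives: (total cost, delay, utilization imbalance)
        print(pareto_front([(3, 2, 1), (2, 3, 1), (4, 4, 4)]))   # drops (4, 4, 4)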

  1. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

    PubMed

    Impelluso, Thomas J

    2003-06-01

    An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local, effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
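
    A minimal sketch of a cellular-solids density-stiffness relation of the Gibson-Ashby type, E = C * E_s * (rho/rho_s)^n; the constants below are placeholders, not the paper's calibrated values.

        def youngs_modulus(rho, rho_s=1800.0, E_s=20e9, C=1.0, n=2.0):
            # Apparent modulus (Pa) from apparent density rho (kg/m^3).
            return C * E_s * (rho / rho_s) ** n

        print(youngs_modulus(900.0))   # half the solid density -> quarter stiffness for n = 2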

  2. CNNEDGEPOT: CNN based edge detection of 2D near surface potential field data

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2012-09-01

    All anomalies are important in the interpretation of gravity and magnetic data because they indicate important structural features. One advantage of using gravity or magnetic data to search for contacts is that buried structures whose signatures cannot be seen at the surface can still be detected. In this paper, a general view of the cellular neural network (CNN) method, a large-scale nonlinear circuit, is presented, focusing on its image processing applications. The proposed CNN model is applied consecutively in order to extract bodies and body edges. The algorithm is a stochastic image processing method based on the close-neighborhood relationships of the cells and on optimization of the A, B and I matrices, known as cloning template operators. Setting up a CNN (a continuous-time cellular neural network (CTCNN) or a discrete-time cellular neural network (DTCNN)) for a particular task requires a proper selection of the cloning templates, which determine the dynamics of the method. The proposed algorithm is used for image enhancement and edge detection. The method is applied to synthetic and field data generated for edge detection of near-surface geological bodies that mask each other at various depths and dimensions. The program, named CNNEDGEPOT, is a set of functions written in MATLAB. The GUI helps the user to easily change all the required CNN model parameters. A visual evaluation of the outputs of the DTCNN and the CTCNN is carried out and the results are compared with each other. These examples demonstrate that the CNN model can be used for visual interpretation of near-surface gravity or magnetic anomaly maps in detecting geological features.
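
    A minimal sketch of one discrete-time CNN (DTCNN) iteration with 3x3 feedback (A), control (B) and bias (I) cloning templates; the template values are illustrative stand-ins, not CNNEDGEPOT's edge-detection set.

        import numpy as np
        from scipy.signal import convolve2d

        def dtcnn_step(y, u, A, B, I):
            # y: current bipolar output; u: input image; next output is sign of the state.
            x = convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + I
            return np.where(x >= 0, 1.0, -1.0)

        A = np.zeros((3, 3)); A[1, 1] = 1.0                             # self-feedback only
        B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)  # Laplacian-like
        u = np.pad(np.ones((4, 4)), 2, constant_values=-1.0)            # bright body on dark field
        y = dtcnn_step(np.zeros_like(u), u, A, B, I=-0.5)               # responds at the edges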

  3. A linear programming approach to reconstructing subcellular structures from confocal images for automated generation of representative 3D cellular models.

    PubMed

    Wood, Scott T; Dean, Brian C; Dean, Delphine

    2013-04-01

    This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
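
    A minimal sketch of the standard thresholding step; Otsu's method is assumed here as a representative choice, since the paper does not commit to one.

        import numpy as np

        def otsu_threshold(img, bins=256):
            # Pick the threshold maximizing between-class variance of the histogram.
            hist, edges = np.histogram(img, bins=bins)
            p = hist / hist.sum()
            centers = (edges[:-1] + edges[1:]) / 2
            w0 = np.cumsum(p); w1 = 1 - w0
            m = np.cumsum(p * centers)
            mu0 = m / np.clip(w0, 1e-12, None)
            mu1 = (m[-1] - m) / np.clip(w1, 1e-12, None)
            return centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

        # mask = img > otsu_threshold(img)   # e.g. cell or nucleus foreground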

  4. PREMER: a Tool to Infer Biological Networks.

    PubMed

    Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R

    2017-10-04

    Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
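
    A minimal sketch of the information-theoretic criterion at PREMER's core (a plug-in estimate on discretized data; PREMER's own estimators and corrections are richer):

        import numpy as np

        def mutual_information(x, y, bins=8):
            # I(X;Y) from a joint histogram of two expression profiles.
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))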

  5. Solving a mathematical model integrating unequal-area facilities layout and part scheduling in a cellular manufacturing system by a genetic algorithm.

    PubMed

    Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi

    2016-01-01

    In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS), considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features including part due dates, material handling time, operation sequences, processing times, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and the material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved with the Lingo software to illustrate the results obtained from the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, two approaches, sequential and concurrent, are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. Computational results indicate that the best solutions found by the GA are better than those found by B&B, in much less time, for both the sequential and concurrent approaches. Moreover, comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate an OFV improvement of around 17 % on average by GA and 14 % by B&B.

  6. MapReduce Algorithms for Inferring Gene Regulatory Networks from Time-Series Microarray Data Using an Information-Theoretic Approach.

    PubMed

    Abduallah, Yasser; Turki, Turki; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel; Wang, Jason T L

    2017-01-01

    Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understanding the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, sifting through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze the copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising, as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy.

  7. Resource Allocation Algorithms for the Next Generation Cellular Networks

    NASA Astrophysics Data System (ADS)

    Amzallag, David; Raz, Danny

    This chapter describes recent results addressing resource allocation problems in the context of current and future cellular technologies. We present models that capture several fundamental aspects of planning and operating these networks, and develop new approximation algorithms providing provable good solutions for the corresponding optimization problems. We mainly focus on two families of problems: cell planning and cell selection. Cell planning deals with choosing a network of base stations that can provide the required coverage of the service area with respect to the traffic requirements, available capacities, interference, and the desired QoS. Cell selection is the process of determining the cell(s) that provide service to each mobile station. Optimizing these processes is an important step towards maximizing the utilization of current and future cellular networks.

  8. A new JPEG-based steganographic algorithm for mobile devices

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.

    2006-05-01

    Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.
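
    A minimal sketch of embedding bits in quantized DCT coefficients (plain LSB substitution for illustration; the paper's switching technique and coefficient reordering are not reproduced here):

        def embed(coeffs, bits):
            # Write one payload bit into the LSB of each usable coefficient.
            out, b = list(coeffs), 0
            for i, c in enumerate(out):
                if b == len(bits):
                    break
                if c not in (0, 1):        # skip 0/1, as many JPEG schemes do
                    out[i] = (c & ~1) | bits[b] if c > 0 else -((-c & ~1) | bits[b])
                    b += 1
            return out

        print(embed([12, 0, -7, 3, 1, -2], [1, 0, 1, 1]))   # -> [13, 0, -6, 3, 1, -3]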

  9. A Graphical User Interface for Software-assisted Tracking of Protein Concentration in Dynamic Cellular Protrusions.

    PubMed

    Saha, Tanumoy; Rathmann, Isabel; Galic, Milos

    2017-07-11

    Filopodia are dynamic, finger-like cellular protrusions associated with migration and cell-cell communication. In order to better understand the complex signaling mechanisms underlying filopodial initiation, elongation and subsequent stabilization or retraction, it is crucial to determine the spatio-temporal protein activity in these dynamic structures. To analyze protein function in filopodia, we recently developed a semi-automated tracking algorithm that adapts to filopodial shape-changes, thus allowing parallel analysis of protrusion dynamics and relative protein concentration along the whole filopodial length. Here, we present a detailed step-by-step protocol for optimized cell handling, image acquisition and software analysis. We further provide instructions for the use of optional features during image analysis and data representation, as well as troubleshooting guidelines for all critical steps along the way. Finally, we also include a comparison of the described image analysis software with other programs available for filopodia quantification. Together, the presented protocol provides a framework for accurate analysis of protein dynamics in filopodial protrusions using image analysis software.

  10. Modeling Reality - How Computers Mirror Life

    NASA Astrophysics Data System (ADS)

    Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona

    2005-01-01

    The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems, but who does not possess a specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well-known simple concepts: cellular automata via the Game of Life, Shannon's formula via the game of twenty questions, game theory via a television quiz, and so on. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning even the more complex topics a pleasure.

  11. Evolution of cellular automata with memory: The Density Classification Task.

    PubMed

    Stone, Christopher; Bull, Larry

    2009-08-01

    The Density Classification Task is a well-known test problem for two-state discrete dynamical systems. For many years researchers have used a variety of evolutionary computation approaches to evolve solutions to this problem. In this paper, we investigate the evolvability of solutions when the underlying cellular automaton is augmented with a type of memory based on the Least Mean Square algorithm. To obtain high-performance solutions using a simple non-hybrid genetic algorithm, we design a novel representation based on the ternary representation used for Learning Classifier Systems. The new representation is found to produce performance superior to the bit string traditionally used for representing cellular automata. Moreover, memory is shown to improve the evolvability of solutions, and appropriate memory settings can be evolved as a component part of these solutions.
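
    A minimal sketch of the memory mechanism described above (the learning rate and rule are illustrative choices): each cell keeps an LMS-style exponentially weighted trace of its past states, and the CA rule reads the thresholded trace rather than the raw state.

        import numpy as np

        def step(state, memory, rule=110, lam=0.3):
            memory = memory + lam * (state - memory)            # LMS update
            s = (memory >= 0.5).astype(int)                     # memory read-out
            idx = 4 * np.roll(s, 1) + 2 * s + np.roll(s, -1)    # 3-cell neighbourhood code
            table = np.array([(rule >> i) & 1 for i in range(8)])
            return table[idx], memory

        rng = np.random.default_rng(0)
        state = rng.integers(0, 2, 64)
        memory = state.astype(float)
        for _ in range(50):
            state, memory = step(state, memory)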

  12. Analysis of copy number variants by three detection algorithms and their association with body size in horses.

    PubMed

    Metzger, Julia; Philipp, Ute; Lopes, Maria Susana; da Camara Machado, Artur; Felicetti, Michela; Silvestrelli, Maurizio; Distl, Ottmar

    2013-07-18

    Copy number variants (CNVs) have been shown to play an important role in genetic diversity of mammals and in the development of many complex phenotypic traits. The aim of this study was to perform a standard comparative evaluation of CNVs in horses using three different CNV detection programs and to identify genomic regions associated with body size in horses. Analysis was performed using the Illumina Equine SNP50 genotyping beadchip for 854 horses. CNVs were detected by three different algorithms, CNVPartition, PennCNV and QuantiSNP. Comparative analysis revealed 50 CNVs that affected 153 different genes mainly involved in sensory perception, signal transduction and cellular components. Genome-wide association analysis for body size showed highly significant deleted regions on ECA1, ECA8 and ECA9. Homologous regions to the detected CNVs on ECA1 and ECA9 have also been shown to be correlated with human height. Comparative analysis of CNV detection algorithms was useful to increase the specificity of CNV detection but had certain limitations dependent on the detection tool. GWAS revealed genome-wide associated CNVs for body size in horses.

  13. Accurate Construction of Photoactivated Localization Microscopy (PALM) Images for Quantitative Measurements

    PubMed Central

    Coltharp, Carla; Kessler, Rene P.; Xiao, Jie

    2012-01-01

    Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and allows a variety of quantitative measurements tailored to specific needs of different biological systems. PMID:23251611
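
    A minimal sketch of the grouping idea (threshold values assumed): localizations are merged into one molecule when they appear within dThresh in space and tThresh in frames of a cluster's latest member, so photoblinking is counted once.

        def cluster(locs, d_thresh=30.0, t_thresh=5):
            # locs: list of (x, y, frame) tuples, sorted by frame
            clusters = []
            for x, y, t in locs:
                for c in clusters:
                    cx, cy, ct = c[-1]
                    if t - ct <= t_thresh and ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= d_thresh:
                        c.append((x, y, t)); break
                else:
                    clusters.append([(x, y, t)])
            return [c[0][:2] for c in clusters]   # one position per molecule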

  14. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the parameter relationship of the CNN is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that the new approach is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631

  15. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.

  16. A novel spatter detection algorithm based on typical cellular neural network operations for laser beam welding processes

    NASA Astrophysics Data System (ADS)

    Nicolosi, L.; Abt, F.; Blug, A.; Heider, A.; Tetzlaff, R.; Höfler, H.

    2012-01-01

    Real-time monitoring of laser beam welding (LBW) has increasingly gained importance in several manufacturing processes ranging from automobile production to precision mechanics. For the latter field, a novel algorithm for the real-time detection of spatters was implemented in a camera based on cellular neural networks. The camera can be connected to the optics of commercially available laser machines, enabling real-time monitoring of LBW processes at rates up to 15 kHz. Such high monitoring rates allow the integration of other image evaluation tasks, such as detection of the full penetration hole, for real-time control of process parameters.

  17. Diametrical clustering for identifying anti-correlated gene clusters.

    PubMed

    Dhillon, Inderjit S; Marcotte, Edward M; Roshan, Usman

    2003-09-01

    Clustering genes based upon their expression patterns allows us to predict gene function. Most existing clustering algorithms cluster genes together when their expression patterns show high positive correlation. However, it has been observed that genes whose expression patterns are strongly anti-correlated can also be functionally similar. Biologically, this is not unintuitive: genes responding to the same stimuli, regardless of the nature of the response, are more likely to operate in the same pathways. We present a new diametrical clustering algorithm that explicitly identifies anti-correlated clusters of genes. Our algorithm proceeds by iteratively (i) re-partitioning the genes and (ii) computing the dominant singular vector of each gene cluster, with each singular vector serving as the prototype of a 'diametric' cluster. We empirically show the effectiveness of the algorithm in identifying diametrical or anti-correlated clusters. Testing the algorithm on yeast cell cycle data, fibroblast gene expression data, and DNA microarray data from yeast mutants reveals that opposed cellular pathways can be discovered with this method. We present systems whose mRNA expression patterns, and likely their functions, oppose the yeast ribosome and proteosome, along with evidence for the inverse transcriptional regulation of a number of cellular systems.
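
    A minimal sketch of the iteration described above: prototypes are dominant singular vectors, and each gene joins the cluster whose prototype it is most parallel or anti-parallel to (maximal squared projection).

        import numpy as np

        def diametrical(X, k=2, iters=20, seed=0):
            # X: genes x conditions, rows centred and normalised beforehand.
            rng = np.random.default_rng(seed)
            labels = rng.integers(0, k, X.shape[0])
            for _ in range(iters):
                protos = []
                for j in range(k):
                    rows = X[labels == j]
                    if len(rows) == 0:                       # re-seed an empty cluster
                        rows = X[rng.integers(0, X.shape[0], 1)]
                    _, _, vt = np.linalg.svd(rows, full_matrices=False)
                    protos.append(vt[0])
                labels = np.argmax((X @ np.array(protos).T) ** 2, axis=1)
            return labels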

  18. An Algorithm to Automate Yeast Segmentation and Tracking

    PubMed Central

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
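
    A minimal sketch of the set-of-thresholds idea (the combination rule is assumed): a pixel joins the final segmentation if it is foreground at a majority of threshold levels, which is more robust than any single tuned threshold.

        import numpy as np

        def robust_segment(img, thresholds):
            votes = sum((img > t).astype(int) for t in thresholds)
            return votes >= (len(thresholds) + 1) // 2

        img = np.random.default_rng(2).random((64, 64))
        mask = robust_segment(img, thresholds=[0.5, 0.6, 0.7, 0.8])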

  19. The spectral positioning algorithm of new spectrum vehicle based on convex programming in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Lu, Zhixin

    2017-10-01

    Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used as localization algorithms in wireless sensor networks. However, in the traditional convex programming algorithm the coverage regions of the wireless sensor nodes overlap too much, which leads to low positioning accuracy, so this paper proposes a new algorithm. It is based mainly on the traditional convex programming algorithm: the spectrum vehicle dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the location region. Because the algorithm only adds the communication of power values between the unknown node and the sensor nodes, the advantages of the convex programming algorithm are essentially preserved, keeping the method simple and real-time. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.

  1. Automatic detection and measurement of viral replication compartments by ellipse adjustment

    PubMed Central

    Garcés, Yasel; Guerrero, Adán; Hidalgo, Paloma; López, Raul Eduardo; Wood, Christopher D.; Gonzalez, Ramón A.; Rendón-Mancha, Juan Manuel

    2016-01-01

    Viruses employ a variety of strategies to hijack cellular activities through the orchestrated recruitment of macromolecules to specific virus-induced cellular micro-environments. Adenoviruses (Ad) and other DNA viruses induce extensive reorganization of the cell nucleus and formation of nuclear Replication Compartments (RCs), where the viral genome is replicated and expressed. In this work an automatic algorithm designed for detection and segmentation of RCs using ellipses is presented. Unlike algorithms available in the literature, this approach is deterministic, automatic, and can adjust multiple RCs using ellipses. The proposed algorithm is non-iterative, computationally efficient and invariant to affine transformations. The method was validated on both synthetic images and more than 400 real images of Ad-infected cells at various time points of the viral replication cycle, obtaining relevant information about the biogenesis of adenoviral RCs. As proof of concept, the algorithm was then used to quantitatively compare RCs in cells infected with the adenovirus wild type or an adenovirus mutant that is null for expression of a viral protein known to affect activities associated with RCs, which results in deficient viral progeny production. PMID:27819325
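
    A minimal sketch of describing a segmented compartment by an ellipse via its second-order moments (a standard moments construction, not the paper's own non-iterative adjustment):

        import numpy as np

        def ellipse_from_mask(ys, xs):
            # Centre, semi-axes and orientation from the pixel coordinate covariance.
            cx, cy = xs.mean(), ys.mean()
            cov = np.cov(np.stack([xs - cx, ys - cy]))
            evals, evecs = np.linalg.eigh(cov)
            a, b = 2 * np.sqrt(evals[::-1])            # semi-axes, largest first
            angle = np.arctan2(evecs[1, 1], evecs[0, 1])
            return (cx, cy), (a, b), angle

        ys, xs = np.nonzero(np.ones((10, 20)))         # a filled rectangle as a stand-in
        print(ellipse_from_mask(ys.astype(float), xs.astype(float)))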

  2. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
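
    A minimal sketch of a third-order-polynomial nonlinear gain of the kind described (coefficients are placeholders, not the tuned sets): small inputs pass nearly unscaled while larger inputs are shaped by the cubic term.

        def nonlinear_gain(x, k1=1.0, k3=-0.15):
            # One aircraft degree-of-freedom input -> scaled motion command.
            return k1 * x + k3 * x ** 3

        for x in (0.1, 0.5, 1.0):
            print(x, nonlinear_gain(x))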

  3. Reverse engineering and analysis of large genome-scale gene networks

    PubMed Central

    Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas

    2013-01-01

    Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large number of genes and gene expression datasets, more accurate models are compute intensive limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249

  4. Error-free holographic frames encryption with CA pixel-permutation encoding algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua

    2018-01-01

    The security of video data is essential in network transmission; cryptography is the technique that makes video data secure and unreadable to unauthorized users. In this paper, we propose a holographic frame encryption technique based on a cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm is used to address the drawbacks of traditional CA encoding methods. The effectiveness of the proposed video encoding method is demonstrated by simulation examples.
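
    A minimal sketch of a CA-driven pixel permutation (a rule-30 keystream as an illustrative stand-in for the paper's encoding): the CA bit stream seeds a sort order that scrambles, and can exactly unscramble, the pixels.

        def rule30_stream(seed_bits, steps):
            row, out = list(seed_bits), []
            for _ in range(steps):
                out.append(row[0])
                row = [row[i - 1] ^ (row[i] | row[(i + 1) % len(row)])   # rule 30
                       for i in range(len(row))]
            return out

        def permute(pixels, key_bits):
            ks = rule30_stream(key_bits, len(pixels))
            order = sorted(range(len(pixels)), key=lambda i: (ks[i], i))
            return [pixels[i] for i in order], order

        def unpermute(scrambled, order):
            out = [0] * len(scrambled)
            for j, i in enumerate(order):
                out[i] = scrambled[j]
            return out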

  5. Molecular ping-pong Game of Life on a two-dimensional DNA origami array.

    PubMed

    Jonoska, N; Seeman, N C

    2015-07-28

    We propose a design for programmed molecular interactions that continuously change molecular arrangements in a predesigned manner. We introduce a model where environmental control through laser illumination allows platform attachment/detachment oscillations between two floating molecular species. The platform is a two-dimensional DNA origami array of tiles decorated with strands that both allow the floating molecular tiles to attach and pass communicating signals to neighbouring array tiles. In particular, we show how algorithmic molecular interactions can control cyclic molecular arrangements by exhibiting a system that can simulate dynamics similar to two-dimensional cellular automata on a DNA origami array platform. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
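
    A minimal sketch of the two-dimensional dynamics being emulated, as one synchronous Game of Life update on a toroidal array:

        import numpy as np

        def life_step(grid):
            n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

        glider = np.zeros((8, 8), int)
        glider[[0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1   # a glider pattern
        print(life_step(glider))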

  6. Discrete Logic Modelling Optimization to Contextualize Prior Knowledge Networks Using PRUNET

    PubMed Central

    Androsova, Ganna; del Sol, Antonio

    2015-01-01

    High-throughput technologies have led to the generation of an increasing amount of data in different areas of biology. Datasets capturing the cell's response to its intra- and extra-cellular microenvironment can be incorporated as signed and directed graphs or influence networks. These prior knowledge networks (PKNs) represent our current knowledge of the causality of cellular signal transduction. New signalling data is often examined and interpreted in conjunction with PKNs. However, different biological contexts, such as cell type or disease states, may have distinct variants of signalling pathways, resulting in the misinterpretation of new data. The identification of inconsistencies between measured data and signalling topologies, as well as the training of PKNs using context-specific datasets (PKN contextualization), are necessary conditions for constructing reliable, predictive models, which are current challenges in the systems biology of cell signalling. Here we present PRUNET, a user-friendly software tool designed to address the contextualization of PKNs to specific experimental conditions. As input, the algorithm takes a PKN and the expression profiles of two given stable steady states or cellular phenotypes. The PKN is iteratively pruned using an evolutionary algorithm to perform an optimization process. This optimization rests on a match between predicted attractors in a discrete logic (Boolean) model and a Booleanized representation of the phenotypes, within a population of alternative subnetworks that evolves iteratively. We validated the algorithm by applying PRUNET to four biological examples and using the resulting contextualized networks to predict missing expression values and to simulate well-characterized perturbations. PRUNET constitutes a tool for the automatic curation of a PKN to make it suitable for describing biological processes under particular experimental conditions. The general applicability of the implemented algorithm makes PRUNET suitable for a variety of biological processes, for instance cellular reprogramming or transitions between healthy and disease states. PMID:26058016

  7. Adaptive Subframe Partitioning and Efficient Packet Scheduling in OFDMA Cellular System with Fixed Decode-and-Forward Relays

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Ji, Yusheng; Liu, Fuqiang

    The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in the OFDMA two-hop relay system is more complex than in the conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to varying traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms into multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
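
    A minimal sketch of the proportional fairness (PF) rule referenced above (end-to-end rates assumed given): schedule the user with the highest instantaneous-to-average rate ratio, then update the averages.

        def pf_pick(inst_rates, avg_rates, eps=1e-9):
            return max(range(len(inst_rates)),
                       key=lambda i: inst_rates[i] / (avg_rates[i] + eps))

        def pf_update(avg_rates, chosen, inst_rates, tc=100.0):
            # Exponentially weighted average over a time constant of tc slots.
            return [(1 - 1 / tc) * r + (1 / tc) * (inst_rates[i] if i == chosen else 0.0)
                    for i, r in enumerate(avg_rates)]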

  8. The application of dynamic programming in production planning

    NASA Astrophysics Data System (ADS)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, many industries and fields make wide use of computer information technology, which creates huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits while generating the smallest overhead. As one of the common algorithmic techniques, dynamic programming is used to solve problems with certain optimality properties. When a problem contains a large number of sub-problems that would otherwise be computed repeatedly, the ordinary recursive method consumes exponential time, while a dynamic programming algorithm can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared to other approaches, reducing computational complexity and enriching the computational results. In this paper, we expound the concept, basic elements, properties, core ideas, solution steps and difficulties of dynamic programming, and establish a dynamic programming model of the production planning problem.
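
    As a minimal worked instance of the technique (demands, capacity, and costs are invented for illustration): a tiny production planning DP over states (period, inventory) that meets each period's demand at minimum production-plus-holding cost.

        from functools import lru_cache

        demand, cap, c_prod, c_hold, max_inv = (2, 3, 1, 2), 4, 3.0, 1.0, 5

        @lru_cache(maxsize=None)
        def best(t, inv):
            if t == len(demand):
                return 0.0
            costs = []
            for produce in range(cap + 1):
                nxt = inv + produce - demand[t]
                if 0 <= nxt <= max_inv:            # demand met, storage respected
                    costs.append(produce * c_prod + nxt * c_hold + best(t + 1, nxt))
            return min(costs) if costs else float("inf")

        print(best(0, 0))   # minimum total cost starting from empty inventory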

  9. A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem

    NASA Astrophysics Data System (ADS)

    Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid

    2016-09-01

    Nowadays, with the increasing pressure of the competitive business environment and the demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. The cellular manufacturing system (CMS), as a means to this end, has been a point of attraction for both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering the material handling costs, fixed machine installation costs and variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment over a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem to show the performance of the solution method.

  10. Cellular image segmentation using n-agent cooperative game theory

    NASA Astrophysics Data System (ADS)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  11. Active module identification in intracellular networks using a memetic algorithm with a new binary decoding scheme.

    PubMed

    Li, Dong; Pan, Zhisong; Hu, Guyu; Zhu, Zexuan; He, Shan

    2017-03-14

    Active modules are connected regions in a biological network that show significant changes in expression under particular conditions. The identification of such modules is important since it may reveal the regulatory and signaling mechanisms associated with a given cellular response. In this paper, we propose a novel active module identification algorithm based on a memetic algorithm. We propose a novel encoding/decoding scheme to ensure the connectedness of the identified active modules. Based on the scheme, we also design and incorporate a local search operator into the memetic algorithm to improve its performance. The effectiveness of the proposed algorithm is validated on both small and large protein interaction networks.

  12. Multiple objects tracking in fluorescence microscopy.

    PubMed

    Kalaidzidis, Yannis

    2009-01-01

    Many processes in cell biology are connected to the movement of compact entities: intracellular vesicles and even single molecules. The tracking of individual objects is important for understanding cellular dynamics. Here we describe tracking algorithms that were developed in non-biological fields and have been successfully applied to object detection and tracking in biological applications. The characteristic features of the different algorithms are compared.

  13. Portfolio optimization by using linear programming models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, the problem is formulated as a linear programming model. Furthermore, the optimum solution of the linear program is determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than the portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the optimal investment portfolio, particularly with linear programming models.

  14. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545

  15. Low Power S-Box Architecture for AES Algorithm using Programmable Second Order Reversible Cellular Automata: An Application to WBAN.

    PubMed

    Gangadari, Bhoopal Rao; Ahamed, Shaik Rafi

    2016-12-01

    In this paper, we present a novel low-energy architecture for the S-Box used in the Advanced Encryption Standard (AES) algorithm, based on programmable second-order reversible cellular automata (RCA2). The architecture entails a low-power implementation with minimal delay overhead. The security of the proposed RCA2-based S-Box is evaluated using cryptographic properties such as nonlinearity, correlation immunity bias, the strict avalanche criterion, and entropy, and the proposed architecture is found to be secure enough for cryptographic applications. Moreover, simulation studies of the proposed AES architecture show an energy consumption of 68.726 nJ and power dissipation of 3.856 mW for the 0.18-μm process at 13.69 MHz, and an energy consumption of 29.408 nJ and power dissipation of 1.65 mW for the 0.13-μm process at 13.69 MHz. The proposed AES algorithm with the RCA2-based S-Box reduces power consumption by 50 % and energy consumption by 5 % compared to the best classical S-Box and composite-field-arithmetic-based AES implementations. Apart from that, RCA2-based S-Boxes are shown to be dynamic in nature, invertible, and lower in power dissipation than LUT-based S-Boxes, and hence suitable for Wireless Body Area Network (WBAN) applications.
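
    A minimal sketch of a second-order reversible CA of the kind named above (the first-order rule is an illustrative choice): the next state XORs a rule applied to the current row with the row two steps back, which makes the dynamics exactly invertible.

        def step(prev, curr, rule=90):
            n = len(curr)
            return [((rule >> (4 * curr[(i - 1) % n] + 2 * curr[i] + curr[(i + 1) % n])) & 1)
                    ^ prev[i] for i in range(n)]

        def step_back(curr, nxt, rule=90):
            # Invert: prev[i] = rule(curr)[i] XOR next[i].
            return step(nxt, curr, rule)

        a = [0, 1, 0, 1, 1, 0, 0, 1]
        b = [1, 0, 0, 1, 0, 1, 1, 0]
        c = step(a, b)
        assert step_back(b, c) == a   # reversibility check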

  16. Sequenza: allele-specific copy number and mutation profiles from tumor sequencing data.

    PubMed

    Favero, F; Joshi, T; Marquard, A M; Birkbak, N J; Krzystanek, M; Li, Q; Szallasi, Z; Eklund, A C

    2015-01-01

    Exome or whole-genome deep sequencing of tumor DNA along with paired normal DNA can potentially provide a detailed picture of the somatic mutations that characterize the tumor. However, analysis of such sequence data can be complicated by the presence of normal cells in the tumor specimen, by intratumor heterogeneity, and by the sheer size of the raw data. In particular, determination of copy number variations from exome sequencing data alone has proven difficult; thus, single nucleotide polymorphism (SNP) arrays have often been used for this task. Recently, algorithms to estimate absolute, but not allele-specific, copy number profiles from tumor sequencing data have been described. We developed Sequenza, a software package that uses paired tumor-normal DNA sequencing data to estimate tumor cellularity and ploidy, and to calculate allele-specific copy number profiles and mutation profiles. We applied Sequenza, as well as two previously published algorithms, to exome sequence data from 30 tumors from The Cancer Genome Atlas. We assessed the performance of these algorithms by comparing their results with those generated using matched SNP arrays and processed by the allele-specific copy number analysis of tumors (ASCAT) algorithm. Comparison between Sequenza/exome and SNP/ASCAT revealed strong correlation in cellularity (Pearson's r = 0.90) and ploidy estimates (r = 0.42, or r = 0.94 after manually inspecting alternative solutions). This performance was noticeably superior to that of the previously published algorithms. In addition, in artificial data simulating normal-tumor admixtures, Sequenza detected the correct ploidy in samples with tumor content as low as 30%. The agreement between Sequenza and SNP array-based copy number profiles suggests that exome sequencing alone is sufficient not only for identifying small-scale mutations but also for estimating cellularity and inferring DNA copy number aberrations. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology.

  17. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    [Scanned report-form residue; recoverable details only:] Technical report TR-1180, "Parallel Algorithms for Image Analysis," by Azriel Rosenfeld, under contract AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.

  18. Super-Resolution Algorithm in Cumulative Virtual Blanking

    NASA Astrophysics Data System (ADS)

    Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.

    2008-11-01

    The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station using UMTS technology. The problem of Base Station hearability is addressed using Cumulative Virtual Blanking. A simple simulator using DS-SS signals is presented. The results show that the MUSIC algorithm improves time delay estimation in both cases, whether or not Cumulative Virtual Blanking is carried out.
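
    A minimal sketch of the MUSIC core shared by delay and angle estimation (covariance and steering vectors assumed given): project candidate steering vectors onto the noise subspace and peak-pick the resulting pseudospectrum.

        import numpy as np

        def music_spectrum(R, steering, n_sources):
            # R: sample covariance; steering: iterable of candidate vectors.
            evals, evecs = np.linalg.eigh(R)              # ascending eigenvalues
            En = evecs[:, : R.shape[0] - n_sources]       # noise subspace
            ps = []
            for a in steering:
                denom = np.linalg.norm(En.conj().T @ a) ** 2
                ps.append(1.0 / max(denom, 1e-12))        # peaks at the true delays
            return np.array(ps)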

  1. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    NASA Astrophysics Data System (ADS)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  2. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  3. Simultaneous Identification of Multiple Driver Pathways in Cancer

    PubMed Central

    Leiserson, Mark D. M.; Blokh, Dima

    2013-01-01

    Distinguishing the somatic mutations responsible for cancer (driver mutations) from random, passenger mutations is a key challenge in cancer genomics. Driver mutations generally target cellular signaling and regulatory pathways consisting of multiple genes. This heterogeneity complicates the identification of driver mutations by their recurrence across samples, as different combinations of mutations in driver pathways are observed in different samples. We introduce the Multi-Dendrix algorithm for the simultaneous identification of multiple driver pathways de novo in somatic mutation data from a cohort of cancer samples. The algorithm relies on two combinatorial properties of mutations in a driver pathway: high coverage and mutual exclusivity. We derive an integer linear program that finds sets of mutations exhibiting these properties. We apply Multi-Dendrix to somatic mutations from glioblastoma, breast cancer, and lung cancer samples. Multi-Dendrix identifies sets of mutations in genes that overlap with known pathways – including Rb, p53, PI(3)K, and cell cycle pathways – and also novel sets of mutually exclusive mutations, including mutations in several transcription factors or other genes involved in transcriptional regulation. These sets are discovered directly from mutation data with no prior knowledge of pathways or gene interactions. We show that Multi-Dendrix outperforms other algorithms for identifying combinations of mutations and is also orders of magnitude faster on genome-scale data. Software available at: http://compbio.cs.brown.edu/software. PMID:23717195
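
    For intuition, the toy sketch below scores gene sets by the coverage/mutual-exclusivity trade-off of the kind Multi-Dendrix optimizes, using brute-force enumeration instead of the paper's integer linear program; the genes, samples and mutations are fabricated for illustration.

    ```python
    from itertools import combinations

    # mutations[g] = set of tumor samples in which gene g is mutated (fabricated)
    mutations = {
        "TP53":   {"s1", "s2", "s3", "s4"},
        "MDM2":   {"s5", "s6"},
        "CDKN2A": {"s1", "s5"},
        "EGFR":   {"s2", "s6", "s7"},
    }

    def weight(gene_set):
        """W(M) = 2*|coverage(M)| - sum_g |coverage(g)|: high coverage is
        rewarded, samples hit by more than one gene in M are penalized."""
        covered = set().union(*(mutations[g] for g in gene_set))
        return 2 * len(covered) - sum(len(mutations[g]) for g in gene_set)

    best = max(combinations(mutations, 2), key=weight)
    print(best, weight(best))   # the most mutually exclusive, high-coverage pair
    ```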

  4. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
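
    The half-interval search the article verifies is short enough to state directly; a minimal sketch (the function, interval and tolerance below are arbitrary choices, not the program listing from the ERIC record):

    ```python
    # Half-interval (bisection) root search: repeatedly keep the half of the
    # interval whose endpoints still bracket a sign change of f.
    def bisect(f, lo, hi, tol=1e-10):
        """Assumes f(lo) and f(hi) have opposite signs (a root is bracketed)."""
        assert f(lo) * f(hi) < 0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:   # root lies in the left half
                hi = mid
            else:                     # root lies in the right half
                lo = mid
        return (lo + hi) / 2

    print(bisect(lambda x: x**3 - 2, 0.0, 2.0))   # cube root of 2 ~ 1.25992
    ```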

  5. Smart Bandwidth Assignation in an Underlay Cellular Network for Internet of Vehicles.

    PubMed

    de la Iglesia, Idoia; Hernandez-Jayo, Unai; Osaba, Eneko; Carballedo, Roberto

    2017-09-27

    The evolution of the IoT (Internet of Things) paradigm applied to new scenarios such as VANETs (Vehicular Ad Hoc Networks) has gained momentum in recent years. Both academia and industry have triggered advanced studies in the IoV (Internet of Vehicles), which is understood as an ecosystem where different types of users (vehicles, elements of the infrastructure, pedestrians) are connected. How to efficiently share the available radio resources among the different types of eligible users is one of the important issues to be addressed. This paper briefly analyzes various concepts presented hitherto in the literature and it proposes an enhanced algorithm for ensuring a robust co-existence of the aforementioned system users. Therefore, this paper introduces an underlay RRM (Radio Resource Management) methodology which is capable of (1) improving cellular spectral efficiency while making a minimal impact on cellular communications and (2) ensuring the different QoS (Quality of Service) requirements of ITS (Intelligent Transportation Systems) applications. Simulation results, in which we compare the proposed algorithm to two other RRM methods, show the promising spectral efficiency performance of the proposed RRM methodology.

  6. Smart Bandwidth Assignation in an Underlay Cellular Network for Internet of Vehicles

    PubMed Central

    de la Iglesia, Idoia; Hernandez-Jayo, Unai

    2017-01-01

    The evolution of the IoT (Internet of Things) paradigm applied to new scenarios such as VANETs (Vehicular Ad Hoc Networks) has gained momentum in recent years. Both academia and industry have triggered advanced studies in the IoV (Internet of Vehicles), which is understood as an ecosystem where different types of users (vehicles, elements of the infrastructure, pedestrians) are connected. How to efficiently share the available radio resources among the different types of eligible users is one of the important issues to be addressed. This paper briefly analyzes various concepts presented hitherto in the literature and it proposes an enhanced algorithm for ensuring a robust co-existence of the aforementioned system users. Therefore, this paper introduces an underlay RRM (Radio Resource Management) methodology which is capable of (1) improving cellular spectral efficiency while making a minimal impact on cellular communications and (2) ensuring the different QoS (Quality of Service) requirements of ITS (Intelligent Transportation Systems) applications. Simulation results, in which we compare the proposed algorithm to two other RRM methods, show the promising spectral efficiency performance of the proposed RRM methodology. PMID:28953256

  7. Dynamic cellular manufacturing system considering machine failure and workload balance

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad

    2018-02-01

    Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes the labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated by the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.

  8. AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.

    PubMed

    Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou

    2017-01-01

    In this report, AI-BL1.0, an open-source LabVIEW-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are the Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and the 1W1B beamline of the Beijing Synchrotron Radiation Facility.
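
    Neither the beamline model nor the LabVIEW source is reproduced here, but the core of Differential Evolution, one of the two optimizers the program uses, fits in a few lines; the objective below is a stand-in for a beamline figure of merit, and all constants (F, CR, population size) are illustrative.

    ```python
    import random

    random.seed(0)
    F, CR, NP, GENS, DIM = 0.8, 0.9, 20, 100, 2

    def objective(v):                     # stand-in for (negated) beamline flux
        x, y = v
        return (x - 1.5) ** 2 + (y + 0.5) ** 2

    pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]
    for _ in range(GENS):
        for i in range(NP):
            # DE/rand/1/bin: mutate three distinct agents, then crossover
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a[d] + F * (b[d] - c[d]) for d in range(DIM)]
            jrand = random.randrange(DIM)  # guarantee one mutated coordinate
            trial = [mutant[d] if (random.random() < CR or d == jrand)
                     else pop[i][d] for d in range(DIM)]
            if objective(trial) <= objective(pop[i]):   # greedy selection
                pop[i] = trial

    best = min(pop, key=objective)
    print(best, objective(best))          # converges near (1.5, -0.5)
    ```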

  9. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.
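
    The steps named above (mutation, crossover, fitness evaluation) are exactly the loops a data-parallel functional language like SISAL can parallelize; below is a compact serial Python sketch of the same skeleton, using the standard OneMax toy fitness (our choice, not from the paper).

    ```python
    import random

    random.seed(1)
    L, POP, GENS = 32, 40, 60

    def fitness(bits):                    # OneMax: count of 1-bits
        return sum(bits)

    def crossover(a, b):                  # single-point crossover
        cut = random.randrange(1, L)
        return a[:cut] + b[cut:]

    def mutate(bits, rate=1.0 / L):       # independent per-bit flips
        return [b ^ (random.random() < rate) for b in bits]

    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
    for _ in range(GENS):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:POP // 2]       # truncation selection
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(POP - len(parents))]

    print(max(map(fitness, pop)), "of", L)
    ```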

  10. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration and program simplicity in finding the optimal solution.

  11. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  12. New cellular automaton model for magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Hudong; Matthaeus, William H.

    1987-01-01

    A new type of two-dimensional cellular automaton method is introduced for computation of magnetohydrodynamic fluid systems. The particle population is described by a 36-component tensor referred to a hexagonal lattice. By appropriate choice of the coefficients that control the modified streaming algorithm and the definition of the macroscopic fields, it is possible to compute both Lorentz-force and magnetic-induction effects. The method is local in the microscopic space and therefore suited to massively parallel computations.

  13. Experimental and simulation studies of multivariable adaptive optimization of continuous bioreactors using bilevel forgetting factors.

    PubMed

    Chang, Y K; Lim, H C

    1989-08-20

    A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture in simulation and experimental studies to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed good optimization speed, adaptability and reoptimization capability. The algorithm was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: an unconstrained and a constrained optimization. In the constrained optimization the ethanol concentration was used as an index for the baking quality of yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity. Only unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state the dilution rate was 0.310 h(-1), the temperature 32.8 degrees C, and the cellular productivity 1.50 g/L/h.

  14. Synchronous versus asynchronous modeling of gene regulatory networks.

    PubMed

    Garg, Abhishek; Di Cara, Alessandro; Xenarios, Ioannis; Mendoza, Luis; De Micheli, Giovanni

    2008-09-01

    In silico modeling of gene regulatory networks has gained some momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although a lot of work has been done on the identification of steady states, not much work has been reported on in silico modeling of cellular differentiation processes. In this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models have been proposed and their corresponding computational properties have been analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible using existing software. Hereby we provide a framework to analyze the effect of multiple gene perturbation protocols, and their effect on cell differentiation processes. These algorithms were validated on the T-helper model showing the correct steady state identification and Th1-Th2 cellular differentiation process. The software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
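
    The synchronous/asynchronous distinction analyzed in the paper is easy to state on a toy network; the sketch below contrasts the single successor of a synchronous update with the set of successors of an asynchronous one. The three-gene network is invented, and the ROBDD machinery and T-helper model of the paper are not reproduced.

    ```python
    # Toy Boolean network: x activates y, y activates z, z represses x.
    rules = {
        "x": lambda s: not s["z"],
        "y": lambda s: s["x"],
        "z": lambda s: s["y"],
    }

    def sync_step(state):
        """Synchronous scheme: all genes update at once -> one successor."""
        return {g: f(state) for g, f in rules.items()}

    def async_steps(state):
        """Asynchronous scheme: one gene updates at a time -> up to
        |genes| distinct successors (nondeterministic transition relation)."""
        succs = []
        for g, f in rules.items():
            new = dict(state)
            new[g] = f(state)
            if new != state:
                succs.append(new)
        return succs or [state]           # fixed point if nothing can change

    s0 = {"x": True, "y": False, "z": False}
    print("sync successor:  ", sync_step(s0))
    print("async successors:", async_steps(s0))
    ```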

  15. Cellular neural networks, the Navier-Stokes equation, and microarray image reconstruction.

    PubMed

    Zineddin, Bachar; Wang, Zidong; Liu, Xiaohui

    2011-11-01

    Although the last decade has witnessed a great deal of improvements achieved for the microarray technology, many major developments in all the main stages of this technology, including image processing, are still needed. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those proposed approaches is that they do not address quantification of the gene spot in a realistic way, i.e., without any assumption about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using the cellular neural network that solves the Navier-Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons are carried out, i.e., in terms of objective criteria, between our approach and some other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.
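
    The paper's CNN solves a Navier-Stokes-type equation to reconstruct the background under a gene spot; as a rough, much simpler member of the same family of PDE-based reconstructions, the sketch below fills a masked spot region by Jacobi relaxation of Laplace's equation, so the background is interpolated smoothly from the surrounding pixels. The image, mask and iteration count are invented, and this is not the paper's method.

    ```python
    import numpy as np

    bg = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))  # true background
    img = bg.copy()
    mask = np.zeros_like(img, dtype=bool)
    mask[12:20, 12:20] = True         # "gene spot" region whose background is hidden
    img[mask] = 5.0                   # spot intensity replaces the background

    est = img.copy()
    est[mask] = img[~mask].mean()     # crude initialization inside the mask
    for _ in range(500):              # Jacobi sweeps; only masked pixels update
        nb = (np.roll(est, 1, 0) + np.roll(est, -1, 0) +
              np.roll(est, 1, 1) + np.roll(est, -1, 1)) / 4.0
        est[mask] = nb[mask]

    print("max background error in the spot:", np.abs(est - bg)[mask].max())
    ```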

  16. The design and performance characteristics of a cellular logic 3-D image classification processor

    NASA Astrophysics Data System (ADS)

    Ankeney, L. A.

    1981-04-01

    The introduction of high-resolution scanning laser radar systems, which are capable of collecting range and reflectivity images, is predicted to significantly influence the development of processors capable of performing autonomous target classification tasks. Actively sensed range images are shown to be superior to passively collected infrared images in both image stability and information content. An illustrated tutorial introduces cellular logic (neighborhood) transformations and two- and three-dimensional erosion and dilation operations which are used for noise filters and geometric shape measurement. A unique 'cookbook' approach to selecting a sequence of neighborhood transformations suitable for object measurement is developed and related to false alarm rate and algorithm effectiveness measures. The cookbook design approach is used to develop an algorithm to classify objects based upon their 3-D geometrical features. A Monte Carlo performance analysis is used to demonstrate the utility of the design approach by characterizing the ability of the algorithm to classify randomly positioned three-dimensional objects in the presence of additive noise, scale variations, and other forms of image distortion.

  17. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications.

    PubMed

    Gangadari, Bhoopal Rao; Rafi Ahamed, Shaik

    2016-09-01

    In biomedical applications, data security is the most expensive resource for wireless body area networks. Cryptographic algorithms are used in order to protect the information against unauthorised access. The advanced encryption standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, it is also found that the RCA2-based S-Box has comparatively better performance than the conventional LUT-based S-Box.
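
    The reversibility exploited by such S-Boxes comes from the standard second-order construction: the next configuration is any first-order rule XORed with the configuration two steps back, which is invertible for every choice of rule. A minimal sketch with an arbitrary rule and toy sizes (not the paper's specific RCA2 design):

    ```python
    def rule110(l, c, r):
        """An arbitrary first-order rule F (Wolfram rule 110)."""
        return int((l, c, r) not in {(1, 1, 1), (1, 0, 0), (0, 0, 0)})

    def step(prev, curr, rule):
        """Second-order update of a circular 1-D binary array:
        next[i] = F(neighborhood of curr at i) XOR prev[i]."""
        n = len(curr)
        nxt = [rule(curr[i - 1], curr[i], curr[(i + 1) % n]) ^ prev[i]
               for i in range(n)]
        return curr, nxt

    s_prev = [0] * 8
    s_curr = [0, 0, 0, 1, 0, 0, 0, 0]
    a, b = s_prev, s_curr
    for _ in range(4):                 # run forward four steps
        a, b = step(a, b, rule110)

    # Invert by running the identical rule on the time-reversed pair.
    c, d = b, a
    for _ in range(4):
        c, d = step(c, d, rule110)
    print("initial state recovered:", (c, d) == (s_curr, s_prev))
    ```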

  18. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications

    PubMed Central

    Rafi Ahamed, Shaik

    2016-01-01

    In biomedical applications, data security is the most expensive resource for wireless body area networks. Cryptographic algorithms are used in order to protect the information against unauthorised access. The advanced encryption standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, it is also found that the RCA2-based S-Box has comparatively better performance than the conventional LUT-based S-Box. PMID:27733924

  19. An algorithm-based topographical biomaterials library to instruct cell fate

    PubMed Central

    Unadkat, Hemant V.; Hulsman, Marc; Cornelissen, Kamiel; Papenburg, Bernke J.; Truckenmüller, Roman K.; Carpenter, Anne E.; Wessling, Matthias; Post, Gerhard F.; Uetz, Marc; Reinders, Marcel J. T.; Stamatialis, Dimitrios; van Blitterswijk, Clemens A.; de Boer, Jan

    2011-01-01

    It is increasingly recognized that material surface topography is able to evoke specific cellular responses, endowing materials with instructive properties that were formerly reserved for growth factors. This opens the window to improve upon, in a cost-effective manner, biological performance of any surface used in the human body. Unfortunately, the interplay between surface topographies and cell behavior is complex and still incompletely understood. Rational approaches to search for bioactive surfaces will therefore omit previously unperceived interactions. Hence, in the present study, we use mathematical algorithms to design nonbiased, random surface features and produce chips of poly(lactic acid) with 2,176 different topographies. With human mesenchymal stromal cells (hMSCs) grown on the chips and using high-content imaging, we reveal unique, formerly unknown, surface topographies that are able to induce MSC proliferation or osteogenic differentiation. Moreover, we correlate parameters of the mathematical algorithms to cellular responses, which yield novel design criteria for these particular parameters. In conclusion, we demonstrate that randomized libraries of surface topographies can be broadly applied to unravel the interplay between cells and surface topography and to find improved material surfaces. PMID:21949368

  20. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly-used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.

  1. 76 FR 81513 - Cellular, Tissue, and Gene Therapies Advisory Committee; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-28

    ...] Cellular, Tissue, and Gene Therapies Advisory Committee; Notice of Meeting AGENCY: Food and Drug... meeting will be closed to the public. Name of Committee: Cellular, Tissue, and Gene Therapies Advisory... programs in the Cellular and Tissue Branch, Office of Cellular, Tissue and Gene Therapies, Center for...

  2. Graphical programming interface: A development environment for MRI methods.

    PubMed

    Zwart, Nicholas R; Pipe, James G

    2015-11-01

    To introduce a multiplatform, Python-based development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment, making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the workflow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface, including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.

  3. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, C W; Lenderman, J S; Gansemer, J D

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  4. Overview of implementation of DARPA GPU program in SAIC

    NASA Astrophysics Data System (ADS)

    Braunreiter, Dennis; Furtek, Jeremy; Chen, Hai-Wen; Healy, Dennis

    2008-04-01

    This paper reviews the implementation of the DARPA MTO STAP-BOY program for both Phase I and II conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems. The first part of our presentation on the DARPA STAP-BOY program will focus on GPU implementation and algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm will be implemented on the GPU, using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA) and traditional graphics APIs. This algorithm will include fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.

  5. A real time microcomputer implementation of sensor failure detection for turbofan engines

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1989-01-01

    An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real time on a microprocessor-based controls computer which includes parallel processing and high-order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High-order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the algorithm engine demonstration, is described.

  6. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
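
    A condensed sketch of the generated algorithm's shape: a single backward pass over the trace that keeps one row of truth values per subformula, so memory is constant in the trace length, as the abstract states. The tuple-based formula encoding and the finite-trace semantics chosen here are our own simplifications, not the paper's generator.

    ```python
    def subformulas(f):
        """Yield f and all of its subformulas (formulas are nested tuples)."""
        yield f
        for arg in f[1:]:
            if isinstance(arg, tuple):
                yield from subformulas(arg)

    def eval_ltl(formula, trace):
        """trace: non-empty list of sets of atomic propositions, earliest first."""
        def step(f, now, nxt):
            # nxt is the DP row (subformula -> bool) at the next event,
            # or None at the last event of the trace.
            op = f[0]
            if op == "ap":   return f[1] in now
            if op == "not":  return not step(f[1], now, nxt)
            if op == "and":  return step(f[1], now, nxt) and step(f[2], now, nxt)
            if op == "next": return nxt is not None and nxt[f[1]]
            if op == "always":
                return step(f[1], now, nxt) and (nxt is None or nxt[f])
            if op == "eventually":
                return step(f[1], now, nxt) or (nxt is not None and nxt[f])
            if op == "until":    # f[1] U f[2]
                return step(f[2], now, nxt) or (
                    step(f[1], now, nxt) and nxt is not None and nxt[f])
            raise ValueError(op)

        subs = list(subformulas(formula))
        row = None
        for now in reversed(trace):   # one backward pass, one row of memory
            row = {f: step(f, now, row) for f in subs}
        return row[formula]

    trace = [{"req"}, {"req"}, {"ack"}]
    print(eval_ltl(("until", ("ap", "req"), ("ap", "ack")), trace))  # True
    ```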

  7. Signal processing for molecular and cellular biological physics: an emerging field.

    PubMed

    Little, Max A; Jones, Nick S

    2013-02-13

    Recent advances in our ability to watch the molecular and cellular processes of life in action--such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer--raise challenges for digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information that is of direct relevance to biological physicists. It is argued that these new methods applied in this context typify the nascent field of biophysical DSP. Practical experimental examples are supplied.

  8. Signal processing for molecular and cellular biological physics: an emerging field

    PubMed Central

    Little, Max A.; Jones, Nick S.

    2013-01-01

    Recent advances in our ability to watch the molecular and cellular processes of life in action—such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer—raise challenges for digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information that is of direct relevance to biological physicists. It is argued that these new methods applied in this context typify the nascent field of biophysical DSP. Practical experimental examples are supplied. PMID:23277603

  9. Modeling of fracture of protective concrete structures under impact loads

    NASA Astrophysics Data System (ADS)

    Radchenko, P. A.; Batuev, S. P.; Radchenko, A. V.; Plevkov, V. S.

    2015-10-01

    This paper presents results of numerical simulation of interaction between a Boeing 747-400 aircraft and the protective shell of a nuclear power plant. The shell is presented as a complex multilayered cellular structure consisting of layers of concrete and fiber concrete bonded with steel trusses. Numerical simulation was performed three-dimensionally using the original algorithm and software taking into account algorithms for building grids of complex geometric objects and parallel computations. Dynamics of the stress-strain state and fracture of the structure were studied. Destruction is described using a two-stage model that allows taking into account anisotropy of elastic and strength properties of concrete and fiber concrete. It is shown that wave processes initiate destruction of the cellular shell structure; cells start to destruct in an unloading wave originating after the compression wave arrival at free cell surfaces.

  10. Block clustering based on difference of convex functions (DC) programming and DC algorithms.

    PubMed

    Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai

    2013-10-01

    We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
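
    The DCA iteration itself is simple even though the block clustering application is not: at each step the concave part is linearized at the current iterate and the remaining convex subproblem is solved. A one-dimensional toy instance (our example, unrelated to the paper's clustering program):

    ```python
    import math

    def cbrt(v):                          # real cube root (handles negatives)
        return math.copysign(abs(v) ** (1.0 / 3.0), v)

    def dca(x, iters=40):
        """Minimize f(x) = x**4 - 2*x**2, written as the DC decomposition
        f = g - h with convex g(x) = x^4 and convex h(x) = 2x^2."""
        for _ in range(iters):
            grad_h = 4.0 * x              # linearize h at the current iterate
            # convex subproblem: argmin_y g(y) - grad_h*y  =>  4y^3 = grad_h
            x = cbrt(grad_h / 4.0)
        return x

    print(dca(0.2), dca(-3.0))            # converges to the minimizers +1 and -1
    ```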

  11. Design of Efficient Mirror Adder in Quantum- Dot Cellular Automata

    NASA Astrophysics Data System (ADS)

    Mishra, Prashant Kumar; Chattopadhyay, Manju K.

    2018-03-01

    Lower power consumption is an essential demand for portable multimedia systems using digital signal processing algorithms and architectures. Quantum-dot cellular automata (QCA) is an emerging nanotechnology for the development of high-performance, ultra-dense, low-power digital circuits. Several efficient QCA-based binary and decimal arithmetic circuits have been implemented; however, important improvements are still possible. This paper demonstrates a mirror adder circuit design in QCA. We present a comparative study of mirror adder cells designed using the conventional CMOS technique and mirror adder cells designed using quantum-dot cellular automata. QCA-based mirror adders are better in terms of area by a factor of three.

  12. Computing aggregate properties of preimages for 2D cellular automata

    NASA Astrophysics Data System (ADS)

    Beer, Randall D.

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm—incremental aggregation—that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
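
    A one-dimensional analogue of incremental aggregation conveys the idea: instead of testing all 2^n candidate predecessors of a configuration, sweep across it while aggregating counts over a small "interface" of undetermined cells (here a 2-cell window for a radius-1 rule; the paper's interface for 2-D Life is an entire row). The rule and configuration below are illustrative, not from the paper.

    ```python
    def count_preimages(rule, target):
        """Count predecessors of `target` (circular 1-D config, len >= 3)
        under a radius-1 rule f(l, c, r) -> 0/1, aggregating over interfaces."""
        n = len(target)
        total = 0
        for a0 in (0, 1):                 # fix the first two predecessor cells
            for a1 in (0, 1):             # so the circular seam can be checked
                counts = {(a0, a1): 1}    # interface -> number of partial preimages
                for i in range(1, n - 1):         # choose predecessor cell i+1
                    new = {}
                    for (x, y), c in counts.items():
                        for z in (0, 1):
                            if rule(x, y, z) == target[i]:
                                new[(y, z)] = new.get((y, z), 0) + c
                    counts = new
                for (x, y), c in counts.items():  # close the circle (cells n-1, 0)
                    if rule(x, y, a0) == target[n - 1] and rule(y, a0, a1) == target[0]:
                        total += c
        return total

    # Rule 90 (XOR of the two neighbors): the all-zero ring of length 4 has
    # exactly 4 preimages (cells of equal parity must agree).
    print(count_preimages(lambda l, c, r: l ^ r, (0, 0, 0, 0)))   # -> 4
    ```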

  13. Cooperative Game-Based Energy Efficiency Management over Ultra-Dense Wireless Cellular Networks

    PubMed Central

    Li, Ming; Chen, Pengpeng; Gao, Shouwan

    2016-01-01

    Ultra-dense wireless cellular networks have been envisioned as a promising technique for handling the explosive increase of wireless traffic volume. With the extensive deployment of small cells in wireless cellular networks, the network spectral efficiency (SE) is improved with the use of limited frequency. However, the mutual inter-tier and intra-tier interference between or among small cells and macro cells becomes serious. On the other hand, more chances for potential cooperation among different cells are introduced. Energy efficiency (EE) has become one of the most important problems for future wireless networks. This paper proposes a cooperative bargaining game-based method for comprehensive EE management in an ultra-dense wireless cellular network, which highlights the complicated interference influence on energy-saving challenges and the power-coordination process among small cells and macro cells. Notably, a unified EE utility that takes interference mitigation into consideration is proposed to jointly address the SE, the deployment efficiency (DE), and the EE. In particular, closed-form power-coordination solutions for the optimal EE are derived to show the convergence property of the algorithm. Moreover, a simplified algorithm is presented to reduce the complexity of the signaling overhead, which is significant for ultra-dense small cells. Finally, numerical simulations are provided to illustrate the efficiency of the proposed cooperative bargaining game-based and simplified schemes. PMID:27649170

  14. Cooperative Game-Based Energy Efficiency Management over Ultra-Dense Wireless Cellular Networks.

    PubMed

    Li, Ming; Chen, Pengpeng; Gao, Shouwan

    2016-09-13

    Ultra-dense wireless cellular networks have been envisioned as a promising technique for handling the explosive increase of wireless traffic volume. With the extensive deployment of small cells in wireless cellular networks, the network spectral efficiency (SE) is improved with the use of limited frequency. However, the mutual inter-tier and intra-tier interference between or among small cells and macro cells becomes serious. On the other hand, more chances for potential cooperation among different cells are introduced. Energy efficiency (EE) has become one of the most important problems for future wireless networks. This paper proposes a cooperative bargaining game-based method for comprehensive EE management in an ultra-dense wireless cellular network, which highlights the complicated interference influence on energy-saving challenges and the power-coordination process among small cells and macro cells. Notably, a unified EE utility that takes interference mitigation into consideration is proposed to jointly address the SE, the deployment efficiency (DE), and the EE. In particular, closed-form power-coordination solutions for the optimal EE are derived to show the convergence property of the algorithm. Moreover, a simplified algorithm is presented to reduce the complexity of the signaling overhead, which is significant for ultra-dense small cells. Finally, numerical simulations are provided to illustrate the efficiency of the proposed cooperative bargaining game-based and simplified schemes.

  15. Evolution, learning, and cognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y.C.

    1988-01-01

    The book comprises more than fifteen articles in the areas of neural networks and connectionist systems, classifier systems, adaptive network systems, genetic algorithm, cellular automata, artificial immune systems, evolutionary genetics, cognitive science, optical computing, combinatorial optimization, and cybernetics.

  16. Plasmid mapping computer program.

    PubMed Central

    Nolan, G P; Maina, C V; Szalay, A A

    1984-01-01

    Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA which has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus Microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adducts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105

  17. A Program Complexity Metric Based on Variable Usage for Algorithmic Thinking Education of Novice Learners

    ERIC Educational Resources Information Center

    Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.

    2015-01-01

    We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…

  18. The Impact of Online Algorithm Visualization on ICT Students' Achievements in Introduction to Programming Course

    ERIC Educational Resources Information Center

    Saltan, Fatih

    2017-01-01

    Online Algorithm Visualization (OAV) is one of the recent developments in the instructional technology field that aims to help students handle difficulties faced when they begin to learn programming. This study aims to investigate the effect of online algorithm visualization on students' achievement in the introduction to programming course. To…

  19. Application of ischemic postconditioning's algorithms in tissues protection: response to methodological gaps in preclinical and clinical studies.

    PubMed

    Feyzizadeh, Saeid; Badalzadeh, Reza

    2017-10-01

    Ischaemic postconditioning (IPostC) was introduced for the first time by Zhao et al. as a feasible method for reduction of myocardial ischaemia-reperfusion (IR) injury. The cardioprotection by this protocol has been extensively evaluated in various species. Then, further research revealed that IPostC is a safe and convenient approach in limiting IR injury of non-myocardial tissues such as lung, liver, kidney, intestine, skeletal muscle, brain and spinal cord. IPostC has been conducted with different algorithms, resulting in diverse effects. The possible important factors leading to these differences are the difference in activation levels of signalling pathways and protective mediators by any algorithm, presence or absence of IPostC effectors in each tissue, or intrinsic characteristics of the tissues as well as the methodological biases. Also, conflicting results have been reported with the application of the same IPostC algorithm in certain tissues or animal species. The effectiveness of IPostC may depend upon various parameters including the species and the tissues characteristics. For example, different heart rates and metabolic rates of the species and unequal amounts of perfusion and blood flow of the tissues should be considered as important determinants of IPostC effectiveness and should be thought about in designing IPostC algorithms for future studies. Due to these discrepancies, there is still no optimal single IPostC algorithm applicable to any tissue or any species. This issue is the main topic of the present article. © 2017 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.

  20. Urban Growth Modeling Using Cellular Automata with Multi-Temporal Remote Sensing Images Calibrated by the Artificial Bee Colony Optimization Algorithm.

    PubMed

    Naghibi, Fereydoun; Delavar, Mahmoud Reza; Pijanowski, Bryan

    2016-12-14

    Cellular Automata (CA) is one of the most common techniques used to simulate the urbanization process. CA-based urban models use transition rules to deliver spatial patterns of urban growth and urban dynamics over time. Determining the optimum transition rules of the CA is a critical step because of the heterogeneity and nonlinearities existing among urban growth driving forces. Recently, new CA models integrated with optimization methods based on swarm intelligence algorithms were proposed to overcome this drawback. The Artificial Bee Colony (ABC) algorithm is an advanced meta-heuristic swarm intelligence-based algorithm. Here, we propose a novel CA-based urban change model that uses the ABC algorithm to extract optimum transition rules. We applied the proposed ABC-CA model to simulate future urban growth in Urmia (Iran) with multi-temporal Landsat images from 1997, 2006 and 2015. Validation of the simulation results was made through statistical methods such as overall accuracy, the figure of merit and total operating characteristics (TOC). Additionally, we calibrated the CA model by ant colony optimization (ACO) to assess the performance of our proposed model versus similar swarm intelligence algorithm methods. We showed that the overall accuracy and the figure of merit of the ABC-CA model are 90.1% and 51.7%, which are 2.9% and 8.8% higher than those of the ACO-CA model, respectively. Moreover, the allocation disagreement of the simulation results for the ABC-CA model is 9.9%, which is 2.9% less than that of the ACO-CA model. Finally, the ABC-CA model also outperforms the ACO-CA model with fewer quantity and allocation errors and slightly more hits.

  1. Urban Growth Modeling Using Cellular Automata with Multi-Temporal Remote Sensing Images Calibrated by the Artificial Bee Colony Optimization Algorithm

    PubMed Central

    Naghibi, Fereydoun; Delavar, Mahmoud Reza; Pijanowski, Bryan

    2016-01-01

    Cellular Automata (CA) is one of the most common techniques used to simulate the urbanization process. CA-based urban models use transition rules to deliver spatial patterns of urban growth and urban dynamics over time. Determining the optimum transition rules of the CA is a critical step because of the heterogeneity and nonlinearities existing among urban growth driving forces. Recently, new CA models integrated with optimization methods based on swarm intelligence algorithms were proposed to overcome this drawback. The Artificial Bee Colony (ABC) algorithm is an advanced meta-heuristic swarm intelligence-based algorithm. Here, we propose a novel CA-based urban change model that uses the ABC algorithm to extract optimum transition rules. We applied the proposed ABC-CA model to simulate future urban growth in Urmia (Iran) with multi-temporal Landsat images from 1997, 2006 and 2015. Validation of the simulation results was made through statistical methods such as overall accuracy, the figure of merit and total operating characteristics (TOC). Additionally, we calibrated the CA model by ant colony optimization (ACO) to assess the performance of our proposed model versus similar swarm intelligence algorithm methods. We showed that the overall accuracy and the figure of merit of the ABC-CA model are 90.1% and 51.7%, which are 2.9% and 8.8% higher than those of the ACO-CA model, respectively. Moreover, the allocation disagreement of the simulation results for the ABC-CA model is 9.9%, which is 2.9% less than that of the ACO-CA model. Finally, the ABC-CA model also outperforms the ACO-CA model with fewer quantity and allocation errors and slightly more hits. PMID:27983633

  2. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    PubMed

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the grid points of interest. It is proved that the proposed algorithm is a fully polynomial time approximation scheme as the ratio terms are fixed in the objective function to problem (P), based on the computational complexity result. In contrast to existing results in the literature, the algorithm does not require assumptions on quasi-concavity or low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.

  3. Coverage Extension and Balancing the Transmitted Power of the Moving Relay Node at LTE-A Cellular Network

    PubMed Central

    Aldhaibani, Jaafar A.; Yahya, Abid; Ahmad, R. Badlishah

    2014-01-01

    The poor capacity at cell boundaries is not enough to meet the growing demand and the stringent design requirements of high capacity and throughput irrespective of the user's location in the cellular network. In this paper, we propose new schemes for an optimum fixed relay node (RN) placement in an LTE-A cellular network to enhance throughput and coverage extension at the cell edge region. The proposed approach mitigates interferences between all nodes and ensures optimum utilization with the optimization of transmitted power. Moreover, we propose a new algorithm to balance the transmitted power of a moving relay node (MR) over the cell size, providing the required SNR and throughput for the users inside the vehicle along with reducing the transmitted power consumption of the MR. The numerical analysis along with the simulation results indicates a 40% improvement in user capacity at downlink transmission relative to cell capacity. Furthermore, the results revealed a saving of nearly 75% in transmitted power of the MR after using the proposed balancing algorithm. The ATDI simulator, which deals with real digital cartographic and standard formats for terrain, was used to verify the numerical results. PMID:24672378

  4. Coverage extension and balancing the transmitted power of the moving relay node at LTE-A cellular network.

    PubMed

    Aldhaibani, Jaafar A; Yahya, Abid; Ahmad, R Badlishah

    2014-01-01

    The poor capacity at cell boundaries is not enough to meet the growing demand and the stringent design requirements of high capacity and throughput irrespective of the user's location in the cellular network. In this paper, we propose new schemes for an optimum fixed relay node (RN) placement in an LTE-A cellular network to enhance throughput and coverage extension at the cell edge region. The proposed approach mitigates interferences between all nodes and ensures optimum utilization with the optimization of transmitted power. Moreover, we propose a new algorithm to balance the transmitted power of a moving relay node (MR) over the cell size, providing the required SNR and throughput for the users inside the vehicle along with reducing the transmitted power consumption of the MR. The numerical analysis along with the simulation results indicates a 40% improvement in user capacity at downlink transmission relative to cell capacity. Furthermore, the results revealed a saving of nearly 75% in transmitted power of the MR after using the proposed balancing algorithm. The ATDI simulator, which deals with real digital cartographic and standard formats for terrain, was used to verify the numerical results.

  5. Complexity Theory

    USGS Publications Warehouse

    Lee, William H K.

    2016-01-01

    A complex system consists of many interacting parts, generates new collective behavior through self organization, and adaptively evolves through time. Many theories have been developed to study complex systems, including chaos, fractals, cellular automata, self organization, stochastic processes, turbulence, and genetic algorithms.

  6. Strategies for using cellular automata to locate constrained layer damping on vibrating structures

    NASA Astrophysics Data System (ADS)

    Chia, C. M.; Rongong, J. A.; Worden, K.

    2009-01-01

    It is often hard to optimise constrained layer damping (CLD) for structures more complicated than simple beams and plates as its performance depends on its location, the shape of the applied patch, the mode shapes of the structure and the material properties. This paper considers the use of cellular automata (CA) in conjunction with finite element analysis to obtain an efficient coverage of CLD on structures. The effectiveness of several different sets of local rules governing the CA are compared against each other for a structure with known optimum coverage—namely a plate. The algorithm that most closely replicates known optimal configurations is considered the most successful. This algorithm is then used to generate an efficient CLD treatment that targets several modes of a curved composite panel. To validate the modelling approaches used, results are also presented of a comparison between theoretical and experimentally obtained modal properties of the damped curved panel.

  7. Numerical simulation of deformation and fracture of space protective shell structures from concrete and fiber concrete under pulse loading

    NASA Astrophysics Data System (ADS)

    Radchenko, P. A.; Batuev, S. P.; Radchenko, A. V.; Plevkov, V. S.

    2015-11-01

    This paper presents results of numerical simulation of interaction between a Boeing 747-400 aircraft and the protective shell of a nuclear power plant. The shell is presented as a complex multilayered cellular structure comprising layers of concrete and fiber concrete bonded with steel trusses. Numerical simulation was performed three-dimensionally using the authors' algorithm and software taking into account algorithms for building grids of complex geometric objects and parallel computations. The dynamics of the stress-strain state and fracture of the structure were studied. Destruction is described using a two-stage model that allows taking into account anisotropy of elastic and strength properties of concrete and fiber concrete. It is shown that wave processes initiate destruction of the cellular shell structure: cells start to destruct in an unloading wave originating after the compression wave reaches the free surfaces of the cells.

  8. Modeling of fracture of protective concrete structures under impact loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radchenko, P. A., E-mail: radchenko@live.ru; Batuev, S. P.; Radchenko, A. V.

    This paper presents results of numerical simulation of interaction between a Boeing 747-400 aircraft and the protective shell of a nuclear power plant. The shell is presented as a complex multilayered cellular structure consisting of layers of concrete and fiber concrete bonded with steel trusses. Numerical simulation was performed three-dimensionally using the original algorithm and software taking into account algorithms for building grids of complex geometric objects and parallel computations. Dynamics of the stress-strain state and fracture of the structure were studied. Destruction is described using a two-stage model that allows taking into account anisotropy of elastic and strength properties of concrete and fiber concrete. It is shown that wave processes initiate destruction of the cellular shell structure; cells start to destruct in an unloading wave originating after the compression wave arrival at free cell surfaces.

  9. Cell segmentation in time-lapse fluorescence microscopy with temporally varying sub-cellular fusion protein patterns.

    PubMed

    Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M

    2009-01-01

    Fluorescently tagged proteins such as GFP-PCNA produce rich dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis and abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N^4) to O(N^2) for an image with N^2 pixels, making it practical and scalable for high-throughput microscopy imaging studies.

  10. Ecological comparison of cellular stress responses among populations - normalizing RT-qPCR values to investigate differential environmental adaptations.

    PubMed

    Koenigstein, Stefan; Pöhlmann, Kevin; Held, Christoph; Abele, Doris

    2013-05-16

    Rising temperatures and other environmental factors influenced by global climate change can cause increased physiological stress for many species and lead to range shifts or regional population extinctions. To advance the understanding of species' response to change and establish links between individual and ecosystem adaptations, physiological reactions have to be compared between populations living in different environments. Although changes in expression of stress genes are relatively easy to quantify, methods for reliable comparison of the data remain a contentious issue. Using normalization algorithms and further methodological considerations, we compare cellular stress response gene expression levels measured by RT-qPCR after air exposure experiments among different subpopulations of three species of the intertidal limpet Nacella. Reference gene assessment algorithms reveal that stable reference genes can differ among investigated populations and / or treatment groups. Normalized expression values point to differential defense strategies to air exposure in the investigated populations, which either employ a pronounced cellular stress response in the inducible Hsp70 forms, or exhibit a comparatively high constitutive expression of Hsps (heat shock proteins) while showing only little response in terms of Hsp induction. This study serves as a case study to explore the methodological prerequisites of physiological stress response comparisons among ecologically and phylogenetically different organisms. To improve the reliability of gene expression data and compare the stress responses of subpopulations under potential genetic divergence, reference gene stability algorithms are valuable and necessary tools. As the Hsp70 isoforms have been shown to play different roles in the acute stress responses and increased constitutive defenses of populations in their different habitats, these comparative studies can yield insight into physiological strategies of adaptation to environmental stress and provide hints for the prudent use of the cellular stress response as a biomarker to study environmental stress and stress adaptation of populations under changing environmental conditions.

  11. Ecological comparison of cellular stress responses among populations – normalizing RT-qPCR values to investigate differential environmental adaptations

    PubMed Central

    2013-01-01

    Background Rising temperatures and other environmental factors influenced by global climate change can cause increased physiological stress for many species and lead to range shifts or regional population extinctions. To advance the understanding of species' response to change and establish links between individual and ecosystem adaptations, physiological reactions have to be compared between populations living in different environments. Although changes in expression of stress genes are relatively easy to quantify, methods for reliable comparison of the data remain a contentious issue. Using normalization algorithms and further methodological considerations, we compare cellular stress response gene expression levels measured by RT-qPCR after air exposure experiments among different subpopulations of three species of the intertidal limpet Nacella. Results Reference gene assessment algorithms reveal that stable reference genes can differ among investigated populations and/or treatment groups. Normalized expression values point to differential defense strategies against air exposure in the investigated populations, which either employ a pronounced cellular stress response in the inducible Hsp70 forms, or exhibit a comparatively high constitutive expression of Hsps (heat shock proteins) while showing little response in terms of Hsp induction. Conclusions This study serves as a case study to explore the methodological prerequisites of physiological stress response comparisons among ecologically and phylogenetically different organisms. To improve the reliability of gene expression data and compare the stress responses of subpopulations under potential genetic divergence, reference gene stability algorithms are valuable and necessary tools. As the Hsp70 isoforms have been shown to play different roles in the acute stress responses and increased constitutive defenses of populations in their different habitats, these comparative studies can yield insight into physiological strategies of adaptation to environmental stress and provide hints for the prudent use of the cellular stress response as a biomarker to study environmental stress and stress adaptation of populations under changing environmental conditions. PMID:23680017

  12. Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.

    PubMed

    Wang, Haizhou; Song, Mingzhou

    2011-12-01

    The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
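
    For illustration, a minimal Python sketch of the underlying O(k n^2) dynamic-programming recurrence (the released R package uses a faster, more refined implementation; the sample data below are invented):

        import math

        def ckmeans_1d(x, k):
            """Optimal k-clustering of 1-D data by dynamic programming."""
            x = sorted(x)
            n = len(x)
            # prefix sums of x and x^2 give O(1) within-cluster sum-of-squares
            s = [0.0] * (n + 1)
            sq = [0.0] * (n + 1)
            for i, v in enumerate(x):
                s[i + 1] = s[i] + v
                sq[i + 1] = sq[i] + v * v

            def cost(j, i):
                # within-cluster sum of squares of x[j..i] (0-based, inclusive)
                m = i - j + 1
                seg = s[i + 1] - s[j]
                return sq[i + 1] - sq[j] - seg * seg / m

            # D[q][i]: optimal cost of splitting x[0..i-1] into q clusters
            D = [[math.inf] * (n + 1) for _ in range(k + 1)]
            B = [[0] * (n + 1) for _ in range(k + 1)]  # start of last cluster
            D[0][0] = 0.0
            for q in range(1, k + 1):
                for i in range(q, n + 1):
                    for j in range(q - 1, i):  # last cluster is x[j..i-1]
                        c = D[q - 1][j] + cost(j, i - 1)
                        if c < D[q][i]:
                            D[q][i] = c
                            B[q][i] = j
            # backtrack the cluster boundaries as value ranges
            bounds, i = [], n
            for q in range(k, 0, -1):
                j = B[q][i]
                bounds.append((x[j], x[i - 1]))
                i = j
            return D[k][n], list(reversed(bounds))

        # three well-separated groups; total within-cluster SS is about 0.085
        print(ckmeans_1d([-1.0, 2.0, 2.1, 10.0, 10.2, 10.4], 3))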

  13. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
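
    The dynamic-programming idea is easiest to see in the width-1 special case, where the tree decomposition is the tree itself; below is a minimal Python sketch of maximum weighted independent set on a tree (graph and weights are invented, and this is not INDDGO's code):

        # Maximum weighted independent set by DP on a tree: the width-1
        # special case of the tree-decomposition DP that INDDGO generalizes
        # to arbitrary treewidth.
        def mwis_tree(adj, w, root=0):
            inc, exc = {}, {}  # best subtree weight with root included/excluded
            def dfs(v, parent):
                inc[v], exc[v] = w[v], 0
                for u in adj[v]:
                    if u != parent:
                        dfs(u, v)
                        inc[v] += exc[u]                # neighbors of an included node are excluded
                        exc[v] += max(inc[u], exc[u])   # otherwise take the better option
            dfs(root, None)
            return max(inc[root], exc[root])

        adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
        w = {0: 1, 1: 4, 2: 5, 3: 3, 4: 2}
        print(mwis_tree(adj, w))  # 10: take nodes 2, 3 and 4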

  14. PCSYS: The optimal design integration system picture drawing system with hidden line algorithm capability for aerospace vehicle configurations

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Vanderburg, J. D.

    1977-01-01

    A vehicle geometric definition based upon quadrilateral surface elements is used to produce realistic pictures of an aerospace vehicle. The PCSYS programs can be used to visually check geometric data input, monitor geometric perturbations, and to visualize the complex spatial inter-relationships between the internal and external vehicle components. PCSYS has two major component programs. The first program, IMAGE, draws a complex aerospace vehicle pictorial representation based on either an approximate but rapid hidden line algorithm or without any hidden line algorithm. The second program, HIDDEN, draws a vehicle representation using an accurate but time consuming hidden line algorithm.

  15. Communications oriented programming of parallel iterative solutions of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Patrick, M. L.; Pratt, T. W.

    1986-01-01

    Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.

  16. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is derived by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence is proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
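
    To illustrate the branch-and-bound skeleton only, here is a toy Python sketch that minimizes a product of two affine functions over a box; note that it substitutes simple interval bounds for the paper's two-phase linear relaxation, so it mirrors the structure of such algorithms rather than the authors' method:

        import heapq

        def affine_range(c, d, lo, hi):
            # exact lower/upper bounds of c.x + d over the box [lo, hi]
            lo_v = d + sum(ci * (l if ci >= 0 else h) for ci, l, h in zip(c, lo, hi))
            hi_v = d + sum(ci * (h if ci >= 0 else l) for ci, l, h in zip(c, lo, hi))
            return lo_v, hi_v

        def minimize_product(c1, d1, c2, d2, lo, hi, tol=1e-6):
            def f(x):
                return (sum(a * b for a, b in zip(c1, x)) + d1) * \
                       (sum(a * b for a, b in zip(c2, x)) + d2)
            def lower_bound(lo, hi):
                a, b = affine_range(c1, d1, lo, hi)
                p, q = affine_range(c2, d2, lo, hi)
                return min(a * p, a * q, b * p, b * q)
            best_x = [(l + h) / 2 for l, h in zip(lo, hi)]
            best = f(best_x)
            heap = [(lower_bound(lo, hi), lo, hi)]
            while heap:
                lb, lo, hi = heapq.heappop(heap)
                if lb >= best - tol:
                    continue                      # node cannot improve the incumbent: prune
                mid = [(l + h) / 2 for l, h in zip(lo, hi)]
                if f(mid) < best:
                    best, best_x = f(mid), mid    # update incumbent
                i = max(range(len(lo)), key=lambda j: hi[j] - lo[j])  # widest edge
                for child_lo, child_hi in (
                    (lo, hi[:i] + [mid[i]] + hi[i + 1:]),
                    (lo[:i] + [mid[i]] + lo[i + 1:], hi),
                ):
                    heapq.heappush(heap, (lower_bound(child_lo, child_hi),
                                          child_lo, child_hi))
            return best, best_x

        # minimize (x0 + x1 - 1)(x0 - x1 + 2) over [0, 1]^2; optimum is -2 at (0, 0)
        print(minimize_product([1, 1], -1.0, [1, -1], 2.0, [0.0, 0.0], [1.0, 1.0]))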

  17. A Reinforcement Sensor Embedded Vertical Handoff Controller for Vehicular Heterogeneous Wireless Networks

    PubMed Central

    Li, Limin; Xu, Yubin; Soong, Boon-Hee; Ma, Lin

    2013-01-01

    Vehicular communication platforms that provide real-time access to wireless networks have drawn more and more attention in recent years. IEEE 802.11p is the main radio access technology that supports communication for high mobility terminals; however, due to its limited coverage, IEEE 802.11p is usually deployed by coupling with cellular networks to achieve seamless mobility. In a heterogeneous cellular/802.11p network, vehicular communication is characterized by its short time span in association with a wireless local area network (WLAN). Moreover, for the media access control (MAC) scheme used for WLAN, the network throughput dramatically decreases with an increasing number of users. In response to these compelling problems, we propose a reinforcement sensor (RFS) embedded vertical handoff control strategy to support mobility management. The RFS has online learning capability and can provide optimal handoff decisions in an adaptive fashion without prior knowledge. The algorithm integrates considerations including vehicular mobility, traffic load, handoff latency, and network status. Simulation results verify that the proposed algorithm can adaptively adjust the handoff strategy, allowing users to stay connected to the best network. Furthermore, the algorithm can ensure that RSUs (roadside units) are adequate, thereby guaranteeing a high quality user experience. PMID:24193101

  18. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  19. Conformational landscape of an amyloid intra-cellular domain and Landau-Ginzburg-Wilson paradigm in protein dynamics

    NASA Astrophysics Data System (ADS)

    Dai, Jin; Niemi, Antti J.; He, Jianfeng

    2016-07-01

    The Landau-Ginzburg-Wilson paradigm is proposed as a framework to investigate the conformational landscape of intrinsically unstructured proteins. A universal Cα-trace Landau free energy is deduced from general symmetry considerations, with the ensuing all-atom structure modeled using the publicly available reconstruction programs Pulchra and Scwrl. As an example, the conformational stability of an amyloid precursor protein intra-cellular domain (AICD) is inspected; the reference conformation is the crystallographic structure with code 3DXC in the Protein Data Bank (PDB) that describes a heterodimer of AICD and a nuclear multi-domain adaptor protein Fe65. Those conformations of AICD that correspond to local or near-local minima of the Landau free energy are identified. For this, the response of the original 3DXC conformation to variations in the ambient temperature is investigated using the Glauber algorithm. The conclusion is that in isolation the AICD conformation in 3DXC must be unstable. A family of degenerate conformations that minimise the Landau free energy is identified, and it is proposed that the native state of an isolated AICD is a superposition of these conformations. The results are fully in line with the presumed intrinsically unstructured character of isolated AICD and should provide a basis for a systematic analysis of AICD structure in future NMR experiments.
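
    The Glauber update rule itself is simple; here is a minimal Python sketch on a toy 2-D Ising model (the paper applies the same single-site rule to its Landau free energy, not to this Hamiltonian):

        import math, random

        def glauber_step(spins, T, J=1.0):
            n = len(spins)
            i, j = random.randrange(n), random.randrange(n)
            nb = spins[(i + 1) % n][j] + spins[(i - 1) % n][j] \
               + spins[i][(j + 1) % n] + spins[i][(j - 1) % n]
            dE = 2.0 * J * spins[i][j] * nb  # energy change if spin (i, j) flips
            # Glauber acceptance: flip with probability 1 / (1 + exp(dE / T))
            if random.random() < 1.0 / (1.0 + math.exp(dE / T)):
                spins[i][j] *= -1

        random.seed(0)
        n = 16
        spins = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
        for _ in range(50000):
            glauber_step(spins, T=1.5)
        print(sum(sum(row) for row in spins) / n ** 2)  # magnetization per site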

  20. Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation

    DOE PAGES

    Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir

    2016-05-01

    The recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We also test the algorithms on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.

  1. Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization

    DTIC Science & Technology

    2010-03-31

    optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested

  2. Time-dependent computational studies of flames in microgravity

    NASA Technical Reports Server (NTRS)

    Oran, Elaine S.; Kailasanath, K.

    1989-01-01

    The research performed at the Center for Reactive Flow and Dynamical Systems in the Laboratory for Computational Physics and Fluid Dynamics, at the Naval Research Laboratory, in support of the NASA Microgravity Science and Applications Program is described. The primary focus was on investigating fundamental questions concerning the propagation and extinction of premixed flames in Earth gravity and in microgravity environments. The approach was to use detailed time-dependent, multispecies, numerical models as tools to simulate flames in different gravity environments. The models include a detailed chemical kinetics mechanism consisting of elementary reactions among the eight reactive species involved in hydrogen combustion, coupled to algorithms for convection, thermal conduction, viscosity, molecular and thermal diffusion, and external forces. The external force, gravity, can be put in any direction relative to flame propagation and can have a range of values. A combination of one-dimensional and two-dimensional simulations was used to investigate the effects of curvature and dilution on ignition and propagation of flames, to help resolve fundamental questions on the existence of flammability limits when there are no external losses or buoyancy forces in the system, to understand the mechanism leading to cellular instability, and to study the effects of gravity on the transition to cellular structure. A flame in a microgravity environment can be extinguished without external losses, and the mechanism leading to cellular structure is not preferential diffusion but a thermo-diffusive instability. The simulations have also led to a better understanding of the interactions between buoyancy forces and the processes leading to thermo-diffusive instability.

  3. Dynamic UNITY

    DTIC Science & Technology

    2002-01-01

    UNITY program that implements exactly the same algorithm as Specification 1.1. The correctness of this program is proven in a manner sim-...program...chapter, we introduce the Dynamic UNITY formalism, which allows us to reason about algorithms and protocols in which the sets of participating processes...implements Euclid's algorithm for calculating the greatest common divisor (GCD) of two integers; it repeatedly reads an integer message from each of its

  4. Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU

    NASA Astrophysics Data System (ADS)

    Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei

    2013-09-01

    The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed. For the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyzed the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of schemes and optimization. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of optimization. Experimental results also showed that the best-performing CUDA program outperformed the serial libquantum program on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
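
    For reference, a serial NumPy state-vector simulation of Grover iterations, the kind of computation such CUDA programs parallelize (the problem size and marked item are invented):

        import numpy as np

        n_qubits, marked = 10, 517          # search space of N = 2**10 items
        N = 2 ** n_qubits
        psi = np.full(N, 1.0 / np.sqrt(N))  # uniform superposition
        iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
        for _ in range(iterations):
            psi[marked] *= -1.0             # oracle: phase-flip the marked item
            psi = 2.0 * psi.mean() - psi    # diffusion: inversion about the mean
        print(iterations, float(psi[marked] ** 2))  # success probability near 1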

  5. Epstein-Barr virus growth/latency III program alters cellular microRNA expression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, Jennifer E.; Tulane Cancer Center, Tulane University Health Sciences Center, 1430 Tulane Avenue, SL79, New Orleans, LA 70112; Fewell, Claire

    The Epstein-Barr virus (EBV) is associated with lymphoid and epithelial cancers. Initial EBV infection alters lymphocyte gene expression, inducing cellular proliferation and differentiation as the virus transitions through consecutive latency transcription programs. Cellular microRNAs (miRNAs) are important regulators of signaling pathways and are implicated in carcinogenesis. The extent to which EBV exploits cellular miRNAs is unknown. Using micro-array analysis and quantitative PCR, we demonstrate differential expression of cellular miRNAs in type III versus type I EBV latency including elevated expression of miR-21, miR-23a, miR-24, miR-27a, miR-34a, miR-146a and b, and miR-155. In contrast, miR-28 expression was found to be lowermore » in type III latency. The EBV-mediated regulation of cellular miRNAs may contribute to EBV signaling and associated cancers.« less

  6. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Film telescope-spectrometer on the Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  7. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching

    PubMed Central

    Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng

    2017-01-01

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images. PMID:28885547

  8. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.

    PubMed

    Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng

    2017-09-08

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.
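
    The single-channel baseline that such seam-line methods extend can be sketched in a few lines of Python; this is the classical one-pass energy-accumulation DP (Duplaquet-style), not the paper's dual-channel variant:

        import numpy as np

        def best_seam(diff):
            """diff: HxW float array of per-pixel color-difference energy over
            the overlap region. Returns one column index per row tracing the
            minimum-energy seam."""
            h, w = diff.shape
            acc = diff.copy()
            for y in range(1, h):  # accumulate energy from the row above
                left = np.r_[np.inf, acc[y - 1, :-1]]
                right = np.r_[acc[y - 1, 1:], np.inf]
                acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
            seam = np.empty(h, dtype=int)
            seam[-1] = int(np.argmin(acc[-1]))
            for y in range(h - 2, -1, -1):  # trace back within a 3-pixel window
                x = seam[y + 1]
                lo, hi = max(0, x - 1), min(w, x + 2)
                seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
            return seam

        rng = np.random.default_rng(1)
        print(best_seam(rng.random((6, 8))))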

  9. Heterogeneity of Metazoan Cells and Beyond: To Integrative Analysis of Cellular Populations at Single-Cell Level.

    PubMed

    Barteneva, Natasha S; Vorobjev, Ivan A

    2018-01-01

    In this paper, we review some of the recent advances in cellular heterogeneity and single-cell analysis methods. In modern research of cellular heterogeneity, there are four major approaches: analysis of pooled samples, single-cell analysis, high-throughput single-cell analysis, and lately integrated analysis of cellular populations at a single-cell level. Recently developed high-throughput single-cell genetic analysis methods such as RNA-Seq require a purification step and destruction of the analyzed cell, often providing a snapshot of the investigated cell without spatiotemporal context. Correlative analysis of multiparameter morphological, functional, and molecular information is important for differentiation of more uniform groups in the spectrum of different cell types. Simplified distributions (histograms and 2D plots) can underrepresent biologically significant subpopulations. Future directions may include the development of nondestructive methods for dissecting molecular events in intact cells, simultaneous correlative cellular analysis of phenotypic and molecular features by hybrid technologies such as imaging flow cytometry, and further progress in supervised and unsupervised statistical analysis algorithms.

  10. A short note on dynamic programming in a band.

    PubMed

    Gibrat, Jean-François

    2018-06-15

    Third generation sequencing technologies generate long reads that exhibit high error rates, in particular for insertions and deletions, which are usually the most difficult errors to cope with. The only exact algorithm capable of aligning sequences with insertions and deletions is a dynamic programming algorithm. In this note, for the sake of efficiency, we consider dynamic programming in a band. We show how to choose the band width as a function of the long reads' error rates, thus obtaining an [Formula: see text] algorithm in space and time. We also propose a procedure to decide whether this algorithm, when applied to semi-global alignments, provides the optimal score. We suggest that dynamic programming in a band is well suited to the problem of aligning long reads between themselves and can be used as a core component of methods for obtaining a consensus sequence from the long reads alone. The function implementing the dynamic programming algorithm in a band is available, as a standalone program, at: https://forgemia.inra.fr/jean-francois.gibrat/BAND_DYN_PROG.git.
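
    A minimal Python sketch of global alignment restricted to a band of half-width b around the diagonal, the O(nb) scheme the note analyzes (the scoring parameters here are invented, not the note's):

        def banded_align(a, b, band, match=1, mismatch=-1, gap=-2):
            n, m = len(a), len(b)
            # row i keeps scores only for columns j with |i - j| <= band
            prev = {j: j * gap for j in range(0, min(m, band) + 1)}
            for i in range(1, n + 1):
                cur = {}
                for j in range(max(0, i - band), min(m, i + band) + 1):
                    cands = []
                    if j > 0 and (j - 1) in prev:      # match / mismatch
                        s = match if a[i - 1] == b[j - 1] else mismatch
                        cands.append(prev[j - 1] + s)
                    if j in prev:                      # gap in b
                        cands.append(prev[j] + gap)
                    if (j - 1) in cur:                 # gap in a
                        cands.append(cur[j - 1] + gap)
                    if cands:
                        cur[j] = max(cands)
                prev = cur
            return prev.get(m)  # None when column m falls outside the band

        print(banded_align("GATTACA", "GATCACA", band=2))  # 5: six matches, one mismatch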

  11. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.

  12. A comparison of common programming languages used in bioinformatics.

    PubMed

    Fourment, Mathieu; Gillings, Michael R

    2008-02-05

    The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.

  13. A comparison of common programming languages used in bioinformatics

    PubMed Central

    Fourment, Mathieu; Gillings, Michael R

    2008-01-01

    Background The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Results Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. Conclusion This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language. PMID:18251993

  14. An O(√nL) primal-dual affine scaling algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Siming

    1994-12-31

    We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a recent new affine scaling direction of Jansen, Roos and Terlaky. The algorithm has an iteration complexity of O(√nL), compared to the O(nL) complexity of the Jansen, Roos and Terlaky algorithm.
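
    For intuition, a Python sketch of one classical Dikin primal affine-scaling iteration, the first ingredient of the combined direction (the paper's primal-dual method and its O(√nL) analysis are more involved; the example problem is invented):

        import numpy as np

        def affine_scaling_step(A, c, x, alpha=0.9):
            """One Dikin step for min c.x subject to Ax = b, x > 0."""
            D2 = np.diag(x * x)                             # scale by the current iterate
            y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
            r = c - A.T @ y                                 # reduced costs
            dx = -D2 @ r                                    # steepest descent in scaled space
            if np.all(dx >= 0):
                raise ValueError("problem is unbounded")
            t = alpha * np.min(-x[dx < 0] / dx[dx < 0])     # stay strictly interior
            return x + t * dx

        # min x0 + 2*x1 subject to x0 + x1 + x2 = 1 (x2 is a slack), x > 0
        A = np.array([[1.0, 1.0, 1.0]])
        c = np.array([1.0, 2.0, 0.0])
        x = np.array([0.3, 0.3, 0.4])
        for _ in range(30):
            x = affine_scaling_step(A, c, x)
        print(np.round(x, 4), c @ x)   # converges toward x = (0, 0, 1), cost 0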

  15. Three-dimensional super-resolved live cell imaging through polarized multi-angle TIRF.

    PubMed

    Zheng, Cheng; Zhao, Guangyuan; Liu, Wenjie; Chen, Youhua; Zhang, Zhimin; Jin, Luhong; Xu, Yingke; Kuang, Cuifang; Liu, Xu

    2018-04-01

    Measuring three-dimensional nanoscale cellular structures is challenging, especially when the structure is dynamic. Owing to the informative total internal reflection fluorescence (TIRF) imaging obtained under varied illumination angles, multi-angle (MA) TIRF has been shown to offer nanoscale axial and subsecond temporal resolution. However, conventional MA-TIRF still performs poorly in lateral resolution and fails to characterize the depth image in densely distributed regions. Here, we emphasize lateral super-resolution in MA-TIRF, demonstrated by simply introducing polarization modulation into the illumination procedure. Equipped with a sparsity-based and accelerated proximal algorithm, we recover more precise 3D sample structures than previous methods, enabling live cell imaging with a temporal resolution of 2 s and recovering high-resolution mitochondria fission and fusion processes. We also share the recovery program, which is, to the best of our knowledge, the first open-source recovery code for MA-TIRF.

  16. Vectorization with SIMD extensions speeds up reconstruction in electron tomography.

    PubMed

    Agulleiro, J I; Garzón, E M; García, I; Fernández, J J

    2010-06-01

    Electron tomography allows structural studies of cellular structures at molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, and so high performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors in combination with other single processor optimization techniques. This approach succeeds in producing full resolution tomograms with an important reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach is to be run on standard computers without the need of specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processor's SIMD extensions in the field of 3D electron microscopy.

  17. Programming Deep Brain Stimulation for Parkinson's Disease: The Toronto Western Hospital Algorithms.

    PubMed

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Puppi Munhoz, Renato; Fasano, Alfonso

    2016-01-01

    Deep brain stimulation (DBS) is an established and effective treatment for Parkinson's disease (PD). After surgery, a number of extensive programming sessions are performed to define the optimal stimulation parameters. Programming sessions mainly rely on the neurologist's experience. As a result, patients often undergo inconsistent and inefficient stimulation changes, as well as unnecessary visits. We reviewed the literature on initial and follow-up DBS programming procedures and integrated our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We propose four algorithms including the initial programming and specific algorithms tailored to symptoms experienced by patients following DBS: speech disturbances, stimulation-induced dyskinesia and gait impairment. We conducted a literature search of PubMed from inception to July 2014 with the keywords "deep brain stimulation", "festination", "freezing", "initial programming", "Parkinson's disease", "postural instability", "speech disturbances", and "stimulation induced dyskinesia". Seventy papers were considered for this review. Based on the literature review and our experience at TWH, we refined four algorithms for: (1) the initial programming stage, and management of symptoms following DBS, particularly addressing (2) speech disturbances, (3) stimulation-induced dyskinesia, and (4) gait impairment. We propose four algorithms tailored to an individualized approach to managing symptoms associated with DBS and disease progression in patients with PD. We encourage established as well as new DBS centers to test the clinical usefulness of these algorithms in supplementing the current standards of care.

  18. Physics Computing '92: Proceedings of the 4th International Conference

    NASA Astrophysics Data System (ADS)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations 
* Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the ∫4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants

  19. One cutting plane algorithm using auxiliary functions

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Kazaeva, K. E.

    2016-11-01

    We propose an algorithm, from the class of cutting methods, for solving a convex programming problem. The algorithm is characterized by the construction of approximations using auxiliary functions instead of the objective function. Each auxiliary function is based on an exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded into polyhedral sets. Consequently, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.
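
    A Kelley-style sketch in Python of the general cutting-plane loop, where linear cuts approximate an epigraph and each iterate solves an LP; the concrete objective and the use of scipy's linprog are illustrative only, not the authors' auxiliary-function construction:

        import numpy as np
        from scipy.optimize import linprog

        f = lambda x: x[0] ** 2 + x[1] ** 2              # example convex objective
        grad = lambda x: np.array([2 * x[0], 2 * x[1]])

        # variables (x0, x1, t); minimize t subject to accumulated cuts t >= f(xk) + g.(x - xk)
        cuts_A, cuts_b = [], []
        x = np.array([1.0, 1.0])
        for _ in range(25):
            g = grad(x)
            # cut: f(xk) + g.(x - xk) <= t  rewritten as  g.x - t <= g.xk - f(xk)
            cuts_A.append([g[0], g[1], -1.0])
            cuts_b.append(g @ x - f(x))
            res = linprog(c=[0.0, 0.0, 1.0], A_ub=cuts_A, b_ub=cuts_b,
                          bounds=[(-2, 2), (-2, 2), (None, None)])
            x = res.x[:2]
        print(x, f(x))   # approaches the minimizer (0, 0)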

  20. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
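
    The same effect can be obtained by working in log space; a Python sketch of a cumulative Poisson that never underflows or overflows (this mirrors the idea of temporary scale factors, not the CUMPOIS code itself):

        import math

        def poisson_cdf(k, lam):
            # log P(X = i) = i*log(lam) - lam - log(i!)
            log_terms = (i * math.log(lam) - lam - math.lgamma(i + 1)
                         for i in range(k + 1))
            # running log-sum-exp keeps every intermediate value in range
            log_sum = -math.inf
            for lt in log_terms:
                hi, lo = max(log_sum, lt), min(log_sum, lt)
                log_sum = hi + math.log1p(math.exp(lo - hi))
            return math.exp(log_sum)

        # each individual term is about exp(-1000), far below double underflow,
        # yet the log-space sum is computed without difficulty
        print(poisson_cdf(900, 1000.0))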

  1. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  2. Using Small-Step Refinement for Algorithm Verification in Computer Science Education

    ERIC Educational Resources Information Center

    Simic, Danijela

    2015-01-01

    Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…

  3. Evolution of Cellular Automata toward a LIFE-Like Rule Guided by 1/ƒ Noise

    NASA Astrophysics Data System (ADS)

    Ninagawa, Shigeru

    There is evidence in favor of a relationship between the presence of 1/ƒ noise and computational universality in cellular automata. To confirm the relationship, we search for two-dimensional cellular automata with a 1/ƒ power spectrum by means of genetic algorithms. The power spectrum is calculated from the evolution of the state of the cell, starting from a random initial configuration. The fitness is estimated by the power spectrum with consideration of the spectral similarity to the 1/ƒ spectrum. The result shows that the rule with the highest fitness over the most runs exhibits a 1/ƒ type spectrum and its transition function and behavior are quite similar to those of the Game of Life, which is known to be a computationally universal cellular automaton. These results support the relationship between the presence of 1/ƒ noise and computational universality.
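
    A minimal Python sketch of the measurement at the heart of such a fitness function: evolve the Game of Life from a random configuration, take each cell's state time series, and fit the slope of the averaged log power spectrum (details such as the fitness weighting are simplified here):

        import numpy as np

        def life_step(g):
            n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            return ((n == 3) | ((g == 1) & (n == 2))).astype(np.uint8)

        rng = np.random.default_rng(0)
        g = (rng.random((64, 64)) < 0.5).astype(np.uint8)
        T = 1024
        states = np.empty((T, 64, 64), dtype=np.uint8)
        for t in range(T):            # record each cell's state time series
            states[t] = g
            g = life_step(g)

        spec = np.abs(np.fft.rfft(states.astype(float), axis=0)) ** 2
        mean_spec = spec.mean(axis=(1, 2))[1:]          # drop the DC component
        f = np.arange(1, len(mean_spec) + 1)
        slope = np.polyfit(np.log(f), np.log(mean_spec + 1e-12), 1)[0]
        print(slope)   # a slope near -1 indicates 1/f-type behavior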

  4. Real-time stereo matching using orthogonal reliability-based dynamic programming.

    PubMed

    Gong, Minglun; Yang, Yee-Hong

    2007-03-01

    A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for 60% approximately 80% of pixels at the rate of 10 approximately 20 frames per second. If needed, the algorithm can be configured for generating full density disparity maps.

  5. A computerized compensator design algorithm with launch vehicle applications

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.

    1976-01-01

    This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.

  6. Universal algorithms and programs for calculating the motion parameters in the two-body problem

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Sukhanov, A. A.

    1979-01-01

    The algorithms and FORTRAN programs for computing positions and velocities, orbital elements and first and second partial derivatives in the two-body problem are presented. The algorithms are applicable for any value of eccentricity and are convenient for computing various navigation parameters.
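
    As a small illustration of the elliptic case, a Python sketch of the Newton iteration for Kepler's equation (the reported universal-variable algorithms also cover parabolic and hyperbolic orbits; the inputs below are invented):

        import math

        def eccentric_anomaly(M, e, tol=1e-12):
            """Solve E - e*sin(E) = M by Newton iteration (elliptic case)."""
            E = M if e < 0.8 else math.pi   # standard starting guess
            for _ in range(50):
                f = E - e * math.sin(E) - M
                E -= f / (1.0 - e * math.cos(E))   # Newton step
                if abs(f) < tol:
                    return E
            raise RuntimeError("no convergence")

        E = eccentric_anomaly(M=2.0, e=0.7)
        nu = 2.0 * math.atan2(math.sqrt(1 + 0.7) * math.sin(E / 2),
                              math.sqrt(1 - 0.7) * math.cos(E / 2))
        print(E, nu)   # eccentric and true anomaly, in radians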

  7. REACTT: an algorithm for solving spatial equilibrium problems.

    Treesearch

    D.J. Brooks; J. Kincaid

    1987-01-01

    The problem of determining equilibrium prices and quantities in spatially separated markets is reviewed. Algorithms that compute spatial equilibria are discussed. A computer program using the reactive programming algorithm for solving spatial equilibrium problems that involve multiple commodities is presented, along with detailed documentation. A sample data set,...

  8. Algorithm for constructing the programmed motion of a bounding vehicle for the flight phase

    NASA Technical Reports Server (NTRS)

    Lapshin, V. V.

    1979-01-01

    The construction of the programmed motion of a multileg bounding vehicle in the flight phase was studied. An algorithm is given for solving the boundary value problem for constructing this programmed motion. If the motion is shown to be symmetrical, a simplified version of the algorithm can be applied. A method is proposed for avoiding impact of the legs during lift-off of the vehicle, and for softness at touchdown. Tables are utilized to construct this programmed motion over a broad set of standard motion conditions.

  9. HPEPDOCK: a web server for blind peptide-protein docking based on a hierarchical algorithm.

    PubMed

    Zhou, Pei; Jin, Bowen; Li, Hao; Huang, Sheng-You

    2018-05-09

    Protein-peptide interactions are crucial in many cellular functions. Therefore, determining the structure of protein-peptide complexes is important for understanding the molecular mechanism of related biological processes and developing peptide drugs. HPEPDOCK is a novel web server for blind protein-peptide docking through a hierarchical algorithm. Instead of running lengthy simulations to refine peptide conformations, HPEPDOCK considers peptide flexibility through an ensemble of peptide conformations generated by our MODPEP program. For blind global peptide docking, HPEPDOCK obtained a success rate of 33.3% in binding mode prediction on a benchmark of 57 unbound cases when the top 10 models were considered, compared to 21.1% for the pepATTRACT server. HPEPDOCK also performed well in docking against homology models and obtained a success rate of 29.8% within the top 10 predictions. For local peptide docking, HPEPDOCK achieved a high success rate of 72.6% on a benchmark of 62 unbound cases within the top 10 predictions, compared to 45.2% for the HADDOCK peptide protocol. Our HPEPDOCK server is computationally efficient, consuming an average of 29.8 minutes for a global peptide docking job and 14.2 minutes for a local peptide docking job. The HPEPDOCK web server is available at http://huanglab.phys.hust.edu.cn/hpepdock/.

  10. An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander

    NASA Technical Reports Server (NTRS)

    Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.

    1998-01-01

    An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.

  11. Multiple object tracking using the shortest path faster association algorithm.

    PubMed

    Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on the flow network. Then we relax the integer programming problem to a standard linear programming problem. Therefore, the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
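
    For reference, a minimal Python implementation of the shortest path faster algorithm itself, a queue-based Bellman-Ford variant (the toy graph is invented; the paper runs the algorithm on the tracking flow network):

        from collections import deque

        def spfa(n, edges, src):
            adj = [[] for _ in range(n)]
            for u, v, w in edges:
                adj[u].append((v, w))
            dist = [float("inf")] * n
            dist[src] = 0.0
            queue, in_queue = deque([src]), [False] * n
            in_queue[src] = True
            while queue:
                u = queue.popleft()
                in_queue[u] = False
                for v, w in adj[u]:
                    if dist[u] + w < dist[v]:       # relax edge (u, v)
                        dist[v] = dist[u] + w
                        if not in_queue[v]:         # enqueue each node at most once per round
                            queue.append(v)
                            in_queue[v] = True
            return dist

        edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
        print(spfa(4, edges, 0))  # [0.0, 3.0, 1.0, 4.0]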

  12. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    PubMed Central

    Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on the flow network. Then we relax the integer programming problem to a standard linear programming problem. Therefore, the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time. PMID:25215322

  13. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
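
    A sketch of the candidate-generation step in Python, using scipy's Delaunay triangulation; taking triangle centroids as candidate relays is an illustrative assumption (the paper's candidate construction may differ), after which a linear program would select among them:

        import numpy as np
        from scipy.spatial import Delaunay

        # invented terminal coordinates in 2-D Euclidean space
        terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0]])
        tri = Delaunay(terminals)
        # one candidate relay per triangle: its centroid
        candidates = terminals[tri.simplices].mean(axis=1)
        print(tri.simplices)   # vertex indices of each triangle
        print(candidates)      # candidate relay positions fed to the LP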

  14. Incorporating Molecular and Cellular Biology into a Chemical Engineering Degree Program

    ERIC Educational Resources Information Center

    O'Connor, Kim C.

    2005-01-01

    There is a growing need for a workforce that can apply engineering principles to molecular based discovery and product development in the biological sciences. To this end, Tulane University established a degree program that incorporates molecular and cellular biology into the chemical engineering curriculum. In celebration of the tenth anniversary…

  15. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 3: The GREEDY algorithm

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The functional specifications, functional design and flow, and the program logic of the GREEDY computer program are described. The GREEDY program is a submodule of the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) program and has been designed as a continuation of the shuttle Mission Payloads (MPLS) program. The MPLS uses input payload data to form a set of feasible payload combinations; from these, GREEDY selects a subset of combinations (a traffic model) so that all payloads can be included without redundancy. The program also provides a tutorial option that lets the user choose an alternate traffic model in case a particular traffic model is unacceptable.

  16. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systematic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
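
    A rough sketch of the early pipeline stages using scikit-image, with Otsu thresholding standing in for minimum-error thresholding (which scikit-image does not provide) and a bundled sample image standing in for the nuclei channel:

    from skimage import data, filters, morphology, feature

    # Stand-in nuclei image (the paper uses multi-channel immunofluorescence).
    img = data.human_mitosis().astype(float)
    img /= img.max()

    # Top-hat filtering suppresses slowly varying background, a rough
    # stand-in for the paper's inverse-illumination-field correction.
    flat = morphology.white_tophat(img, footprint=morphology.disk(15))

    # Binarize; Otsu used here as a simple proxy for minimum-error thresholding.
    mask = flat > filters.threshold_otsu(flat)

    # Laplacian-of-Gaussian blob detection supplies candidate nuclei seeds.
    seeds = feature.blob_log(flat, min_sigma=2, max_sigma=6, threshold=0.05)
    print(f"{mask.sum()} foreground pixels, {len(seeds)} candidate seeds")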

  17. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES so that it correctly matches 80% of the early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup, which illuminates sensitivity to model uncertainty.
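
    The two posterior metrics are simple to compute from a pair of inundation masks; a sketch with made-up one-dimensional footprints:

    import numpy as np

    def predictive_values(simulated, observed):
        """Positive/negative predictive value of a simulated inundation map.

        simulated, observed: boolean arrays marking inundated cells.
        P(A|B): probability a cell was truly inundated given the model said so.
        P(notA|notB): probability a cell was truly dry given the model said so.
        """
        sim, obs = np.asarray(simulated, bool), np.asarray(observed, bool)
        p_pos = (sim & obs).sum() / sim.sum()
        p_neg = (~sim & ~obs).sum() / (~sim).sum()
        return p_pos, p_neg

    # Toy 1-D example with made-up flow footprints:
    sim = np.array([1, 1, 1, 0, 0], bool)
    obs = np.array([1, 1, 0, 0, 1], bool)
    print(predictive_values(sim, obs))  # (0.666..., 0.5)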

  18. Clustering single cells: a review of approaches on high-and low-depth single-cell RNA-seq data.

    PubMed

    Menon, Vilas

    2017-12-11

    Advances in single-cell RNA-sequencing technology have resulted in a wealth of studies aiming to identify transcriptomic cell types in various biological systems. There are multiple experimental approaches to isolate and profile single cells, which provide different levels of cellular and tissue coverage. In addition, multiple computational strategies have been proposed to identify putative cell types from single-cell data. From a data generation perspective, recent single-cell studies can be classified into two groups: those that distribute reads shallowly over large numbers of cells and those that distribute reads more deeply over a smaller cell population. Although there are advantages to both approaches in terms of cellular and tissue coverage, it is unclear whether different computational cell type identification methods are better suited to one or the other experimental paradigm. This study reviews three cell type clustering algorithms, each representing one of three broad approaches, and finds that PCA-based algorithms appear most suited to low read depth data sets, whereas gene clustering-based and biclustering algorithms perform better on high read depth data sets. In addition, highly related cell classes are better distinguished by higher-depth data, given the same total number of reads; however, simultaneous discovery of distinct and similar types is better served by lower-depth, higher cell number data. Overall, this study suggests that the depth of profiling should be determined by initial assumptions about the diversity of cells in the population, and that subsequently selecting the clustering algorithm(s) based on the depth of profiling will allow for better identification of putative transcriptomic cell types. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
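
    A minimal sketch of the PCA-based flavor of pipeline using scikit-learn on synthetic counts (the review evaluates specific published algorithms on real data; this only illustrates the log-transform, reduce, cluster shape of the approach):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Synthetic cells-by-genes count matrix (stand-in for real scRNA-seq data).
    counts = rng.poisson(1.0, size=(300, 2000)).astype(float)

    # Typical PCA-based pipeline: log-normalize, reduce, then cluster.
    logged = np.log1p(counts)
    embedding = PCA(n_components=20, random_state=0).fit_transform(logged)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)
    print(np.bincount(labels))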

  19. RACER: Effective Race Detection Using AspectJ

    NASA Technical Reports Server (NTRS)

    Bodden, Eric; Havelund, Klaus

    2008-01-01

    Programming errors occur frequently in large software systems, and even more so if these systems are concurrent. In the past, researchers have developed specialized programs to aid programmers in detecting concurrent programming errors such as deadlocks, livelocks, starvation and data races. In this work we propose a language extension to the aspect-oriented programming language AspectJ, in the form of three new built-in pointcuts, lock(), unlock() and maybeShared(), which allow programmers to monitor program events where locks are granted or handed back, and where values are accessed that may be shared amongst multiple Java threads. We decide thread-locality using a static thread-local objects analysis developed by others. Using the three new primitive pointcuts, researchers can directly implement efficient monitoring algorithms to detect concurrent programming errors online. As an example, we present a new algorithm which we call RACER, an adaptation of the well-known ERASER algorithm to the memory model of Java. We implemented the new pointcuts as an extension to the AspectBench Compiler, implemented the RACER algorithm using this language extension and then applied the algorithm to the NASA K9 Rover Executive. Our experiments showed our implementation to be very effective. In the Rover Executive, RACER finds 70 data races. Only one of these races was previously known. We further applied the algorithm to two other multi-threaded programs written by Computer Science researchers, in which we found races as well.
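
    The ERASER lockset idea that RACER adapts can be sketched in a few lines: each shared variable's candidate lockset is intersected with the locks held at every access, and an empty lockset after accesses from two threads signals a potential race. The event format below is invented for illustration:

    def eraser_lockset(events, all_locks):
        """Toy lockset refinement in the spirit of ERASER/RACER.

        events: iterable of (thread_id, variable, locks_held) access records.
        A variable's candidate lockset is intersected on every access; an
        empty lockset after accesses from multiple threads flags a race.
        """
        lockset = {}    # variable -> set of locks consistently held
        accessors = {}  # variable -> set of threads that touched it
        races = set()
        for thread, var, held in events:
            lockset[var] = lockset.get(var, set(all_locks)) & set(held)
            accessors.setdefault(var, set()).add(thread)
            if len(accessors[var]) > 1 and not lockset[var]:
                races.add(var)
        return races

    events = [(1, "x", {"L"}), (2, "x", {"L"}),   # consistently guarded by L
              (1, "y", {"L"}), (2, "y", set())]   # unguarded from thread 2
    print(eraser_lockset(events, {"L", "M"}))     # {'y'}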

  20. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  1. Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán

    2015-01-01

    The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…

  2. Two algorithms for neural-network design and training with application to channel equalization.

    PubMed

    Sweatman, C Z; Mulgrew, B; Gibson, G J

    1998-01-01

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
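
    The error-correction idea behind PLSA is the classic perceptron update, sketched here on a toy separable problem (this is the generic rule, not the slab-specific parameterization of the paper):

    import numpy as np

    def perceptron_train(X, y, epochs=50, lr=1.0):
        """Classic perceptron error-correction rule, the flavor of update
        PLSA uses to avoid linear-programming cost; y in {-1, +1}."""
        w = np.zeros(X.shape[1] + 1)
        Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):
                if yi * (w @ xi) <= 0:             # misclassified: correct w
                    w += lr * yi * xi
        return w

    # Linearly separable toy problem standing in for channel states:
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([-1, -1, -1, 1])                  # AND-like decision
    w = perceptron_train(X, y)
    # Should recover the AND-like labels:
    print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))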

  3. Determination of the Underlying Task Scheduling Algorithm for an Ada Runtime System

    DTIC Science & Technology

    1989-12-01

    was also curious as to how well I could model the test cases with Ada programs. In particular, I wanted to see whether I could model the equal arrival...parameter relationships required to detect the execution of individual algorithms. These test cases were modeled using Ada programs. Then, the...results were analyzed to determine whether the Ada programs were capable of revealing the task scheduling algorithm used by the Ada run-time system. This

  4. GsTL: the geostatistical template library in C++

    NASA Astrophysics Data System (ADS)

    Remy, Nicolas; Shtuka, Arben; Levy, Bruno; Caers, Jef

    2002-10-01

    The development of geostatistics has been mostly accomplished by application-oriented engineers in the past 20 years. The focus on concrete applications gave birth to many algorithms and computer programs designed to address different issues, such as estimating or simulating a variable while possibly accounting for secondary information such as seismic data, or integrating geological and geometrical data. At the core of any geostatistical data integration methodology is a well-designed algorithm. Yet, despite their obvious differences, all these algorithms share many commonalities on which to build a geostatistics programming library; otherwise the resulting library would be poorly reusable and difficult to expand. Building on this observation, we design a comprehensive, yet flexible and easily reusable library of geostatistics algorithms in C++. The recent advent of the generic programming paradigm allows us to express the commonalities of the geostatistical algorithms elegantly in computer code. Generic programming, also referred to as "programming with concepts", provides a high level of abstraction without loss of efficiency. This last point is a major gain over object-oriented programming, which often trades efficiency for abstraction. It is not enough for a numerical library to be reusable; it also has to be fast. Because generic programming is "programming with concepts", the essential step in the library design is the careful identification and thorough definition of these concepts shared by most geostatistical algorithms. Building on these definitions, a generic and expandable code can be developed. To show the advantages of such a generic library, we use GsTL to build two sequential simulation programs working on two different types of grids—a surface with faults and an unstructured grid—without requiring any change to the GsTL code.

  5. A hybrid localization technique for patient tracking.

    PubMed

    Rodionov, Denis; Kolev, George; Bushminkin, Kirill

    2013-01-01

    Nowadays numerous technologies are employed for tracking patients and assets in hospitals or nursing homes. Each of them has advantages and drawbacks. For example, WiFi localization has relatively good accuracy but cannot be used in case of power outage or in areas with poor WiFi coverage. Magnetometer positioning and cellular networks do not have such problems, but they are not as accurate as localization with WiFi. This paper describes a technique that simultaneously employs different localization technologies to enhance the stability and average accuracy of localization. The proposed algorithm is based on a fingerprinting method paired with data fusion and prediction algorithms for estimating the object location. The core idea of the algorithm is technology fusion using error estimation methods. To test the accuracy and performance of the algorithm, a simulation environment has been implemented. Significant accuracy improvement was shown in practical scenarios.

  6. A space-efficient algorithm for local similarities.

    PubMed

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
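
    The space saving comes from the fact that each dynamic-programming row depends only on the previous one, so the best local-similarity score needs just two rows. A sketch (scores only; recovering the alignment itself, as the paper does, needs a divide-and-conquer pass):

    def local_similarity_score(a, b, match=1, mismatch=-1, gap=-2):
        """Smith-Waterman best local-alignment score in linear space.

        Keeps only two DP rows, so memory is proportional to len(b)
        rather than len(a) * len(b): the idea behind space-efficient
        local-similarity methods such as the one described above.
        """
        prev = [0] * (len(b) + 1)
        best = 0
        for i in range(1, len(a) + 1):
            curr = [0] * (len(b) + 1)
            for j in range(1, len(b) + 1):
                score = match if a[i - 1] == b[j - 1] else mismatch
                curr[j] = max(0,
                              prev[j - 1] + score,  # (mis)match
                              prev[j] + gap,        # gap in b
                              curr[j - 1] + gap)    # gap in a
                best = max(best, curr[j])
            prev = curr
        return best

    print(local_similarity_score("ACGGTAG", "CCGGTAT"))  # CGGTA scores 5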

  7. Virtual target tracking (VTT) as applied to mobile satellite communication networks

    NASA Astrophysics Data System (ADS)

    Amoozegar, Farid

    1999-08-01

    Traditionally, target tracking has been used for aerospace applications, such as tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand more efficient algorithms, many complex techniques have already been proposed in the literature, which primarily cover the defense applications of tracking methods. On the other hand, the rapid growth of Global Communication Systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing can expect a huge market for Cellular Communication and Tracking Devices (CCTD), which will track networked devices at the cellular level. The objective of this paper is to introduce a new concept, i.e., Virtual Target Tracking (VTT), for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. The paper also discusses how Virtual Target Tracking would bring more diversity to target tracking research.

  8. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    PubMed

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-10-01

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information that is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches of various sizes. In the experiments, the segmentation accuracies of the methods used improved as the window sizes increased, owing to the added local spatial and contextual information. When we compared the effects of training sample size and the influence of window size, the results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.

  9. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets

    PubMed Central

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-01-01

    Abstract. Liver segmentation continues to remain a major challenge, largely due to the liver's complex relationship with surrounding anatomical structures (stomach, kidney, and heart), high noise level and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into the surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and the medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as w, Δt, z, α, μ, α1, and α2, play important roles, so their values are selected precisely. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method. PMID:26158101

  10. A generalized global alignment algorithm.

    PubMed

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.

  11. NPLOT: an Interactive Plotting Program for NASTRAN Finite Element Models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.; Mcentire, K. J.

    1985-01-01

    The NPLOT (NASTRAN Plot) is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models. Developed at NASA's Goddard Space Flight Center, the program provides flexible element selection and grid point, ASET and SPC degree of freedom labelling. It is easy to use and provides a combination menu and command driven user interface. NPLOT also provides very fast hidden line and haloed line algorithms. The hidden line algorithm in NPLOT proved to be both very accurate and several times faster than other existing hidden line algorithms. A fast spatial bucket sort and horizon edge computation are used to achieve this high level of performance. The hidden line and the haloed line algorithms are the primary features that make NPLOT different from other plotting programs.

  12. On program restructuring, scheduling, and communication for parallel processor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polychronopoulos, Constantine D.

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.

  13. Simulation of land use change in the three gorges reservoir area based on CART-CA

    NASA Astrophysics Data System (ADS)

    Yuan, Min

    2018-05-01

    This study proposes a new method to simulate spatiotemporally complex, multiple land uses using a classification and regression tree (CART)-based CA model. In this model, we use the CART algorithm to calculate land class conversion probabilities, and combine them with a neighborhood factor and a random factor to extract cellular transformation rules. In the simulation of land-use dynamics in the Three Gorges reservoir area from 2000 to 2010, the overall Kappa coefficient is 0.8014 and the overall accuracy is 0.8821; the simulation results are satisfactory.
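
    A toy numpy sketch of one such CA update, with a fixed probability surface standing in for the CART model's output and a plain uniform random factor (the exact factor forms are assumptions here, since the abstract does not give them):

    import numpy as np

    def ca_step(grid, p_convert, seed=None):
        """One cellular-automaton update combining a per-cell conversion
        probability (in the paper, the output of a fitted CART model)
        with a neighborhood factor and a random factor.

        grid: 2-D int array, 1 = converted, 0 = not yet converted.
        p_convert: 2-D array of CART-style conversion probabilities.
        """
        rng = np.random.default_rng(seed)
        # Neighborhood factor: fraction of converted cells in the 3x3 window.
        padded = np.pad(grid, 1)
        h, w = grid.shape
        neigh = sum(padded[i:i + h, j:j + w]
                    for i in range(3) for j in range(3)) - grid
        neigh = neigh / 8.0
        # Random factor: uniform noise (an assumption, not the paper's form).
        rand = rng.random(grid.shape)
        return np.where((grid == 0) & (p_convert * neigh > rand), 1, grid)

    grid = np.zeros((5, 5), int)
    grid[2, 2] = 1                       # a single converted seed cell
    p = np.full(grid.shape, 0.9)         # stand-in CART probabilities
    print(ca_step(grid, p, seed=1))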

  14. Crowd evacuation model based on bacterial foraging algorithm

    NASA Astrophysics Data System (ADS)

    Shibiao, Mu; Zhijun, Chen

    To understand crowd evacuation, a model based on a bacterial foraging algorithm (BFA) is proposed in this paper. Considering dynamic and static factors, the probability of pedestrian movement is established using cellular automata. In addition, given walking and queue times, a target optimization function is built. At the same time, a BFA is used to optimize the objective function. Finally, through real and simulation experiments, the relationship between the parameters of evacuation time, exit width, pedestrian density, and average evacuation speed is analyzed. The results show that the model can effectively describe a real evacuation.

  15. Negative Difference Resistance and Its Application to Construct Boolean Logic Circuits

    NASA Astrophysics Data System (ADS)

    Nikodem, Maciej; Bawiec, Marek A.; Surmacz, Tomasz R.

    Electronic circuits based on nanodevices and quantum effects are the future of logic circuit design. Today's technology allows constructing resonant tunneling diodes, quantum cellular automata and nanowires/nanoribbons that are the elementary components of threshold gates. However, synthesizing a threshold circuit for an arbitrary logic function is still a challenging task for which no efficient algorithms exist. This paper focuses on Generalised Threshold Gates (GTG), giving an overview of threshold circuit synthesis methods and presenting an algorithm that considerably simplifies the task in the case of GTG circuits.

  16. Digital image analysis agrees with visual estimates of adult bone marrow trephine biopsy cellularity.

    PubMed

    Hagiya, A S; Etman, A; Siddiqi, I N; Cen, S; Matcuk, G R; Brynes, R K; Salama, M E

    2018-04-01

    Evaluation of cellularity is an essential component of bone marrow trephine biopsy examination. The standard practice is to report the results as visual estimates (VE). Digital image analysis (DIA) offers the promise of more objective measurements of cellularity. Adult bone marrow trephine biopsy sections were assessed for cellularity by VE. Sections were scanned using an Aperio AT2 Scanscope and analyzed using a Cytonuclear (version 1.4) algorithm in HALO software. Intraclass correlation (ICC) was used to assess relatedness between VE and DIA, and between MRI and DIA for a separate subset of patients. Trephine biopsy sections from a subset of patients with bone marrow biopsies uninvolved by malignancy were assessed for age-related changes. Interobserver VE agreement was good to excellent. The ICC value was 0.81 for VE and DIA, and 0.50 for MRI and DIA. Linearity studies showed no statistically significant trend for age-related changes in cellularity in our cohort (r = -.29, P = .06). Agreement was good between VE and DIA. It may be possible to use DIA or VE to measure cellularity in the appropriate clinical scenario. The limited sample size precludes similar determinations for MRI calculations. Further studies examining healthy donors are necessary before making definitive conclusions regarding age and cellularity. © 2017 John Wiley & Sons Ltd.

  17. Cellular automata with object-oriented features for parallel molecular network modeling.

    PubMed

    Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan

    2005-06-01

    Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond the large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include object structure to define molecular programs within a cell, floating-point data type and mathematical functions to perform quantitative computation, message passing capability to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at cell, tissue, organ, or even organism levels, the program will help improve our understanding of how complex and dynamic biological activities are generated and controlled by parallel functioning of molecular networks.

  18. Fast Fourier Transform algorithm design and tradeoffs

    NASA Technical Reports Server (NTRS)

    Kamin, Ray A., III; Adams, George B., III

    1988-01-01

    The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. The Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.

  19. A Partitioning and Bounded Variable Algorithm for Linear Programming

    ERIC Educational Resources Information Center

    Sheskin, Theodore J.

    2006-01-01

    An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…

  20. User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.

    1988-01-01

    Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.

  1. Algorithm Building and Learning Programming Languages Using a New Educational Paradigm

    NASA Astrophysics Data System (ADS)

    Jain, Anshul K.; Singhal, Manik; Gupta, Manu Sheel

    2011-08-01

    This research paper presents a new concept of using a single tool to associate the syntax of various programming languages, algorithms and basic coding techniques. A simple framework has been programmed in Python that helps students learn skills to develop algorithms, and implement them in various programming languages. The tool provides an innovative and unified graphical user interface for the development of multimedia objects, educational games and applications. It also aids collaborative learning amongst students and teachers through an integrated mechanism based on Remote Procedure Calls. The paper also elucidates an innovative method for code generation to enable students to learn the basics of programming languages using drag-n-drop methods for image objects.

  2. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    PubMed Central

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches and image processing techniques. Cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
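
    As an illustration of scoring alignment from an orientation distribution, here is a crude gradient-histogram version in numpy; it is a simplified stand-in, not the BEAS pipeline's filtering and thresholding steps:

    import numpy as np

    def alignment_score(image):
        """Crude cellular-alignment score: a gradient-weighted orientation
        histogram whose peakedness reflects how aligned structures are."""
        gy, gx = np.gradient(image.astype(float))
        theta = np.arctan2(gy, gx) % np.pi           # orientations in [0, pi)
        weight = np.hypot(gx, gy)
        hist, _ = np.histogram(theta, bins=36, range=(0, np.pi), weights=weight)
        hist = hist / hist.sum()
        return hist.max()   # 1/36 for isotropic content, up to 1 for aligned

    # Stripes (aligned) versus noise (unaligned):
    stripes = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
    noise = np.random.default_rng(0).random((64, 64))
    print(alignment_score(stripes), alignment_score(noise))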

  3. Development and validation of an online interactive, multimedia wound care algorithms program.

    PubMed

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage correct, partially correct, or incorrect algorithm and dressing choices and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm and 87% of dressing choices were fully correct even though some programming design issues were identified. Online study results were consistently better than previously conducted comparable paper-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide patient care.

  4. Dynamic Simulation of 1D Cellular Automata in the Active aTAM.

    PubMed

    Jonoska, Nataša; Karpenko, Daria; Seki, Shinnosuke

    2015-07-01

    The Active aTAM is a tile based model for self-assembly where tiles are able to transfer signals and change identities according to the signals received. We extend Active aTAM to include deactivation signals and thereby allow detachment of tiles. We show that the model allows a dynamic simulation of cellular automata with assemblies that do not record the entire computational history but only the current updates of the states, and thus provide a way for (a) algorithmic dynamical structural changes in the assembly and (b) reusable space in self-assembly. The simulation is such that at a given location the sequence of tiles that attach and detach corresponds precisely to the sequence of states the synchronous cellular automaton generates at that location.

  5. Dynamic Simulation of 1D Cellular Automata in the Active aTAM

    PubMed Central

    Jonoska, Nataša; Karpenko, Daria; Seki, Shinnosuke

    2016-01-01

    The Active aTAM is a tile based model for self-assembly where tiles are able to transfer signals and change identities according to the signals received. We extend Active aTAM to include deactivation signals and thereby allow detachment of tiles. We show that the model allows a dynamic simulation of cellular automata with assemblies that do not record the entire computational history but only the current updates of the states, and thus provide a way for (a) algorithmic dynamical structural changes in the assembly and (b) reusable space in self-assembly. The simulation is such that at a given location the sequence of tiles that attach and detach corresponds precisely to the sequence of states the synchronous cellular automaton generates at that location. PMID:27789918
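
    The "current updates only" behavior corresponds to simulating an elementary cellular automaton in place, keeping a single configuration rather than the whole space-time history; a minimal sketch with rule 110 on a periodic row:

    def step(cells, rule=110):
        """One synchronous update of an elementary 1-D cellular automaton.

        Only the current configuration is kept: each site's state is
        overwritten rather than recording the computational history,
        mirroring the reusable-space idea above.
        """
        n = len(cells)
        table = [(rule >> k) & 1 for k in range(8)]
        return [table[(cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n]]
                for i in range(n)]

    cells = [0] * 16
    cells[8] = 1
    for _ in range(5):
        cells = step(cells)   # in-place style: previous rows are discarded
    print(cells)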

  6. Magnetic resonance imaging diffusion tensor tractography: evaluation of anatomic accuracy of different fiber tracking software packages.

    PubMed

    Feigl, Guenther C; Hiergeist, Wolfgang; Fellner, Claudia; Schebesch, Karl-Michael M; Doenitz, Christian; Finkenzeller, Thomas; Brawanski, Alexander; Schlaier, Juergen

    2014-01-01

    Diffusion tensor imaging (DTI)-based tractography has become an integral part of preoperative diagnostic imaging in many neurosurgical centers, and other nonsurgical specialties depend increasingly on DTI tractography as a diagnostic tool. The aim of this study was to analyze the anatomic accuracy of visualized white matter fiber pathways using different, readily available DTI tractography software programs. Magnetic resonance imaging scans of the head of 20 healthy volunteers were acquired using a Siemens Symphony TIM 1.5T scanner and a 12-channel head array coil. The standard settings of the scans in this study were 12 diffusion directions and 5-mm slices. The fornices were chosen as an anatomic structure for the comparative fiber tracking. Identical data sets were loaded into nine different fiber tracking packages that used different algorithms. The nine software packages and algorithms used were NeuroQLab (modified tensor deflection [TEND] algorithm), the Sörensen DTI task card (modified streamline tracking technique algorithm), the Siemens DTI module (modified fourth-order Runge-Kutta algorithm), six different software packages from Trackvis (interpolated streamline algorithm, modified FACT algorithm, second-order Runge-Kutta algorithm, Q-ball [FACT algorithm], tensorline algorithm, Q-ball [second-order Runge-Kutta algorithm]), DTI Query (modified streamline tracking technique algorithm), Medinria (modified TEND algorithm), Brainvoyager (modified TEND algorithm), DTI Studio (modified FACT algorithm), and the BrainLab DTI module (modified Runge-Kutta algorithm). A neuroradiologist, a magnetic resonance imaging physicist, and a neurosurgeon served as examiners. They were double-blinded with respect to the test subject and the fiber tracking software used in the presented images. Each examiner evaluated 301 images. The examiners were instructed to evaluate screenshots from the different programs based on two main criteria: (i) anatomic accuracy of the course of the displayed fibers and (ii) number of fibers displayed outside the anatomic boundaries. The mean overall grade for anatomic accuracy was 2.2 (range, 1.1-3.6) with a standard deviation (SD) of 0.9. The mean overall grade for incorrectly displayed fibers was 2.5 (range, 1.6-3.5) with a SD of 0.6. The mean grade of the overall program ranking was 2.3 with a SD of 0.6. The overall mean grade of the program ranked number one (NeuroQLab) was 1.7 (range, 1.5-2.8). The mean overall grade of the program ranked last (BrainLab iPlan Cranial 2.6 DTI Module) was 3.3 (range, 1.7-4). The difference between the mean grades of these two programs was statistically highly significant (P < 0.0001). There was no statistically significant difference between the programs ranked 1-3: NeuroQLab, the Sörensen DTI Task Card, and the Siemens DTI module. The results of this study show that there is a statistically significant difference in the anatomic accuracy of the tested DTI fiber tracking programs. Whereas incorrectly displayed fibers could lead to wrong conclusions in the neurosciences, which rely heavily on this noninvasive imaging technique, in neurosurgery they could lead to surgical decisions potentially harmful for the patient if used without intraoperative cortical stimulation. DTI fiber tracking presents a valuable noninvasive preoperative imaging tool, which requires further validation after important standardization of the currently available acquisition and processing techniques. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. 77 FR 63840 - Cellular, Tissue and Gene Therapies Advisory Committee; Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-17

    ...] Cellular, Tissue and Gene Therapies Advisory Committee; Notice of Meeting AGENCY: Food and Drug... meeting will be closed to the public. Name of Committee: Cellular, Tissue and Gene Therapies Advisory... to hear updates of research programs in the Gene Transfer and Immunogenicity Branch, Office of...

  8. Programming Deep Brain Stimulation for Tremor and Dystonia: The Toronto Western Hospital Algorithms.

    PubMed

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Munhoz, Renato Puppi; Fasano, Alfonso

    2016-01-01

    Deep brain stimulation (DBS) is an effective treatment for essential tremor (ET) and dystonia. After surgery, a number of extensive programming sessions are performed, relying mainly on the neurologist's personal experience, as no programming guidelines have been provided so far, with the exception of recommendations from groups of experts. Finally, less information is available for the management of DBS in ET and dystonia compared with Parkinson's disease. Our aim is to review the literature on initial and follow-up DBS programming procedures for ET and dystonia and integrate the results with our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We conducted a literature search of PubMed from inception to July 2014 with the keywords "balance", "bradykinesia", "deep brain stimulation", "dysarthria", "dystonia", "gait disturbances", "initial programming", "loss of benefit", "micrographia", "speech", "speech difficulties" and "tremor". Seventy-six papers were considered for this review. Based on the literature review and our experience at TWH, we refined three algorithms for the management of ET, covering: (1) initial programming, (2) management of balance and speech issues and (3) loss of stimulation benefit. We also developed two algorithms for the management of dystonia, covering: (1) initial programming and (2) management of stimulation-induced hypokinesia (shuffling gait, micrographia and speech impairment). In total, we propose five algorithms tailored to an individualized approach to managing ET and dystonia patients with DBS. We encourage the application of these algorithms in established as well as new DBS centers to test their clinical usefulness in supplementing current standards of care. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Competitive evaluation of failure detection algorithms for strapdown redundant inertial instruments

    NASA Technical Reports Server (NTRS)

    Wilcox, J. C.

    1973-01-01

    Algorithms for failure detection, isolation, and correction of redundant inertial instruments in the strapdown dodecahedron configuration are competitively evaluated in a digital computer simulation that subjects them to identical environments. Their performance is compared in terms of orientation and inertial velocity errors and in terms of missed and false alarms. The algorithms appear in the simulation program in modular form, so that they may be readily extracted for use elsewhere. The simulation program and its inputs and outputs are described. The algorithms, along with an eighth algorithm that was not simulated, are also compared analytically to show the relationships among them.

  10. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

    The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
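
    Since the command profile is a B-spline, the NLP variables are its coefficients; a sketch of evaluating such a profile with scipy (the knots, coefficients, and pitch interpretation are all made up for illustration):

    import numpy as np
    from scipy.interpolate import BSpline

    # A pitch-command profile parameterized as a cubic B-spline, the
    # functional form the NLP algorithm adjusts during optimization.
    knots = np.array([0, 0, 0, 0, 30, 60, 90, 120, 120, 120, 120])  # seconds
    coeffs = np.array([0.0, 5.0, 12.0, 18.0, 20.0, 21.0, 21.0])     # degrees
    pitch = BSpline(knots, coeffs, k=3)

    # The optimizer would perturb coeffs; here we just evaluate the profile.
    print(pitch(np.array([0.0, 45.0, 120.0])))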

  11. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    ERIC Educational Resources Information Center

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  12. Parallelization of Nullspace Algorithm for the computation of metabolic pathways

    PubMed Central

    Jevremović, Dimitrije; Trinh, Cong T.; Srienc, Friedrich; Sosa, Carlos P.; Boley, Daniel

    2011-01-01

    Elementary mode analysis is a useful metabolic pathway analysis tool for understanding and analyzing cellular metabolism, since elementary modes can represent metabolic pathways with unique and minimal sets of enzyme-catalyzed reactions of a metabolic network under steady state conditions. However, computation of the elementary modes of a genome-scale metabolic network with 100–1000 reactions is very expensive and sometimes not feasible with the commonly used serial Nullspace Algorithm. In this work, we develop a distributed memory parallelization of the Nullspace Algorithm to handle efficiently the computation of the elementary modes of a large metabolic network. We give an implementation in the C++ language with the support of MPI library functions for the parallel communication. Our proposed algorithm is accompanied by an analysis of the complexity and identification of major bottlenecks during computation of all possible pathways of a large metabolic network. The algorithm includes methods to achieve load balancing among the compute nodes and specific communication patterns to reduce the communication overhead and improve efficiency. PMID:22058581
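
    The serial starting point is the null-space basis itself, which for a toy stoichiometric matrix can be obtained directly; elementary-mode enumeration then proceeds from such a basis, and that combinatorial part is what the paper parallelizes:

    import numpy as np
    from scipy.linalg import null_space

    # Toy stoichiometric matrix S (metabolites x reactions). At steady state
    # S @ v = 0, so admissible flux distributions v lie in the null space.
    S = np.array([[1.0, -1.0,  0.0,  0.0],   # A: made by r1, used by r2
                  [0.0,  1.0, -1.0, -1.0]])  # B: made by r2, used by r3, r4

    basis = null_space(S)            # orthonormal null-space basis
    print(basis.shape)               # (4, 2): two degrees of freedom
    print(np.allclose(S @ basis, 0)) # True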

  13. Balancing Uplink and Downlink under Asymmetric Traffic Environments Using Distributed Receive Antennas

    NASA Astrophysics Data System (ADS)

    Sohn, Illsoo; Lee, Byong Ok; Lee, Kwang Bok

    Recently, multimedia services have been increasing with the widespread use of various wireless applications such as web browsers, real-time video, and interactive games, which results in traffic asymmetry between the uplink and downlink. Hence, time division duplex (TDD) systems, which provide advantages in efficient bandwidth utilization under asymmetric traffic environments, have become one of the most important issues in future mobile cellular systems. It is known that two types of intercell interference, referred to as crossed-slot interference, additionally arise in TDD systems; the performances of the uplink and downlink transmissions are degraded by BS-to-BS crossed-slot interference and MS-to-MS crossed-slot interference, respectively. The resulting performance imbalance between the uplink and downlink makes network deployment severely inefficient. Previous works have proposed intelligent time slot allocation algorithms to mitigate the crossed-slot interference problem. However, they require centralized control, which causes large signaling overhead in the network. In this paper, we propose to change the shape of the cellular structure itself. The conventional cellular structure is easily transformed into the proposed cellular structure with distributed receive antennas (DRAs). We set up a statistical Markov chain traffic model and analyze the bit error performances of the conventional and proposed cellular structures under asymmetric traffic environments. Numerical results show that the uplink and downlink performances of the proposed cellular structure become balanced with the proper number of DRAs, and the proposed cellular structure is thus notably cost-effective in network deployment compared to the conventional one. As a result, extending the conventional cellular structure with DRAs is a remarkably cost-effective solution for supporting asymmetric traffic environments in future mobile cellular systems.

  14. Cellular Automata

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard

    1991-08-01

    Cellular automata, dynamic systems in which space and time are discrete, are yielding interesting applications in both the physical and natural sciences. The thirty-four contributions in this book cover many aspects of contemporary studies on cellular automata and include reviews, research reports, and guides to recent literature and available software. Chapters cover mathematical analysis; the structure of the space of cellular automata; learning rules with specified properties; cellular automata in biology, physics, chemistry, and computation theory; and generalizations of cellular automata in neural nets, Boolean nets, and coupled map lattices. Current work on cellular automata may be viewed as revolving around two central and closely related problems: the forward problem and the inverse problem. The forward problem concerns the description of properties of given cellular automata. Properties considered include reversibility, invariants, criticality, fractal dimension, and computational power. The role of cellular automata in computation theory is seen as a particularly exciting venue for exploring parallel computers as theoretical and practical tools in mathematical physics. The inverse problem, an area of study gaining prominence particularly in the natural sciences, involves designing rules that possess specified properties or perform specified tasks. A long-term goal is to develop a set of techniques that can find a rule or set of rules that can reproduce quantitative observations of a physical system. Studies of the inverse problem take up the organization and structure of the set of automata, in particular the parameterization of the space of cellular automata. Optimization and learning techniques, like the genetic algorithm and adaptive stochastic cellular automata, are applied to find cellular automaton rules that model such physical phenomena as crystal growth or perform such adaptive-learning tasks as balancing an inverted pole. Howard Gutowitz is Collaborateur in the Service de Physique du Solide et Résonance Magnétique, Commissariat à l'Energie Atomique, Saclay, France.

  15. Algorithmic support for graphic images rotation in avionics

    NASA Astrophysics Data System (ADS)

    Kniga, E. V.; Gurjanov, A. V.; Shukalov, A. V.; Zharinov, I. O.

    2018-05-01

    The design of avionics devices faces the problem of developing and evaluating algorithms for rotating the images shown on the on-board display. Image rotation algorithms are part of the software of avionics devices, which belong to the on-board computers of airplanes and helicopters. The images to be rotated contain flight location map fragments. Rotation in the display system can be performed either in software or in hardware; the software option is slower than the hardware one. A comparison of several rotation algorithms on test images is presented, with the hardware variants realized in the Altera Quartus II programming environment.
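
    In software, the rotation itself is a standard resampling operation; a sketch with scipy on a synthetic image (the on-board algorithms and Quartus II hardware variants are not reproduced here):

    import numpy as np
    from scipy.ndimage import rotate

    # Toy "map fragment": a gradient image standing in for display data.
    img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))

    # Software rotation by an arbitrary angle; reshape=False keeps the
    # frame size fixed, as a cockpit display buffer would be.
    turned = rotate(img, angle=30.0, reshape=False, order=1, mode="constant")
    print(turned.shape)  # (64, 64)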

  16. A Prize-Collecting Steiner Tree Approach for Transduction Network Inference

    NASA Astrophysics Data System (ADS)

    Bailly-Bechet, Marc; Braunstein, Alfredo; Zecchina, Riccardo

    Inside the cell, information from the environment is mainly propagated via signaling pathways, which form a transduction network. Here we propose a new algorithm to infer transduction networks from heterogeneous data, using both the protein interaction network and expression datasets. We formulate the inference problem as an optimization task, and develop a message-passing, probabilistic and distributed formalism to solve it. We apply our algorithm to the pheromone response in the baker's yeast S. cerevisiae. We are able to find the backbone of the known structure of the MAPK cascade of pheromone response, validating our algorithm. More importantly, we make biological predictions about some proteins whose role could be at the interface between pheromone response and other cellular functions.

  17. A parallelization scheme of the periodic signals tracking algorithm for isochronous mass spectrometry on GPUs

    NASA Astrophysics Data System (ADS)

    Chen, R. J.; Wang, M.; Yan, X. L.; Yang, Q.; Lam, Y. H.; Yang, L.; Zhang, Y. H.

    2017-12-01

    The periodic signals tracking algorithm has been used to determine the revolution times of ions stored in storage rings in isochronous mass spectrometry (IMS) experiments. Performing real-time data analysis with this algorithm in IMS experiments has been a challenge. In this paper, a parallelization scheme of the periodic signals tracking algorithm is introduced and a new program is developed. The computing time of data analysis is reduced by factors of ∼71 and ∼346 by running the new program on Tesla C1060 and Tesla K20c GPUs, respectively, compared to running the old program on a Xeon E5540 CPU. With the new program on the Tesla K20c GPU, we succeed in performing real-time data analysis for the IMS experiments.

  18. A novel method for identifying disease associated protein complexes based on functional similarity protein complex networks.

    PubMed

    Le, Duc-Hau

    2015-01-01

    Protein complexes formed by non-covalent interactions among proteins play important roles in cellular functions. Computational and purification methods have been used to identify many protein complexes and their cellular functions. However, their roles in causing disease have not yet been well explored. Only a few studies have addressed the identification of disease-associated protein complexes, and they mostly rely on complicated heterogeneous networks built from an out-of-date database of phenotype similarity collected from the literature. In addition, they apply only to diseases for which tissue-specific data exist. In this study, we propose a method to identify novel disease-protein complex associations. First, we introduce a framework to construct functional similarity protein complex networks, in which two protein complexes are functionally connected by shared protein elements, by shared annotating GO terms, or by protein interactions between elements of the two complexes. Second, we propose a simple but effective neighborhood-based algorithm, which yields a local similarity measure, to rank disease candidate protein complexes. Comparing the predictive performance of our proposed algorithm with that of two state-of-the-art network propagation algorithms, including one used in our previous study, we found that it performed statistically significantly better than both for all the constructed functional similarity protein complex networks, and that it ran about 32 times faster. Moreover, our proposed method always achieved high performance in terms of AUC values irrespective of how the functional similarity protein complex networks were constructed and which algorithms were used. The performance of our method was also higher than that reported for some existing methods based on complicated heterogeneous networks. Finally, we tested our method on prostate cancer and selected the top 100 highly ranked candidate protein complexes. Interestingly, 69 of them were supported by evidence, since at least one of their protein elements is known to be associated with prostate cancer. Our proposed method, including the framework to construct functional similarity protein complex networks and the neighborhood-based algorithm on these networks, could be used for the identification of novel disease-protein complex associations.
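
    A neighborhood-based ranking of the kind described can be sketched as follows, assuming a weighted functional-similarity network and a set of complexes already known to be disease-associated; each candidate is scored by its summed similarity to known positives among its direct neighbours (the paper's exact local measure may differ, and the network here is a toy):

      # Score each candidate complex by its similarity-weighted links
      # to complexes already known to be disease-associated.
      def rank_candidates(similarity, known_disease):
          scores = {}
          for c, neighbors in similarity.items():
              if c in known_disease:
                  continue
              scores[c] = sum(w for n, w in neighbors.items() if n in known_disease)
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      similarity = {
          "C1": {"C2": 0.8, "C3": 0.1},
          "C2": {"C1": 0.8, "C4": 0.5},
          "C3": {"C1": 0.1, "C4": 0.9},
          "C4": {"C2": 0.5, "C3": 0.9},
      }
      print(rank_candidates(similarity, known_disease={"C2", "C3"}))
      # C4 has the strongest ties to the known complexes, so it ranks first.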

  19. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  20. MT's algorithm: A new algorithm to search for the optimum set of modulation indices for simultaneous range, command, and telemetry

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien Manh

    1989-01-01

    MT's algorithm was developed as an aid in the design of space telecommunications systems that use simultaneous range/command/telemetry operations. The algorithm selects modulation indices for: (1) suppression of undesired signals, to achieve desired link performance margins and/or to allow for a specified performance degradation in the data channel (command/telemetry) due to the presence of undesired signals (interferers); and (2) optimum power division between the carrier, the range, and the data channel. A software program using this algorithm was developed for use with MathCAD. This program, called the MT program, computes optimum modulation indices for all cases recommended by the Consultative Committee for Space Data Systems (CCSDS), with emphasis on the square-wave NASA/JPL ranging system.
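
    For phase modulation with square-wave subcarriers, the case emphasized here, the fractional power in the residual carrier and in each channel has a simple closed form, and a constrained optimum index pair can be found by brute-force search. The sketch below rests on that textbook power-division model and on made-up link constraints; it is not the MT program itself:

      import math

      def power_split(m_rng, m_tlm):
          """Fractional power division for square-wave subcarrier phase modulation."""
          c, r = math.cos(m_rng) ** 2, math.sin(m_rng) ** 2
          t, ct = math.sin(m_tlm) ** 2, math.cos(m_tlm) ** 2
          return {"carrier": c * ct, "ranging": r * ct,
                  "telemetry": c * t, "intermod": r * t}

      # Grid search: meet minimum channel fractions, minimize intermodulation loss.
      best = None
      for i in range(1, 90):
          for j in range(1, 90):
              p = power_split(math.radians(i), math.radians(j))
              if p["carrier"] >= 0.2 and p["ranging"] >= 0.1 and p["telemetry"] >= 0.3:
                  if best is None or p["intermod"] < best[0]:
                      best = (p["intermod"], i, j)
      print(best)   # (loss, ranging index in deg, telemetry index in deg)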

  1. Comparison of the three optical platforms for measurement of cellular respiration.

    PubMed

    Kondrashina, Alina V; Ogurtsov, Vladimir I; Papkovsky, Dmitri B

    2015-01-01

    We compared three optical platforms for measurement of cellular respiration: absolute oxygen consumption rates (OCRs) in hermetically sealed microcuvettes, relative OCRs measured in a 96-well plate with an oil seal, and steady-state oxygenation of cells in an open 96-well plate. Using a mouse embryonic fibroblast cell line, the phosphorescent intracellular O2 probe MitoXpress-Intra, and a time-resolved fluorescence reader, we determined algorithms for conversion of relative OCRs and cell oxygenation into absolute OCRs, thereby allowing simple high-throughput measurement of absolute OCR values. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. The approach achieves high computational efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized either explicitly, using parallel computation, in the case of cellular programming, or implicitly, by taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures, in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
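
    The "intrinsic parallelism of bitwise operators" can be illustrated directly: pack one fitness case per bit, and a single bitwise expression then evaluates a candidate classifier on all cases at once. A minimal sketch in that spirit (the bit patterns and the candidate expression are arbitrary examples):

      # Pack 8 fitness cases per variable: bit k of x1 holds input 1 of case k.
      x1 = 0b10110100
      x2 = 0b01110010
      target = 0b11000110

      def evaluate(a, b):
          # Candidate classifier: one bitwise expression scores all cases at once.
          return (a ^ b) | (a & b)    # equivalent to a | b, as an example tree

      hits = bin(~(evaluate(x1, x2) ^ target) & 0xFF).count("1")
      print(f"{hits}/8 fitness cases correct")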

  3. Simulations For Investigating the Contrast Mechanism of Biological Cells with High Frequency Scanning Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Juntarapaso, Yada

    Scanning acoustic microscopy (SAM) is one of the most powerful techniques for nondestructive evaluation and a promising tool for characterizing the elastic properties of biological tissues and cells. Exploring single cells is important because there is a connection between single-cell biomechanics and human cancer. SAM has been widely adopted for acoustical cellular and tissue imaging, including measurements of the mechanical and elastic properties of biological specimens. It offers superb advantages: it is non-invasive, it can measure mechanical properties of biological cells or tissues, and fixation or chemical staining is not necessary. The first objective of this research is to develop a program for simulating the images and contrast mechanism obtained by high-frequency SAM. Computer simulation algorithms based on MATLAB were built for simulating the images and contrast mechanisms. The mechanical properties of HeLa and MCF-7 cells were computed from measurements of the output signal amplitude as a function of distance from the focal plane of the acoustic lens, known as V(z). Algorithms for simulating V(z) responses involved the calculation of the reflectance function and were created based on ray theory and wave theory. The second objective is to design transducer arrays for SAM. Theoretical simulations of high-frequency ultrasound array designs, based on the Field II program, were performed to enhance image resolution and volumetric imaging capabilities. Phased-array beam forming and dynamic apodization and focusing were employed in the simulations. The new transducer array design will improve the performance of SAM through electronic scanning and potentially provide 4-D images of the specimen.
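
    One common form of the V(z) integral is V(z) = ∫₀^α P²(θ) R(θ) exp(−2ikz cos θ) sin θ cos θ dθ, where P is the lens pupil function, R the reflectance function of the specimen, k the wavenumber in the coupling fluid and α the lens half-aperture. A crude numerical sketch under idealized assumptions (unit pupil, perfect reflector; real simulations substitute the full layered-medium reflectance) follows:

      import numpy as np

      def v_of_z(z, freq=1.0e9, c_water=1500.0, alpha=np.radians(50), n=2000):
          """|V(z)| from the pupil-function integral, unit pupil and reflectance."""
          k = 2 * np.pi * freq / c_water       # wavenumber in the coupling fluid
          theta = np.linspace(0.0, alpha, n)
          pupil2 = np.ones_like(theta)         # assumed: ideal lens, P(theta)^2 = 1
          refl = np.ones_like(theta)           # assumed: perfect reflector, R = 1
          integrand = (pupil2 * refl * np.exp(-2j * k * z * np.cos(theta))
                       * np.sin(theta) * np.cos(theta))
          return abs(np.trapz(integrand, theta))

      for z in np.linspace(-60e-6, 0.0, 7):    # defocus toward the lens, in metres
          print(f"z = {z * 1e6:6.1f} um   |V| = {v_of_z(z):.4f}")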

  4. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  5. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) and its improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and a function consistency replacement algorithm is given to integrate the local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  6. ASTEP user's guide and software documentation

    NASA Technical Reports Server (NTRS)

    Gliniewicz, A. S.; Lachowski, H. M.; Pace, W. H., Jr.; Salvato, P., Jr.

    1974-01-01

    The Algorithm Simulation Test and Evaluation Program (ASTEP) is a modular computer program developed for the purpose of testing and evaluating methods of processing remotely sensed multispectral scanner earth resources data. ASTEP is written in FORTRAN V on the UNIVAC 1110 under the EXEC 8 operating system and may be operated in either a batch or interactive mode. The program currently contains over one hundred subroutines consisting of data classification and display algorithms, statistical analysis algorithms, utility support routines, and a feature selection capability. The current program can accept data in LARSC1, LARSC2, ERTS, and Universal formats, and can output processed image or data tapes in Universal format.

  7. Artificial intelligence programming with LabVIEW: genetic algorithms for instrumentation control and optimization.

    PubMed

    Moore, J H

    1995-06-01

    A genetic algorithm for instrumentation control and optimization was developed using the LabVIEW graphical programming environment. The usefulness of this methodology for the optimization of a closed loop control instrument is demonstrated with minimal complexity, and the programming is presented in detail to facilitate its adaptation to other LabVIEW applications. Closed loop control instruments have a variety of applications in the biomedical sciences, including the regulation of physiological processes such as blood pressure. The program presented here should provide a useful starting point for those wishing to incorporate genetic algorithm approaches into LabVIEW-mediated optimization of closed loop control instruments.
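
    A minimal sketch of the genetic-algorithm loop described, written in Python rather than LabVIEW's graphical language: a population of candidate control parameters is scored against a setpoint, and the best candidates are recombined and mutated. The one-parameter "instrument" here is a stand-in for a real closed loop response:

      import random

      def fitness(gain, setpoint=5.0):
          # Toy surrogate for the instrument response: error of a one-step correction.
          response = gain * setpoint
          return -abs(setpoint - response)

      pop = [random.uniform(0.0, 2.0) for _ in range(20)]
      for generation in range(40):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]                        # truncation selection
          children = []
          for _ in range(10):
              a, b = random.sample(parents, 2)
              child = (a + b) / 2.0                 # arithmetic crossover
              child += random.gauss(0.0, 0.05)      # mutation
              children.append(child)
          pop = parents + children
      print(f"best gain = {max(pop, key=fitness):.3f}")   # converges toward 1.0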

  8. Multiscale simulations of anisotropic particles combining molecular dynamics and Green's function reaction dynamics

    NASA Astrophysics Data System (ADS)

    Vijaykumar, Adithya; Ouldridge, Thomas E.; ten Wolde, Pieter Rein; Bolhuis, Peter G.

    2017-03-01

    The modeling of complex reaction-diffusion processes in, for instance, cellular biochemical networks or self-assembling soft matter can be tremendously sped up by employing a multiscale algorithm which combines the mesoscopic Green's Function Reaction Dynamics (GFRD) method with explicit stochastic Brownian, Langevin, or deterministic molecular dynamics to treat reactants at the microscopic scale [A. Vijaykumar, P. G. Bolhuis, and P. R. ten Wolde, J. Chem. Phys. 143, 214102 (2015)]. Here we extend this multiscale MD-GFRD approach to include the orientational dynamics that is crucial to describe the anisotropic interactions often prevalent in biomolecular systems. We present the novel algorithm focusing on Brownian dynamics only, although the methodology is generic. We illustrate the novel algorithm using a simple patchy particle model. After validation of the algorithm, we discuss its performance. The rotational Brownian dynamics MD-GFRD multiscale method will open up the possibility for large scale simulations of protein signalling networks.

  9. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
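
    A minimal sketch of the iteration evaluated here, conjugate gradients with a diagonal (Jacobi) preconditioner; the convergence test below uses relative residual norms, a stand-in for the paper's relative differences between left- and right-hand sides:

      import numpy as np

      def pcg_diag(A, b, tol=1e-10, max_iter=1000):
          """Conjugate gradients with a diagonal (Jacobi) preconditioner."""
          M_inv = 1.0 / np.diag(A)          # preconditioner: inverse of diag(A)
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for k in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  return x, k + 1
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, max_iter

      A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
      b = np.array([1.0, 2.0])
      x, iters = pcg_diag(A, b)
      print(x, iters)                          # solves Ax = b in a few iterations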

  10. An intelligent allocation algorithm for parallel processing

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.

    1988-01-01

    The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimates of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication durations varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structure on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph; hence, the focus of allocation has been on varying token-communication durations rather than varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon granularity and the characteristics of the interprocessor communication network.

  11. Non-invasive quality evaluation of confluent cells by image-based orientation heterogeneity analysis.

    PubMed

    Sasaki, Kei; Sasaki, Hiroto; Takahashi, Atsuki; Kang, Siu; Yuasa, Tetsuya; Kato, Ryuji

    2016-02-01

    In recent years, cell and tissue therapy in regenerative medicine has advanced rapidly towards commercialization. However, conventional invasive cell quality assessment is incompatible with direct evaluation of the cells produced for such therapies, especially in the case of regenerative medicine products. Our group has demonstrated the potential of quantitative assessment of cell quality, using information obtained from cell images, for non-invasive real-time evaluation of regenerative medicine products. However, images of cells in the confluent state are often difficult to evaluate, because accurate recognition of cells is technically difficult and the morphological features of confluent cells are non-characteristic. To overcome these challenges, we developed a new image-processing algorithm, heterogeneity of orientation (H-Orient) processing, to describe the heterogeneous density of cells in the confluent state. In this algorithm, we introduced a Hessian calculation that converts pixel intensity data to orientation data and a statistical profiling calculation that evaluates the heterogeneity of orientations within an image, generating novel parameters that yield a quantitative profile of an image. Using such parameters, we tested the algorithm's performance in discriminating different qualities of cellular images with three types of clinically important cell quality check (QC) models: remaining lifespan check (QC1), manipulation error check (QC2), and differentiation potential check (QC3). Our results show that our orientation analysis algorithm could predict with high accuracy the outcomes of all types of cellular quality checks (>84% average accuracy with cross-validation). Copyright © 2015 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
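
    The H-Orient idea, converting pixel intensities to local orientations and then profiling how heterogeneous those orientations are, can be approximated with image gradients and a histogram-entropy statistic. This is a simplification: the published algorithm uses a Hessian-based orientation and several statistical profiles, and every threshold below is an assumption:

      import numpy as np

      def orientation_heterogeneity(img, bins=18):
          """Entropy of the local-orientation histogram (low = one dominant direction)."""
          gy, gx = np.gradient(img.astype(float))
          theta = np.arctan2(gy, gx)                # local orientation per pixel
          mag = np.hypot(gx, gy)
          strong = mag > np.percentile(mag, 75)     # keep pixels with clear structure
          hist, _ = np.histogram(theta[strong], bins=bins, range=(-np.pi, np.pi))
          p = hist / hist.sum()
          p = p[p > 0]
          return -(p * np.log2(p)).sum()            # Shannon entropy of orientations

      rng = np.random.default_rng(0)
      textured = np.tile(np.sin(np.linspace(0, 20, 64)), (64, 1))   # one orientation
      noisy = rng.random((64, 64))                                  # all orientations
      print(orientation_heterogeneity(textured), orientation_heterogeneity(noisy))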

  12. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    PubMed Central

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, even for especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the total of 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while avoiding the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104

  13. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm.

    PubMed

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, even for especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the total of 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while avoiding the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
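
    The dynamic-programming core of such a separation can be sketched as a minimum-cost top-to-bottom path through a cost image that is cheap where the two lungs nearly touch; the "guided" variant additionally restricts the search to a band around automatically selected start and end points. A minimal unguided sketch:

      import numpy as np

      def min_cost_vertical_path(cost):
          """Dynamic programming: cheapest 8-connected top-to-bottom path."""
          h, w = cost.shape
          acc = cost.astype(float).copy()
          for y in range(1, h):
              left = np.r_[np.inf, acc[y - 1, :-1]]
              right = np.r_[acc[y - 1, 1:], np.inf]
              acc[y] += np.minimum(acc[y - 1], np.minimum(left, right))
          # Backtrack from the cheapest endpoint on the bottom row.
          path = [int(np.argmin(acc[-1]))]
          for y in range(h - 2, -1, -1):
              x = path[-1]
              lo, hi = max(0, x - 1), min(w, x + 2)
              path.append(lo + int(np.argmin(acc[y, lo:hi])))
          return path[::-1]   # x-coordinate of the boundary in each row

      rng = np.random.default_rng(0)
      print(min_cost_vertical_path(rng.random((6, 8))))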

  14. Automatic Program Synthesis Reports.

    ERIC Educational Resources Information Center

    Biermann, A. W.; And Others

    Some of the major results and future goals of an automatic program synthesis project are described in the two papers that comprise this document. The first paper gives a detailed algorithm for synthesizing a computer program from a trace of its behavior. Since the algorithm involves a search, the length of time required to do the synthesis of…

  15. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparision with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a₀ + a₁t + a₂t²). The thinning programs are competitive in both execution
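
    The basic thinning algorithm can be sketched as follows: generate candidate events from a homogeneous Poisson process whose rate λ* dominates the intensity λ(t) everywhere, then keep each candidate with probability λ(t)/λ*. A minimal version for the quadratic-exponential intensity discussed here (coefficients chosen only for illustration):

      import math
      import random

      def thin_nhpp(rate, rate_max, t_end):
          """Thinning: simulate a non-homogeneous Poisson process on [0, t_end]."""
          t, events = 0.0, []
          while True:
              t += random.expovariate(rate_max)    # candidate from dominating HPP
              if t > t_end:
                  return events
              if random.random() < rate(t) / rate_max:
                  events.append(t)                 # accept with prob lambda(t)/lambda*

      a0, a1, a2 = 0.0, 0.4, -0.02
      rate = lambda t: math.exp(a0 + a1 * t + a2 * t * t)
      t_peak = -a1 / (2 * a2)                      # intensity peaks at t = 10
      events = thin_nhpp(rate, rate(t_peak), t_end=20.0)
      print(len(events), "events; first few:", [round(t, 2) for t in events[:5]])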

  16. Reducing the Volume of NASA Earth-Science Data

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre

    2010-01-01

    A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.

  17. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm.

    PubMed

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim

    2017-06-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
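
    For a discretely sampled field, the ACF itself is cheap to compute once the projection-plus-thickness data have been converted to a density estimate: by the Wiener-Khinchin theorem it is the inverse Fourier transform of the power spectrum. A minimal 2D sketch of that final step (the paper's method involves further corrections that are not modeled here):

      import numpy as np

      def autocorrelation(field):
          """Normalized ACF of a mean-subtracted field via Wiener-Khinchin."""
          f = field - field.mean()
          spec = np.fft.fftn(f)
          acf = np.fft.ifftn(np.abs(spec) ** 2).real
          acf /= acf.flat[0]               # normalize so zero lag equals 1
          return np.fft.fftshift(acf)      # move zero lag to the array centre

      rng = np.random.default_rng(1)
      noise = rng.normal(size=(128, 128))
      # Sum shifted copies to give the noise short-range correlations.
      field = noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)
      acf = autocorrelation(field)
      print(acf[64, 64], acf[64, 65])      # 1.0 at zero lag, positive at lag one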

  18. Fast and Adaptive Auto-focusing Microscope

    NASA Astrophysics Data System (ADS)

    Obara, Takeshi; Igarashi, Yasunobu; Hashimoto, Koichi

    Optical microscopes are widely used in biological and medical research. With a microscope, we can observe cellular movements, including those of intracellular ions and molecules tagged with fluorescent dyes, at high magnification. However, a freely motile cell easily escapes from the 3D field of view of a typical microscope. Therefore, we propose a novel auto-focusing algorithm and develop an auto-focusing and tracking microscope. The XYZ positions of a microscopic stage are feedback-controlled to focus on and track the cell automatically. A bright-field image is used to estimate the cellular position: XY centroids estimate the XY position of the tracked cell, and the Z position is estimated from the diffraction pattern around the cell membrane, a method known as Depth from Diffraction (DFDi). However, this method is not robust to individual differences between cells, because the diffraction pattern depends on each cell's shape. Therefore, in this study, we propose a real-time correction of DFDi that uses the 2D Laplacian of an intracellular area as a goodness-of-focus measure. To evaluate the performance of the developed algorithm and microscope, we auto-focus on and track a freely moving paramecium. In the experiment, the paramecium was auto-focused and kept inside the scope of the microscope for 45 s. The evaluated focal error is within 5 µm, while the length and thickness of the paramecium are about 200 µm and 50 µm, respectively.
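
    The "2D Laplacian of an intracellular area as a goodness of the focus" is a standard sharpness measure; a common variant scores an image by the variance of its Laplacian response, which drops as defocus blurs fine structure. A minimal sketch (SciPy assumed; in the instrument such a score would drive the Z-stage feedback):

      import numpy as np
      from scipy.ndimage import gaussian_filter, laplace

      def focus_score(img):
          """Variance of the Laplacian: high for sharp images, low for blurred ones."""
          return laplace(img.astype(float)).var()

      rng = np.random.default_rng(0)
      sharp = rng.random((64, 64))
      blurred = gaussian_filter(sharp, sigma=2.0)        # simulated defocus
      print(focus_score(sharp) > focus_score(blurred))   # True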

  19. High-Content, High-Throughput Screening for the Identification of Cytotoxic Compounds Based on Cell Morphology and Cell Proliferation Markers

    PubMed Central

    Martin, Heather L.; Adams, Matthew; Higgins, Julie; Bond, Jacquelyn; Morrison, Ewan E.; Bell, Sandra M.; Warriner, Stuart; Nelson, Adam; Tomlinson, Darren C.

    2014-01-01

    Toxicity is a major cause of failure in drug discovery and development, and whilst robust toxicological testing occurs, efficiency could be improved if compounds with cytotoxic characteristics were identified during primary compound screening. The use of high-content imaging in primary screening is becoming more widespread, and by utilising phenotypic approaches it should be possible to incorporate cytotoxicity counter-screens into primary screens. Here we present a novel phenotypic assay that can be used as a counter-screen to identify compounds with adverse cellular effects. This assay has been developed using U2OS cells, the PerkinElmer Operetta high-content/high-throughput imaging system and Columbus image analysis software. In Columbus, algorithms were devised to identify changes in nuclear morphology, cell shape and proliferation using DAPI, TOTO-3 and phosphohistone H3 staining, respectively. The algorithms were developed and tested on cells treated with doxorubicin, taxol and nocodazole. The assay was then used to screen a novel chemical library of over 300 compounds, rich in natural product-like molecules, 13.6% of which were identified as having adverse cellular effects. This assay provides a relatively cheap and rapid approach for identifying compounds with adverse cellular effects during screening assays, potentially reducing compound rejection due to toxicity in subsequent in vitro and in vivo assays. PMID:24505478

  20. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm

    PubMed Central

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A.; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S.; Subramanian, Hariharan; Dravid, Vinayak P.; Backman, Vadim

    2018-01-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass–density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass–density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass–density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass–density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass–density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes. PMID:28416035

  1. Optimized programming algorithm for cylindrical and directional deep brain stimulation electrodes.

    PubMed

    Anderson, Daria Nesterovich; Osting, Braxton; Vorwerk, Johannes; Dorval, Alan D; Butson, Christopher R

    2018-04-01

    Deep brain stimulation (DBS) is a growing treatment option for movement and psychiatric disorders. As DBS technology moves toward directional leads with increased numbers of smaller electrode contacts, trial-and-error methods of manual DBS programming are becoming too time-consuming for clinical feasibility. We propose an algorithm to automate DBS programming in near real time for a wide range of DBS lead designs. Magnetic resonance imaging and diffusion tensor imaging are used to build finite element models that include anisotropic conductivity. The algorithm maximizes activation of target tissue and utilizes the Hessian matrix of the electric potential to approximate activation of neurons in all directions. We demonstrate our algorithm's ability in an example programming case that targets the subthalamic nucleus (STN) for the treatment of Parkinson's disease for three lead designs: the Medtronic 3389 (four cylindrical contacts), the directSTN Acute (two cylindrical contacts, six directional contacts), and the Medtronic-Sapiens lead (40 directional contacts). The optimization algorithm returns patient-specific contact configurations in near real time, less than 10 s even for the most complex leads. When the lead was placed centrally in the target STN, the directional leads were able to activate over 50% of the region, whereas the Medtronic 3389 could activate only 40%. When the lead was placed 2 mm lateral to the target, the directional leads performed as well as they did in the central position, but the Medtronic 3389 activated only 2.9% of the STN. This DBS programming algorithm can be applied to cylindrical electrodes as well as novel directional leads that are too complex to be programmed manually with current technology. It may reduce clinical programming time and encourage the use of directional leads, since they activate a larger volume of the target area than cylindrical electrodes in both central and off-target lead placements.

  2. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
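
    Partitioned inversion of a symmetric positive definite matrix splits it into 2x2 blocks and recurses on the Schur complement, so only block-sized pieces need be manipulated at a time. A dense in-memory sketch of that recursion (the SOLVE program adds the out-of-core bookkeeping that keeps only a fraction of the matrix resident):

      import numpy as np

      def partitioned_inverse(M, leaf=64):
          """Invert a symmetric positive definite matrix by 2x2 block recursion."""
          n = M.shape[0]
          if n <= leaf:
              return np.linalg.inv(M)
          k = n // 2
          A, B, C = M[:k, :k], M[:k, k:], M[k:, k:]
          A_inv = partitioned_inverse(A, leaf)
          S = C - B.T @ A_inv @ B                # Schur complement of A
          S_inv = partitioned_inverse(S, leaf)
          AB = A_inv @ B
          top_left = A_inv + AB @ S_inv @ AB.T
          top_right = -AB @ S_inv
          return np.block([[top_left, top_right], [top_right.T, S_inv]])

      M = np.random.rand(300, 300)
      M = M @ M.T + 300 * np.eye(300)            # SPD test matrix
      print(np.allclose(partitioned_inverse(M) @ M, np.eye(300), atol=1e-8))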

  3. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    PubMed

    Zhang, Yanqin

    2014-01-01

    As part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for obtaining distribution reductions is presented step by step, and a program implementing the algorithm was developed. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.

  4. Finite pure integer programming algorithms employing only hyperspherically deduced cuts

    NASA Technical Reports Server (NTRS)

    Young, R. D.

    1971-01-01

    Three algorithms are developed that may be based exclusively on hyperspherically deduced cuts. The algorithms only apply, therefore, to problems structured so that these cuts are valid. The algorithms are shown to be finite.

  5. Multi-objective optimisation and decision-making of space station logistics strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Yue-he; Luo, Ya-zhong

    2016-10-01

    Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
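
    A minimal sketch of the DE/rand/1/bin differential evolution step used as the single-objective solver; the sphere objective and box bounds below stand in for the physical-programming aggregate function:

      import numpy as np

      def differential_evolution(obj, bounds, pop_size=30, F=0.8, CR=0.9, iters=200):
          """DE/rand/1/bin minimizer over box bounds."""
          rng = np.random.default_rng(0)
          dim = len(bounds)
          lo, hi = np.array(bounds, dtype=float).T
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          fit = np.array([obj(x) for x in pop])
          for _ in range(iters):
              for i in range(pop_size):
                  others = [j for j in range(pop_size) if j != i]
                  a, b, c = pop[rng.choice(others, 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)      # differential mutation
                  cross = rng.random(dim) < CR                   # binomial crossover
                  cross[rng.integers(dim)] = True                # keep at least one gene
                  trial = np.where(cross, mutant, pop[i])
                  f = obj(trial)
                  if f < fit[i]:                                 # greedy selection
                      pop[i], fit[i] = trial, f
          return pop[fit.argmin()], fit.min()

      sphere = lambda x: float(np.sum(x ** 2))
      print(differential_evolution(sphere, bounds=[(-5, 5)] * 4))   # near the origin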

  6. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amarasinghe, Saman

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.

  7. Development of program package for investigation and modeling of carbon nanostructures in diamond like carbon films with the help of Raman scattering and infrared absorption spectra line resolving

    NASA Astrophysics Data System (ADS)

    Hayrapetyan, David B.; Hovhannisyan, Levon; Mantashyan, Paytsar A.

    2013-04-01

    The analysis of complex spectra is a topical problem in modern science. This work is devoted to the creation of a software package that analyzes spectra in different formats, possesses a dynamic knowledge database with a self-learning mechanism, and performs automated analysis of spectral composition based on the knowledge database by applying certain algorithms. Hyper-spherical random search algorithms, gradient algorithms and genetic search algorithms are used as the search methods in the package. Raman and IR spectra of diamond-like carbon (DLC) samples were analyzed with the developed program. After processing the data, the program immediately displays all the calculated parameters of the DLC.

  8. Star adaptation for two algorithms used on serial computers

    NASA Technical Reports Server (NTRS)

    Howser, L. M.; Lambiotte, J. J., Jr.

    1974-01-01

    Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.

  9. Quantitative fluorescence microscopy and image deconvolution.

    PubMed

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used to remove blurred signal from an image. Copyright © 1998 Elsevier Inc. All rights reserved.
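
    A restoration algorithm of the kind described can be sketched with Richardson-Lucy iteration, which seeks an object whose convolution with the point-spread function reproduces the observed image. A generic sketch (not the specific packages used in practice):

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, iterations=30):
          """Iterative restoration: estimate an object that, blurred by the PSF,
          matches the observed image."""
          estimate = np.full_like(image, image.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1]
          for _ in range(iterations):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = image / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      psf = np.ones((5, 5)) / 25.0                  # toy box-blur PSF
      truth = np.zeros((64, 64)); truth[30:34, 30:34] = 10.0
      data = fftconvolve(truth, psf, mode="same")
      restored = richardson_lucy(data, psf)         # concentrates mass near truth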

  10. Open-cycle systems performance analysis programming guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, D.A.

    1981-12-01

    The Open-Cycle OTEC Systems Performance Analysis Program is an algorithm programmed on SERI's CDC Cyber 170/720 computer to predict the performance of a Claude-cycle, open-cycle OTEC plant. The algorithm models the Claude-cycle system as consisting of an evaporator, a turbine, a condenser, deaerators, a condenser gas exhaust, a cold water pipe and cold and warm seawater pumps. Each component is a separate subroutine in the main program. A description is given of how to write Fortran subroutines to fit into the main program for the components of the OTEC plant. An explanation is provided of how to use the algorithm. The main program and existing component subroutines are described. Appropriate common blocks and input and output variables are listed. Preprogrammed thermodynamic property functions for steam, fresh water, and seawater are described.

  11. Algorithms and programming tools for image processing on the MPP, part 2

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1986-01-01

    A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.

  12. Development of Educational Support System for Algorithm using Flowchart

    NASA Astrophysics Data System (ADS)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Recently, information technology has become indispensable for business and industrial development. However, the shortage of software developers has become a social problem. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm-study support system for programmers based on flowcharts. Since the proposed system uses a graphical user interface (GUI), it becomes easy for a programmer to understand the algorithms in programs.

  13. Development of a validation model for the defense meteorological satellite program's special sensor microwave imager

    NASA Technical Reports Server (NTRS)

    Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.

    1990-01-01

    For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.

  14. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    PubMed

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  15. Simulation of root forms using cellular automata model

    NASA Astrophysics Data System (ADS)

    Winarno, Nanang; Prima, Eka Cahya; Afifah, Ratih Mega Ayu

    2016-02-01

    This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book "A New Kind of Science", discusses formation rules based on statistical analysis. Following Wolfram's investigation, this research develops the basic idea into a computer program written in the Delphi 7 programming language. To the best of our knowledge, no previous research has developed a simulation of root forms using a cellular automata model and compared it to natural root forms in the presence of stones as a disturbance. The results show that (1) the simulation used four rules, the program's output was compared with photographs of natural roots, and each rule produced a different root form; and (2) stone disturbances that prevent root growth, and the resulting multiplication of root forms, were successfully modeled. In this research, stones occupying 120 cells were placed randomly in the soil; as in nature, stones cannot be penetrated by plant roots. The results suggest that the program can be further developed to simulate root forms with 50 variations.

  16. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1989-01-01

    Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is considered first, and then it is examined how such parallel kernels can be combined to form parallel tensor product algorithms.

  17. Image Processing Algorithms in the Secondary School Programming Education

    ERIC Educational Resources Information Center

    Gerják, István

    2017-01-01

    Learning computer programming for students of the age of 14-18 is difficult and requires endurance and engagement. Being familiar with the syntax of a computer language and writing programs in it are challenges for youngsters, not to mention that understanding algorithms is also a big challenge. To help students in the learning process, teachers…

  18. Density-based clustering analyses to identify heterogeneous cellular sub-populations

    NASA Astrophysics Data System (ADS)

    Heaster, Tiffany M.; Walsh, Alex J.; Landman, Bennett A.; Skala, Melissa C.

    2017-02-01

    Autofluorescence microscopy of NAD(P)H and FAD provides functional metabolic measurements at the single-cell level. Here, density-based clustering algorithms were applied to metabolic autofluorescence measurements to identify cell-level heterogeneity in tumor cell cultures. The performance of the density-based clustering algorithm, DENCLUE, was tested in samples with known heterogeneity (co-cultures of breast carcinoma lines). DENCLUE was found to better represent the distribution of cell clusters compared to Gaussian mixture modeling. Overall, DENCLUE is a promising approach to quantify cell-level heterogeneity, and could be used to understand single cell population dynamics in cancer progression and treatment.

  19. Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels

    NASA Astrophysics Data System (ADS)

    Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang

    In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over the error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts, by transmitting packets in the round-robin manner and also adopting a mechanism for channel prediction and swapping.

  20. Mass Conservation and Inference of Metabolic Networks from High-Throughput Mass Spectrometry Data

    PubMed Central

    Bandaru, Pradeep; Bansal, Mukesh

    2011-01-01

    We present a step towards the metabolome-wide computational inference of cellular metabolic reaction networks from metabolic profiling data, such as mass spectrometry. The reconstruction is based on identification of irreducible statistical interactions among the metabolite activities using the ARACNE reverse-engineering algorithm and on constraining possible metabolic transformations to satisfy the conservation of mass. The resulting algorithms are validated on synthetic data from an abridged computational model of Escherichia coli metabolism. Precision rates upwards of 50% are routinely observed for identification of full metabolic reactions, and recalls upwards of 20% are also seen. PMID:21314454

  1. Inferring Boolean network states from partial information

    PubMed Central

    2013-01-01

    Networks of molecular interactions regulate key processes in living cells. Therefore, understanding their functionality is a high priority in advancing biological knowledge. Boolean networks are often used to describe cellular networks mathematically and are fitted to experimental datasets. The fitting often results in ambiguities since the interpretation of the measurements is not straightforward and since the data contain noise. In order to facilitate a more reliable mapping between datasets and Boolean networks, we develop an algorithm that infers network trajectories from a dataset distorted by noise. We analyze our algorithm theoretically and demonstrate its accuracy using simulation and microarray expression data. PMID:24006954
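
    For context, a Boolean network is just a binary state vector updated by per-node logic functions; inference then amounts to finding the trajectory of such states most consistent with noisy measurements. A minimal forward model (the rules are invented, and this is not the authors' inference algorithm):

      # Minimal Boolean network: each node's next state is a function of the
      # current states of its regulators.
      rules = {
          "A": lambda s: s["C"],                  # A is activated by C
          "B": lambda s: s["A"] and not s["C"],   # B needs A, is repressed by C
          "C": lambda s: not s["B"],              # C is repressed by B
      }

      def step(state):
          return {node: int(f(state)) for node, f in rules.items()}

      state = {"A": 1, "B": 0, "C": 0}
      for t in range(6):          # following the trajectory reveals an attractor
          print(t, state)
          state = step(state)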

  2. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets.

    PubMed

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-04-01

    Liver segmentation continues to remain a major challenge, largely due to the liver's complex relationship with surrounding anatomical structures (stomach, kidney, and heart), high noise levels, and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low-contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources: Hamad General Hospital and the medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm play imperative roles, so their values are precisely selected. Both the qualitative and quantitative evaluations performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method.

  3. Genetic programs constructed from layered logic gates in single cells

    PubMed Central

    Moon, Tae Seok; Lou, Chunbo; Tamsir, Alvin; Stanton, Brynne C.; Voigt, Christopher A.

    2014-01-01

    Genetic programs function to integrate environmental sensors, implement signal processing algorithms and control expression dynamics [1]. These programs consist of integrated genetic circuits that individually implement operations ranging from digital logic to dynamic circuits [2–6], and they have been used in various cellular engineering applications, including the implementation of process control in metabolic networks and the coordination of spatial differentiation in artificial tissues. A key limitation is that the circuits are based on biochemical interactions occurring in the confined volume of the cell, so the size of programs has been limited to a few circuits [1,7]. Here we apply part mining and directed evolution to build a set of transcriptional AND gates in Escherichia coli. Each AND gate integrates two promoter inputs and controls one promoter output. This allows the gates to be layered by having the output promoter of an upstream circuit serve as the input promoter for a downstream circuit. Each gate consists of a transcription factor that requires a second chaperone protein to activate the output promoter. Multiple activator–chaperone pairs are identified from type III secretion pathways in different strains of bacteria. Directed evolution is applied to increase the dynamic range and orthogonality of the circuits. These gates are connected in different permutations to form programs, the largest of which is a 4-input AND gate that consists of 3 circuits that integrate 4 inducible systems, thus requiring 11 regulatory proteins. Measuring the performance of individual gates is sufficient to capture the behaviour of the complete program. Errors in the output due to delays (faults), a common problem for layered circuits, are not observed. This work demonstrates the successful layering of orthogonal logic gates, a design strategy that could enable the construction of large, integrated circuits in single cells. PMID:23041931

  4. The Library of Integrated Network-Based Cellular Signatures NIH Program: System-Level Cataloging of Human Cells Response to Perturbations.

    PubMed

    Keenan, Alexandra B; Jenkins, Sherry L; Jagodnik, Kathleen M; Koplev, Simon; He, Edward; Torre, Denis; Wang, Zichen; Dohlman, Anders B; Silverstein, Moshe C; Lachmann, Alexander; Kuleshov, Maxim V; Ma'ayan, Avi; Stathias, Vasileios; Terryn, Raymond; Cooper, Daniel; Forlin, Michele; Koleti, Amar; Vidovic, Dusica; Chung, Caty; Schürer, Stephan C; Vasiliauskas, Jouzas; Pilarczyk, Marcin; Shamsaei, Behrouz; Fazel, Mehdi; Ren, Yan; Niu, Wen; Clark, Nicholas A; White, Shana; Mahi, Naim; Zhang, Lixia; Kouril, Michal; Reichard, John F; Sivaganesan, Siva; Medvedovic, Mario; Meller, Jaroslaw; Koch, Rick J; Birtwistle, Marc R; Iyengar, Ravi; Sobie, Eric A; Azeloglu, Evren U; Kaye, Julia; Osterloh, Jeannette; Haston, Kelly; Kalra, Jaslin; Finkbiener, Steve; Li, Jonathan; Milani, Pamela; Adam, Miriam; Escalante-Chong, Renan; Sachs, Karen; Lenail, Alex; Ramamoorthy, Divya; Fraenkel, Ernest; Daigle, Gavin; Hussain, Uzma; Coye, Alyssa; Rothstein, Jeffrey; Sareen, Dhruv; Ornelas, Loren; Banuelos, Maria; Mandefro, Berhan; Ho, Ritchie; Svendsen, Clive N; Lim, Ryan G; Stocksdale, Jennifer; Casale, Malcolm S; Thompson, Terri G; Wu, Jie; Thompson, Leslie M; Dardov, Victoria; Venkatraman, Vidya; Matlock, Andrea; Van Eyk, Jennifer E; Jaffe, Jacob D; Papanastasiou, Malvina; Subramanian, Aravind; Golub, Todd R; Erickson, Sean D; Fallahi-Sichani, Mohammad; Hafner, Marc; Gray, Nathanael S; Lin, Jia-Ren; Mills, Caitlin E; Muhlich, Jeremy L; Niepel, Mario; Shamu, Caroline E; Williams, Elizabeth H; Wrobel, David; Sorger, Peter K; Heiser, Laura M; Gray, Joe W; Korkola, James E; Mills, Gordon B; LaBarge, Mark; Feiler, Heidi S; Dane, Mark A; Bucher, Elmar; Nederlof, Michel; Sudar, Damir; Gross, Sean; Kilburn, David F; Smith, Rebecca; Devlin, Kaylyn; Margolis, Ron; Derr, Leslie; Lee, Albert; Pillai, Ajay

    2018-01-24

    The Library of Integrated Network-Based Cellular Signatures (LINCS) is an NIH Common Fund program that catalogs how human cells globally respond to chemical, genetic, and disease perturbations. Resources generated by LINCS include experimental and computational methods, visualization tools, molecular and imaging data, and signatures. By assembling an integrated picture of the range of responses of human cells exposed to many perturbations, the LINCS program aims to better understand human disease and to advance the development of new therapies. Perturbations under study include drugs, genetic perturbations, tissue micro-environments, antibodies, and disease-causing mutations. Responses to perturbations are measured by transcript profiling, mass spectrometry, cell imaging, and biochemical methods, among other assays. The LINCS program focuses on cellular physiology shared among tissues and cell types relevant to an array of diseases, including cancer, heart disease, and neurodegenerative disorders. This Perspective describes LINCS technologies, datasets, tools, and approaches to data accessibility and reusability.

  5. Ant Lion Optimization algorithm for kidney exchanges.

    PubMed

    Hamouda, Eslam; El-Metwally, Sara; Tarek, Mayada

    2018-01-01

    Kidney exchange programs bring new insights to the field of organ transplantation. They make surgery for incompatible patient-donor pairs, previously not allowed, feasible on a large scale. Mathematically, kidney exchange is an optimization problem over the number of possible exchanges among the incompatible pairs in a given pool. The optimization modeling should also consider the expected quality-adjusted life of transplant candidates and the shortage of computational and operational hospital resources. In this article, we introduce a bio-inspired stochastic Ant Lion Optimization (ALO) algorithm to the kidney exchange space to maximize the number of feasible cycles and chains among the pool pairs. The ALO-based program achieves kidney exchange results comparable to deterministic approaches such as integer programming. ALO also outperforms other stochastic methods, such as the genetic algorithm, in terms of efficient usage of computational resources and the quantity of resulting exchanges. The ALO algorithm can easily be adapted for online exchanges and the integration of weights for hard-to-match patients, which will improve the future decisions of kidney exchange programs. A reference implementation of the ALO algorithm for kidney exchanges is written in MATLAB and is GPL licensed. It is available as free open-source software from: https://github.com/SaraEl-Metwally/ALO_algorithm_for_Kidney_Exchanges.

  6. Optimization algorithms for large-scale multireservoir hydropower systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiew, K.L.

    Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT), and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria, including accuracy of results, rate of convergence, smoothness of the resulting storage and release trajectories, computer time and memory requirements, robustness, and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. The computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to a case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.

  7. Algorithms and programming tools for image processing on the MPP:3

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different-sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.

  8. Introducing difference recurrence relations for faster semi-global alignment of long sequences.

    PubMed

    Suzuki, Hajime; Kasahara, Masahiro

    2018-02-19

    The read length of single-molecule DNA sequencers is reaching 1 Mb. Popular alignment software tools widely used for analyzing such long reads often take advantage of single-instruction multiple-data (SIMD) operations to accelerate calculation of dynamic programming (DP) matrices in the Smith-Waterman-Gotoh (SWG) algorithm with a fixed alignment start position at the origin. Nonetheless, 16-bit or 32-bit integers are necessary for storing the values in a DP matrix when sequences to be aligned are long; this situation hampers the use of the full SIMD width of modern processors. We proposed a faster semi-global alignment algorithm, "difference recurrence relations," that runs more rapidly than the state-of-the-art algorithm by a factor of 2.1. Instead of calculating and storing all the values in a DP matrix directly, our algorithm computes and stores mainly the differences between the values of adjacent cells in the matrix. Although the SWG algorithm and our algorithm can output exactly the same result, our algorithm mainly involves 8-bit integer operations, enabling us to exploit the full width of SIMD operations (e.g., 32) on modern processors. We also developed a library, libgaba, so that developers can easily integrate our algorithm into alignment programs. Our novel algorithm and optimized library implementation will facilitate accelerating nucleotide long-read analysis algorithms that use pairwise alignment stages. The library is implemented in the C programming language and available at https://github.com/ocxtal/libgaba .
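
    The difference idea is easiest to see on the simpler unit-cost edit-distance DP rather than the paper's affine-gap SWG recurrences: adjacent DP cells differ by at most 1, so the differences between neighboring cells fit in a tiny integer type even when absolute scores do not. A small illustrative sketch, with made-up example sequences:

    ```python
    import numpy as np

    def edit_distance_last_row(a, b):
        """Standard O(len(a)*len(b)) unit-cost DP, keeping one row at a time."""
        prev = np.arange(len(b) + 1, dtype=np.int32)
        for i, ca in enumerate(a, 1):
            cur = np.empty_like(prev)
            cur[0] = i
            for j, cb in enumerate(b, 1):
                cur[j] = min(prev[j] + 1,               # deletion
                             cur[j - 1] + 1,            # insertion
                             prev[j - 1] + (ca != cb))  # (mis)match
            prev = cur
        return prev

    row = edit_distance_last_row("GATTACA", "GCATGCU")
    deltas = np.diff(row).astype(np.int8)  # horizontal differences: in {-1, 0, 1}
    assert set(deltas.tolist()) <= {-1, 0, 1}
    # The absolute scores may need wide integers; the differences never do:
    print(row[-1], deltas)
    ```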

  9. Sizing of complex structure by the integration of several different optimal design algorithms

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.

    1974-01-01

    Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed, with a detailed description of how the total problem of structural sizing can be broken down into subproblems for the best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.

  10. KB3D Reference Manual. Version 1.a

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Siminiceanu, Radu; Carreno, Victor A.; Dowek, Gilles

    2005-01-01

    This paper is a reference manual describing the implementation of the KB3D conflict detection and resolution algorithm. The algorithm has been implemented in the Java and C++ programming languages. The reference manual gives a short overview of the detection and resolution functions, the structural implementation of the program, inputs and outputs to the program, and describes how the program is used. Inputs to the program can be rectangular coordinates or geodesic coordinates. The reference manual also gives examples of conflict scenarios and the resolution outputs the program produces.

  11. Message Efficient Checkpointing and Rollback Recovery in Heterogeneous Mobile Networks

    NASA Astrophysics Data System (ADS)

    Jaggi, Parmeet Kaur; Singh, Awadhesh Kumar

    2016-06-01

    Heterogeneous networks provide an appealing way of expanding the computing capability of mobile networks by combining infrastructure-less mobile ad-hoc networks with infrastructure-based cellular mobile networks. The nodes in such a network range from low-power nodes to macro base stations and thus vary greatly in capabilities such as computation power and battery power. The nodes are susceptible to different types of transient and permanent failures, and therefore the algorithms designed for such networks need to be fault-tolerant. The article presents a checkpointing algorithm for the rollback recovery of mobile hosts in a heterogeneous mobile network. Checkpointing is a well-established approach to providing fault tolerance in static and cellular mobile distributed systems. However, the use of checkpointing for fault tolerance in a heterogeneous environment remains to be explored. The proposed protocol is based on Netzer and Xu's results on zigzag paths and zigzag cycles. Considering the heterogeneity prevalent in the network, an uncoordinated checkpointing technique is employed; yet, useless checkpoints are avoided without incurring a high message overhead.

  12. Radio Resource Allocation on Complex 4G Wireless Cellular Networks

    NASA Astrophysics Data System (ADS)

    Psannis, Kostas E.

    2015-09-01

    In this article we consider a heuristic algorithm that improves, step by step, wireless data delivery over LTE cellular networks. The total transmit power (with a constraint on users' data rates) and the total throughput (with constraints on both the total transmit power and users' data rates) are jointly integrated into a hybrid-layer design framework that performs radio resource allocation for multiple users and effectively decides optimal system parameters, such as the modulation and coding scheme (MCS), in order to adapt to varying channel quality. We propose a new heuristic algorithm that balances the accessible data rate, the initial data rates of each user allocated by the LTE scheduler, the priority indicator signaling the delay, throughput, and packet-loss awareness of the user, and the buffer fullness, thereby maximizing radio resource allocation for multiple users. Notably, the overall performance improves as the number of users increases, due to multiuser diversity. Experimental results illustrate and validate the accuracy of the proposed methodology.

  13. Mapping Thermal Habitat of Ectotherms Based on Behavioral Thermoregulation in a Controlled Thermal Environment

    NASA Astrophysics Data System (ADS)

    Fei, T.; Skidmore, A.; Liu, Y.

    2012-07-01

    The thermal environment is especially important to ectotherms because many physiological functions, including thermoregulation, depend on body temperature. Behavioural thermoregulation makes use of the heterogeneity of thermal properties within an individual's habitat to sustain the animal's physiological processes. This function links the spatial utilization and distribution of an individual ectotherm to the thermal properties of its habitat (thermal habitat). In this study we modelled the relationship between the two with a spatially explicit model that simulates the movements of a lizard in a controlled environment. The model incorporates a lizard's transient body temperature into a cellular automaton algorithm as a way to link physiological knowledge of the animal with the spatial utilization of its microhabitat. On a larger spatial scale, the 'thermal roughness' of the habitat was defined and used to predict the habitat occupancy of the target species. The results showed that habitat occupancy can be modelled by the cellular automaton-based algorithm at a smaller scale and by the thermal roughness index at a larger scale.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frenkel, G.; Paterson, T.S.; Smith, M.E.

    The Institute for Defense Analyses (IDA) has collected and analyzed information on battle management algorithm technology that is relevant to Battle Management/Command, Control and Communications (BM/C3). This Memorandum Report represents a program plan that will provide the BM/C3 Directorate of the Strategic Defense Initiative Organization (SDIO) with administrative and technical insight into algorithm technology. This program plan focuses on current activity in algorithm development and provides information and analysis to the SDIO to be used in formulating budget requirements for FY 1988 and beyond. Based upon analysis of algorithm requirements and ongoing programs, recommendations have been made for research areas that should be pursued, including both the continuation of current work and the initiation of new tasks. This final report includes all relevant material from interim reports as well as new results.

  15. On the performance of explicit and implicit algorithms for transient thermal analysis

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.

    1980-09-01

    The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped-parameter program, and a special-purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.

  16. Bridging the gap between high-throughput genetic and transcriptional data reveals cellular pathways responding to alpha-synuclein toxicity

    PubMed Central

    Yeger-Lotem, Esti; Riva, Laura; Su, Linhui Julie; Gitler, Aaron D.; Cashikar, Anil; King, Oliver D.; Auluck, Pavan K.; Geddie, Melissa L.; Valastyan, Julie S.; Karger, David R.; Lindquist, Susan; Fraenkel, Ernest

    2009-01-01

    Cells respond to stimuli by changes in various processes, including signaling pathways and gene expression. Efforts to identify components of these responses increasingly depend on mRNA profiling and genetic library screens, yet the functional roles of the genes identified by these assays often remain enigmatic. By comparing the results of these two assays across various cellular responses, we found that they are consistently distinct. Moreover, genetic screens tend to identify response regulators, while mRNA profiling frequently detects metabolic responses. We developed an integrative approach that bridges the gap between these data using known molecular interactions, thus highlighting major response pathways. We harnessed this approach to reveal cellular pathways related to alpha-synuclein, a small lipid-binding protein implicated in several neurodegenerative disorders including Parkinson disease. For this we screened an established yeast model for alpha-synuclein toxicity to identify genes that when overexpressed alter cellular survival. Application of our algorithm to these data and data from mRNA profiling provided functional explanations for many of these genes and revealed novel relations between alpha-synuclein toxicity and basic cellular pathways. PMID:19234470

  17. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    PubMed

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations, via the spatial stochastic simulation algorithm, to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and allows the spatial resolution of models to be adapted easily.
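
    For orientation, the non-spatial core of any Gillespie-style simulator can be sketched in a few lines. The reaction system (a single reversible reaction A <-> B), the rate constants, and the counts below are illustrative assumptions, not part of ML-Space itself:

    ```python
    import math, random

    random.seed(0)
    k_fwd, k_rev = 1.0, 0.5     # assumed rate constants for A -> B and B -> A
    n_A, n_B = 100, 0
    t, t_end = 0.0, 20.0

    while t < t_end:
        a1, a2 = k_fwd * n_A, k_rev * n_B     # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
        if random.random() * a0 < a1:                # pick which reaction fires
            n_A, n_B = n_A - 1, n_B + 1
        else:
            n_A, n_B = n_A + 1, n_B - 1

    print(f"t={t:.2f}  A={n_A}  B={n_B}")  # expect roughly A:B = k_rev:k_fwd
    ```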

  18. A General Program for Item-Response Analysis That Employs the Stabilized Newton-Raphson Algorithm. Research Report. ETS RR-13-32

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2013-01-01

    A general program for item-response analysis is described that uses the stabilized Newton-Raphson algorithm. This program is written to be compliant with Fortran 2003 standards and is sufficiently general to handle independent variables, multidimensional ability parameters, and matrix sampling. The ability variables may be either polytomous or…

  19. Robust Constrained Blackbox Optimization with Surrogates

    DTIC Science & Technology

    2015-05-21

    …Orban. Optimization of Algorithms with OPAL. Mathematical Programming Computation, 6(3):233–254, 2014. M.S. Ouali, H. Aoudjit, and C. Audet. Replacement scheduling of a fleet of…

  20. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, here fore referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  1. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
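
    As a point of reference, plain (global) value iteration on a small discrete MDP looks as follows; the local variant described above would update only a subset of states per sweep. The MDP below is randomly generated purely for illustration:

    ```python
    import numpy as np

    n_states, n_actions, gamma = 5, 2, 0.9
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
    R = rng.random((n_states, n_actions))                             # reward R[s, a]

    V = np.zeros(n_states)
    for sweep in range(1000):
        Q = R + gamma * (P @ V)      # Q[s, a] = R[s, a] + gamma * E[V(s')]
        V_new = Q.max(axis=1)        # greedy update of the value function
        if np.max(np.abs(V_new - V)) < 1e-9:   # termination criterion
            break
        V = V_new

    print("sweeps:", sweep, "V:", np.round(V, 3), "policy:", Q.argmax(axis=1))
    ```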

  2. VAXELN Experimentation: Programming a Real-Time Periodic Task Dispatcher Using VAXELN Ada 1.1

    DTIC Science & Technology

    1987-11-01

    synchronization to the SQM and VAXELN semaphores. Based on real-time scheduling theory, the optimal rate-monotonic scheduling algorithm [Liu 73]… A schedulability test based on the rate-monotonic algorithm, namely task-lumping [Sha 87], was necessary to calculate the theoretically expected schedulability… Guide, Digital Equipment Corporation, Maynard, MA, 1986. [Liu 73] Liu, C.L., Layland, J.W. Scheduling Algorithms for Multi-programming in a Hard-Real-Time…

  3. Operational Planning for Multiple Heterogeneous Unmanned Aerial Vehicles in Three Dimensions

    DTIC Science & Technology

    2009-06-01

    human input in the planning process. Two solution methods are presented: (1) a mixed-integer program, and (2) an algorithm that utilizes a metaheuristic to generate composite variables for a linear program, called the Composite Operations Planning… variables that represent a path and an associated type of UAV. The reformulation is incorporated into an algorithm that uses a metaheuristic to generate the…

  4. AutoBayes Program Synthesis System: System Internals

    NASA Technical Reports Server (NTRS)

    Schumann, Johann Martin

    2011-01-01

    This lecture combines the theoretical background of schema-based program synthesis with the hands-on study of a powerful, open-source program synthesis system (AutoBayes). Schema-based program synthesis is a popular approach to program synthesis. The lecture provides an introduction to this topic and discusses how this technology can be used to generate customized algorithms. The synthesis of advanced numerical algorithms requires the availability of a powerful symbolic (algebra) system. Its task is to symbolically solve equations, simplify expressions, or symbolically calculate derivatives (among others) so that the synthesized algorithms become as efficient as possible. We discuss the use and importance of the symbolic system for synthesis. Any synthesis system is a large and complex piece of code. In this lecture, we study AutoBayes in detail. AutoBayes has been developed at NASA Ames and has been made open source. It takes a compact statistical specification and generates a customized data analysis algorithm (in C/C++) from it. AutoBayes is written in SWI Prolog and uses many concepts from rewriting, logic, functional, and symbolic programming. We discuss the system architecture, the schema library, and the extensive support infrastructure. Practical hands-on experiments and exercises enable the student to gain insight into a realistic program synthesis system and provide the knowledge needed to use, modify, and extend AutoBayes.

  5. An algorithm for the solution of dynamic linear programs

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLPs) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop linear/quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance and sheds light on the efficacy of the proposed pseudo constraint relaxation scheme.

  6. Array distribution in data-parallel programs

    NASA Technical Reports Server (NTRS)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.

    1994-01-01

    We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.

  7. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    PubMed

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-09

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells such as in neural networks and analog data processing.
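
    The closed-loop "program and verify" structure described above can be sketched abstractly. The device response model below is a made-up stand-in for a TaOx 1T1R cell (not the paper's device physics), used only to show the loop of pulse, read, and gate-voltage adaptation:

    ```python
    import random

    random.seed(0)

    def apply_pulse(g, v_gate, direction):
        """Made-up device model: conductance change scales with gate voltage."""
        return g + direction * 0.08 * v_gate + random.gauss(0.0, 0.005)

    def program_cell(g_target, tol=0.01, max_pulses=50):
        g = 0.0
        for pulse in range(max_pulses):
            err = g_target - g
            if abs(err) <= tol:                      # verify: within tolerance
                return g, pulse
            v_gate = min(3.0, max(0.1, abs(err) / 0.08))  # adapt the gate voltage
            g = apply_pulse(g, v_gate, 1 if err > 0 else -1)
        return g, max_pulses

    g, n = program_cell(g_target=0.5)
    print(f"reached g={g:.3f} in {n} pulses")
    ```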

  8. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    PubMed

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as part of HetNets creates a key network-planning challenge for operators. In particular, massive and unplanned deployment of base stations can cause high interference, severely degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we decompose the problem and tackle its subcomponents individually. Noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that, in terms of system throughput, the proposed solution outperforms the random-grouping-based EA as well as an EA that detects interacting variables by monitoring changes in the objective function.

  9. Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images.

    PubMed

    Arslan, Salim; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem

    2013-06-01

    More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step. Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, it still remains an open problem to segment the nuclei of more-confluent cells, which tend to grow in overlayers. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. Then, we reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph and also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms.

  10. Efficient implementation of the 3D-DDA ray traversal algorithm on GPU and its application in radiation dose calculation.

    PubMed

    Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo

    2012-12-01

    The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto graphics processing units (GPUs) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm introduces substantial branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) that are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
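
    For reference, the classic 3D-DDA traversal is sketched below. The per-step axis choice is written as an argmin over the three crossing parameters, the kind of comparison-based (rather than branch-heavy) formulation that maps well to predicated GPU code; all parameters are illustrative, and direction components are assumed nonzero:

    ```python
    import numpy as np

    def dda_traverse(origin, direction, n_steps=8):
        d = np.asarray(direction, dtype=float)      # nonzero components assumed
        pos = np.floor(origin).astype(int)          # current voxel indices
        step = np.where(d >= 0, 1, -1)              # per-axis step sign
        next_bound = pos + (step > 0)               # next grid plane per axis
        t_max = (next_bound - origin) / d           # ray parameter to that plane
        t_delta = np.abs(1.0 / d)                   # parameter per full voxel
        voxels = [tuple(pos)]
        for _ in range(n_steps):
            axis = int(np.argmin(t_max))            # axis of the nearest crossing
            pos[axis] += step[axis]
            t_max[axis] += t_delta[axis]
            voxels.append(tuple(pos))
        return voxels

    print(dda_traverse(np.array([0.5, 0.5, 0.5]), np.array([1.0, 0.7, 0.3])))
    ```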

  11. Solving Fractional Programming Problems based on Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Raouf, Osama Abdel; Hezam, Ibrahim M.

    2014-04-01

    This paper presents a new approach to solving Fractional Programming Problems (FPPs) based on two different Swarm Intelligence (SI) algorithms: Particle Swarm Optimization and the Firefly Algorithm. The two algorithms are tested using several FPP benchmark examples and two selected industrial applications. The tests aim to demonstrate the capability of SI algorithms to solve any type of FPP. The solution results employing the SI algorithms are compared with a number of exact and metaheuristic solution methods used for handling FPPs. Swarm intelligence can be regarded as an effective technique for solving linear or nonlinear, non-differentiable fractional objective functions. Problems with an optimal solution at a finite point and an unbounded constraint set can be solved using the proposed approach. Numerical examples are given to show the feasibility, effectiveness, and robustness of the proposed algorithms. The results obtained using the two SI algorithms revealed the superiority of the proposed technique over others in computational time, and notably better accuracy was observed in the solution results for the industrial application problems.
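
    To make the approach concrete, here is a minimal particle swarm sketch minimizing a simple fractional objective over a box; the objective function and the PSO constants are textbook defaults chosen for illustration, not the paper's settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # A made-up fractional objective: ratio of two affine-quadratic forms
        return (x[..., 0] ** 2 + x[..., 1] ** 2 + 1) / (x[..., 0] + x[..., 1] + 5)

    n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
    x = rng.uniform(0, 4, (n, dim))         # particle positions in [0, 4]^2
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), f(x)
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(100):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 4)            # keep particles inside the box
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("best point:", gbest, "objective:", f(gbest))
    ```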

  12. Space shuttle propulsion estimation development verification, volume 1

    NASA Technical Reports Server (NTRS)

    Rogers, Robert M.

    1989-01-01

    The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient, the vehicle dynamics using quaternions, and an improved Kalman filter based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided as graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single-engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). This program, LFILTER, also uses the U-D factorized form of the Kalman filter, as in PFILTER. The necessary definitions and equations explaining the Kalman filtering approach of the PFILTER program, the dynamics and measurement models used for this application, the program description, and the program operation are presented.

  13. Activation of cellular death programs associated with immunosenescence-like phenotype in TPPII knockout mice

    PubMed Central

    Huai, Jisen; Firat, Elke; Nil, Ahmed; Million, Daniele; Gaedicke, Simone; Kanzler, Benoit; Freudenberg, Marina; van Endert, Peter; Kohler, Gabriele; Pahl, Heike L.; Aichele, Peter; Eichmann, Klaus; Niedermann, Gabriele

    2008-01-01

    The giant cytosolic protease tripeptidyl peptidase II (TPPII) has been implicated in the regulation of proliferation and survival of malignant cells, particularly lymphoma cells. To address its functions in normal cellular and systemic physiology we have generated TPPII-deficient mice. TPPII deficiency activates cell type-specific death programs, including proliferative apoptosis in several T lineage subsets and premature cellular senescence in fibroblasts and CD8+ T cells. This coincides with up-regulation of p53 and dysregulation of NF-κB. Prominent degenerative alterations at the organismic level were a decreased lifespan and symptoms characteristic of immunohematopoietic senescence. These symptoms include accelerated thymic involution, lymphopenia, impaired proliferative T cell responses, extramedullary hematopoiesis, and inflammation. Thus, TPPII is important for maintaining normal cellular and systemic physiology, which may be relevant for potential therapeutic applications of TPPII inhibitors. PMID:18362329

  14. A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction

    DOE PAGES

    Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...

    1995-01-01

    In this article, we present a program generation strategy for Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated into high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
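
    The seven products that drive the O(7^n) operation count are easiest to see in the plain recursive formulation (not the paper's nonrecursive tensor-product one), sketched below for 2^n × 2^n matrices:

    ```python
    import numpy as np

    def strassen(A, B):
        n = A.shape[0]             # n is assumed to be a power of two
        if n == 1:
            return A * B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22)   # the seven recursive products
        M2 = strassen(A21 + A22, B11)
        M3 = strassen(A11, B12 - B22)
        M4 = strassen(A22, B21 - B11)
        M5 = strassen(A11 + A12, B22)
        M6 = strassen(A21 - A11, B11 + B12)
        M7 = strassen(A12 - A22, B21 + B22)
        top = np.hstack([M1 + M4 - M5 + M7, M3 + M5])
        bottom = np.hstack([M2 + M4, M1 - M2 + M3 + M6])
        return np.vstack([top, bottom])

    A = np.arange(16.0).reshape(4, 4)
    B = np.eye(4) * 2.0
    assert np.allclose(strassen(A, B), A @ B)  # agrees with the ordinary product
    ```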

  15. Concurrent extensions to the FORTRAN language for parallel programming of computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Weeks, Cindy Lou

    1986-01-01

    Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.

  16. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    ERIC Educational Resources Information Center

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  17. High-Throughput Single-Cell RNA Sequencing and Data Analysis.

    PubMed

    Sagar; Herman, Josip Stefan; Pospisilik, John Andrew; Grün, Dominic

    2018-01-01

    Understanding biological systems at a single cell resolution may reveal several novel insights which remain masked by the conventional population-based techniques providing an average readout of the behavior of cells. Single-cell transcriptome sequencing holds the potential to identify novel cell types and characterize the cellular composition of any organ or tissue in health and disease. Here, we describe a customized high-throughput protocol for single-cell RNA-sequencing (scRNA-seq) combining flow cytometry and a nanoliter-scale robotic system. Since scRNA-seq requires amplification of a low amount of endogenous cellular RNA, leading to substantial technical noise in the dataset, downstream data filtering and analysis require special care. Therefore, we also briefly describe in-house state-of-the-art data analysis algorithms developed to identify cellular subpopulations including rare cell types as well as to derive lineage trees by ordering the identified subpopulations of cells along the inferred differentiation trajectories.

  18. Genome Scale Modeling in Systems Biology: Algorithms and Resources

    PubMed Central

    Najafi, Ali; Bidkhori, Gholamreza; Bozorgmehr, Joseph H.; Koch, Ina; Masoudi-Nejad, Ali

    2014-01-01

    In recent years, in silico studies and trial simulations have complemented experimental procedures. A model is a description of a system, and a system is any collection of interrelated objects; an object, moreover, is some elemental unit upon which observations can be made but whose internal structure either does not exist or is ignored. Therefore, a network analysis approach is critical for successful quantitative modeling of biological systems. This review highlights, in five sections, some of the most popular and important modeling algorithms, tools, and emerging standards for representing, simulating, and analyzing cellular networks. We also try to illustrate these concepts by means of simple examples and appropriate images and graphs. Overall, systems biology aims for a holistic description and understanding of biological processes by integrating analytical experimental approaches with synthetic computational models. In fact, biological networks have been developed as a platform for integrating information from high- to low-throughput experiments for the analysis of biological systems. We provide an overview of all processes used in modeling and simulating biological networks in such a way that they can become easily understandable for researchers with both biological and mathematical backgrounds. Consequently, given the complexity of the generated experimental data and cellular networks, it is no surprise that researchers have turned to computer simulation and the development of more theory-based approaches to augment and assist in the development of a fully quantitative understanding of cellular dynamics. PMID:24822031

  19. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  20. specsim: A Fortran-77 program for conditional spectral simulation in 3D

    NASA Astrophysics Data System (ADS)

    Yao, Tingting

    1998-12-01

    A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
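
    The unconditional core of the Fourier integral method, drawing random phases against a target covariance spectrum and inverse-transforming, can be sketched in 1D as follows. The iterative phase identification that conditions the field to local data (specsim's contribution) is omitted, and the spectrum shape is an illustrative assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    freqs = np.fft.rfftfreq(n)
    spectrum = np.exp(-(freqs / 0.05) ** 2)   # assumed Gaussian-shaped spectrum
    phases = rng.uniform(0, 2 * np.pi, freqs.size)   # random phase information
    coeffs = np.sqrt(spectrum) * np.exp(1j * phases)
    coeffs[0] = 0.0                           # drop the DC term: zero-mean field
    field = np.fft.irfft(coeffs, n)           # realization with target spectrum
    field /= field.std()                      # normalize to unit variance
    print(field[:5])
    ```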

  1. Algorithmic Trading with Developmental and Linear Genetic Programming

    NASA Astrophysics Data System (ADS)

    Wilson, Garnett; Banzhaf, Wolfgang

    A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rises and falls, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.

  2. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    The nonlinear programming problem is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  3. The program complex for vocal recognition

    NASA Astrophysics Data System (ADS)

    Konev, Anton; Kostyuchenko, Evgeny; Yakimuk, Alexey

    2017-01-01

    This article discusses the possibility of applying a pitch-frequency determination algorithm to note recognition problems. A preliminary study of analogous programs offering a "music recognition" function was carried out. A software package based on the pitch-frequency calculation algorithm was implemented and tested. It was shown that the algorithm allows recognizing the notes in the user's vocal performance. The sound source can be a single musical instrument, a set of musical instruments, or a human voice humming a tune. The input file is initially presented in the .wav format or is recorded in this format from a microphone. Processing is performed by sequentially determining the pitch frequency and converting its values to notes. Based on the test results, modifications to the algorithms used in the complex were planned.
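
    The final conversion step, from an estimated pitch frequency to a note, is a one-line mapping onto the equal-tempered scale (assuming the A4 = 440 Hz reference); a minimal sketch:

    ```python
    import math

    NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def freq_to_note(f_hz):
        """Map a frequency in Hz to the nearest equal-tempered note name."""
        midi = round(69 + 12 * math.log2(f_hz / 440.0))   # nearest MIDI number
        return f"{NAMES[midi % 12]}{midi // 12 - 1}"      # e.g. 60 -> 'C4'

    for f in (261.63, 440.0, 466.16):
        print(f, "->", freq_to_note(f))   # C4, A4, A#4
    ```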

  4. Automata-Based Verification of Temporal Properties on Running Programs

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)

    2001-01-01

    This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
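
    The observer idea is easy to convey on a finite trace with a hand-rolled monitor (a minimal sketch, not the paper's LTL-to-automata translation): for the property G(request -> F ack), read "every request is eventually acknowledged", the observer only needs to track whether a request is still pending when the trace ends.

        def monitor(trace):
            """trace: iterable of sets of atomic propositions, one set per step."""
            pending = False                # observer state: unacknowledged request?
            for step in trace:
                if "ack" in step:
                    pending = False        # an ack discharges any pending request
                if "request" in step and "ack" not in step:
                    pending = True
            return not pending             # finite-trace verdict at end of run

        print(monitor([{"request"}, set(), {"ack"}]))   # True  (property holds)
        print(monitor([{"request"}, set(), set()]))     # False (request never acked)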

  5. Algorithms for network-based identification of differential regulators from transcriptome data: a systematic evaluation

    PubMed Central

    Hui, YU; Ramkrishna, MITRA; Jing, YANG; YuanYuan, LI; ZhongMing, ZHAO

    2016-01-01

    Identification of differential regulators is critical to understand the dynamics of cellular systems and molecular mechanisms of diseases. Several computational algorithms have recently been developed for this purpose by using transcriptome and network data. However, it remains largely unclear which algorithm performs better under a specific condition. Such knowledge is important for both appropriate application and future enhancement of these algorithms. Here, we systematically evaluated seven main algorithms (TED, TDD, TFactS, RIF1, RIF2, dCSA_t2t, and dCSA_r2t), using both simulated and real datasets. In our simulation evaluation, we artificially inactivated either a single regulator or multiple regulators and examined how well each algorithm detected known gold standard regulators. We found that all these algorithms could effectively discern signals arising from regulatory network differences, indicating the validity of our simulation schema. Among the seven tested algorithms, TED and TFactS were placed first and second when both discrimination accuracy and robustness against data variation were considered. When applied to two independent lung cancer datasets, both TED and TFactS replicated a substantial fraction of their respective differential regulators. Since TED and TFactS rely on two distinct features of transcriptome data, namely differential co-expression and differential expression, both may be applied as mutual references during practical application. PMID:25326829

  6. CNN universal machine as classification platform: an ART-like clustering algorithm.

    PubMed

    Bálya, David

    2003-12-01

    Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of the ART networks, such as robust, plastic and fault-tolerant behaviors. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation. The algorithm is extended for supervised classification. The presented binary feature vector classification is implemented on the existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.

  7. Inferring nonlinear gene regulatory networks from gene expression data based on distance correlation.

    PubMed

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure called the distance correlation (DC) has been shown to be powerful and computationally efficient at capturing nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated datasets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference.
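
    The DC statistic itself is compact to compute; the sketch below (plain numpy, with illustrative sample sizes) shows the double-centered distance matrix formulation, while the CLR/MRNET/REL scoring wrappers used in the paper are not reproduced.

        import numpy as np

        def distance_correlation(x, y):
            """Sample distance correlation between two 1-D samples."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            def centered(v):
                d = np.abs(v[:, None] - v[None, :])   # pairwise distance matrix
                return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
            A, B = centered(x), centered(y)
            dcov2 = (A * B).mean()                    # squared distance covariance
            return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

        rng = np.random.default_rng(1)
        x = rng.normal(size=200)
        print(distance_correlation(x, x ** 2))                # high: nonlinear dependence
        print(distance_correlation(x, rng.normal(size=200)))  # near zero: independence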

  8. Inferring Nonlinear Gene Regulatory Networks from Gene Expression Data Based on Distance Correlation

    PubMed Central

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure called the distance correlation (DC) has been shown to be powerful and computationally efficient at capturing nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated datasets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference. PMID:24551058

  9. A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.

    PubMed

    Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo

    2016-01-01

    This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve the cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology.

  10. An Algorithm for Controlled Integration of Sound and Text.

    ERIC Educational Resources Information Center

    Wohlert, Harry S.; McCormick, Martin

    1985-01-01

    A serious drawback in introducing sound into computer programs for teaching foreign language speech has been the lack of an algorithm to turn off the cassette recorder immediately to keep screen text and audio in synchronization. This article describes a program which solves that problem. (SED)

  11. Soil moisture and temperature algorithms and validation

    USDA-ARS?s Scientific Manuscript database

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  12. Dynamic programming algorithms for biological sequence comparison.

    PubMed

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
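
    A minimal sketch of the O(N) memory idea for the simple gap penalty g = rk follows (two DP rows; the scoring values are illustrative). Recovering the alignment itself, not only its score, additionally needs Hirschberg's divide-and-conquer refinement, and the affine case g = q+rk needs Gotoh's extension.

        def alignment_score(a, b, match=5, mismatch=-4, r=-7):
            """Needleman-Wunsch global alignment score with linear gap penalty:
            O(len(a)*len(b)) time, O(len(b)) space."""
            prev = [j * r for j in range(len(b) + 1)]
            for i, ca in enumerate(a, 1):
                curr = [i * r]
                for j, cb in enumerate(b, 1):
                    s = match if ca == cb else mismatch
                    curr.append(max(prev[j - 1] + s,    # substitution
                                    prev[j] + r,        # gap in b
                                    curr[j - 1] + r))   # gap in a
                prev = curr
            return prev[-1]

        print(alignment_score("GATTACA", "GCATGCU"))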

  13. Simulation of root forms using cellular automata model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winarno, Nanang, E-mail: nanang-winarno@upi.edu; Prima, Eka Cahya; Afifah, Ratih Mega Ayu

    This research aims to produce a simulation program for root forms using a cellular automata model. Stephen Wolfram, in his book "A New Kind of Science", discusses formation rules based on statistical analysis. In accordance with Stephen Wolfram's investigation, the research develops a basic computer program using the Delphi 7 programming language. To the best of our knowledge, there is no previous research developing a simulation describing root forms using the cellular automata model, compared to the natural root form, with stones added as a disturbance. The results show that (1) the simulation used four rules, the results of the program were compared to natural photographs, and each rule produced a different root form; (2) the stone disturbances prevent root growth, and the multiplication of root forms was successfully modeled. For this purpose, the research added stones, with a size of 120 cells, placed randomly in the soil. As in nature, stones cannot be penetrated by plant roots. The results suggest that the program can be further developed to simulate root forms with 50 variations.
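
    A toy reconstruction of the idea follows (the growth rule and grid sizes below are invented for illustration; they are not the paper's four rules): root cells spread over a soil grid with a downward bias, and cells occupied by stones can never be entered.

        import numpy as np

        EMPTY, ROOT, STONE = 0, 1, 2

        def grow_root(width=21, depth=40, n_stones=30, steps=400, seed=0):
            rng = np.random.default_rng(seed)
            soil = np.zeros((depth, width), dtype=int)
            soil.flat[rng.integers(0, depth * width, n_stones)] = STONE
            soil[0, width // 2] = ROOT                 # seed cell at the surface
            moves = [(1, 0), (1, 0), (0, -1), (0, 1)]  # two "down" entries: downward bias
            for _ in range(steps):
                rows, cols = np.nonzero(soil == ROOT)
                i = rng.integers(len(rows))            # pick a random root cell
                dr, dc = moves[rng.integers(len(moves))]
                r, c = rows[i] + dr, cols[i] + dc
                if 0 <= r < depth and 0 <= c < width and soil[r, c] == EMPTY:
                    soil[r, c] = ROOT                  # stones are never overwritten
            return soil

        print((grow_root() == ROOT).sum(), "root cells grown around the stones")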

  14. Towards the prediction of essential genes by integration of network topology, cellular localization and biological process information

    PubMed Central

    2009-01-01

    Background The identification of essential genes is important for the understanding of the minimal requirements for cellular life and for practical purposes, such as drug design. However, the experimental techniques for essential gene discovery are labor-intensive and time-consuming. Considering these experimental constraints, a computational approach capable of accurately predicting essential genes would be of great value. We therefore present here a machine learning-based computational approach relying on network topological features, cellular localization and biological process information for the prediction of essential genes. Results We constructed a decision tree-based meta-classifier and trained it on datasets with individual and grouped attributes (network topological features, cellular compartments and biological processes) to generate various predictors of essential genes. We showed that the predictors with better performance are those generated by datasets with integrated attributes. Using the predictor with all attributes, i.e., network topological features, cellular compartments and biological processes, we obtained the best predictor of essential genes, which was then used to classify yeast genes with unknown essentiality status. Finally, we generated decision trees by training the J48 algorithm on datasets with all network topological features, cellular localization and biological process information to discover cellular rules for essentiality. We found that the number of protein physical interactions, the nuclear localization of proteins and the number of regulating transcription factors are the most important factors determining gene essentiality. Conclusion We were able to demonstrate that network topological features, cellular localization and biological process information are reliable predictors of essential genes. Moreover, by constructing decision trees based on these data, we could discover cellular rules governing essentiality. PMID:19758426

  15. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.; ,; ,

    2005-01-01

    We explore several different techniques in our quest to improve the overall performance of a genetic-algorithm-calibrated probabilistic cellular automaton. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model, and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.
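
    The Kappa statistic used for scoring is simple to state concretely; a small sketch for categorical prediction grids follows (the labels and arrays are illustrative):

        import numpy as np

        def cohens_kappa(truth, predicted):
            """Cohen's kappa: agreement between two label grids beyond chance."""
            truth, predicted = np.ravel(truth), np.ravel(predicted)
            observed = np.mean(truth == predicted)                 # p_o
            expected = sum(np.mean(truth == c) * np.mean(predicted == c)
                           for c in np.union1d(truth, predicted))  # p_e
            return (observed - expected) / (1 - expected)

        truth = np.array([0, 0, 1, 1, 1, 2, 2, 0])
        pred = np.array([0, 1, 1, 1, 2, 2, 2, 0])
        print(round(cohens_kappa(truth, pred), 3))                 # prints 0.628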

  16. Supporting performance and configuration management of GTE cellular networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Ming; Lafond, C.; Jakobson, G.

    GTE Laboratories, in cooperation with GTE Mobilnet, has developed and deployed PERFEX (PERFormance EXpert), an intelligent system for performance and configuration management of cellular networks. PERFEX assists cellular network performance and radio engineers in the analysis of large volumes of cellular network performance and configuration data. It helps them locate and determine the probable causes of performance problems, and provides intelligent suggestions about how to correct them. The system combines an expert cellular network performance tuning capability with a map-based graphical user interface, data visualization programs, and a set of special cellular engineering tools. PERFEX is in daily use at more than 25 GTE Mobile Switching Centers. Since the first deployment of the system in late 1993, PERFEX has become a major GTE cellular network performance optimization tool.

  17. An Introduction to Programming for Bioscientists: A Python-Based Primer

    PubMed Central

    Mura, Cameron

    2016-01-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language’s usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a “variable,” the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences. PMID:27271528
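
    The capstone computation mentioned above, the Hamming distance between two DNA sequences, is small enough to quote in full as a plain, GUI-free sketch:

        def hamming_distance(seq1, seq2):
            """Number of positions at which two equal-length sequences differ."""
            if len(seq1) != len(seq2):
                raise ValueError("sequences must be the same length")
            return sum(a != b for a, b in zip(seq1, seq2))

        print(hamming_distance("GATTACA", "GACTATA"))   # 2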

  18. An Introduction to Programming for Bioscientists: A Python-Based Primer.

    PubMed

    Ekmekci, Berk; McAnany, Charles E; Mura, Cameron

    2016-06-01

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a "variable," the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.

  19. Bellman's GAP--a language and compiler for dynamic programming in sequence analysis.

    PubMed

    Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert

    2013-03-01

    Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive with carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform of 'real-world' bioinformatics tools. Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics.

  20. Genetic Algorithms and Nucleation in VIH-AIDS transition.

    NASA Astrophysics Data System (ADS)

    Barranon, Armando

    2003-03-01

    The HIV-to-AIDS transition has been modeled via a genetic algorithm that uses the boom-boom principle and where population evolution is simulated with a cellular automaton based on the SIR model. The HIV-to-AIDS transition is signaled by the nucleation of infected cells, and a low probability of infection is obtained for different mutation rates, in agreement with clinical results. A power law is obtained with a critical exponent close to the critical exponents of cubic and spherical percolation, colossal magnetoresistance, the Ising model, and the liquid-gas phase transition in heavy ion collisions. Computations were carried out at the UAM-A Supercomputing Lab, and the author acknowledges financial support from the Division of CBI at UAM-A.

  1. Translational bioinformatics: linking the molecular world to the clinical world.

    PubMed

    Altman, R B

    2012-06-01

    Translational bioinformatics represents the union of translational medicine and bioinformatics. Translational medicine moves basic biological discoveries from the research bench into the patient-care setting and uses clinical observations to inform basic biology. It focuses on patient care, including the creation of new diagnostics, prognostics, prevention strategies, and therapies based on biological discoveries. Bioinformatics involves algorithms to represent, store, and analyze basic biological data, including DNA sequence, RNA expression, and protein and small-molecule abundance within cells. Translational bioinformatics spans these two fields; it involves the development of algorithms to analyze basic molecular and cellular data with an explicit goal of affecting clinical care.

  2. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  3. GCOM-W soil moisture and temperature algorithms and validation

    USDA-ARS?s Scientific Manuscript database

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  4. Variable-Metric Algorithm For Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Frick, James D.

    1989-01-01

    Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints comprises equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.

  5. Vega roll and attitude control system algorithms trade-off study

    NASA Astrophysics Data System (ADS)

    Paulino, N.; Cuciniello, G.; Cruciani, I.; Corraro, F.; Spallotta, D.; Nebula, F.

    2013-12-01

    This paper describes the trade-off study for the selection of the most suitable algorithms for the Roll and Attitude Control System (RACS) within the FPS-A program, aimed at developing the new Flight Program Software of the VEGA launcher. Two algorithms were analyzed: Switching Lines (SL) and Quaternion Feedback Regulation. Using a development simulation tool that models two critical flight phases, the Long Coasting Phase (LCP) and the Payload Release (PLR) phase, both algorithms were assessed with Monte Carlo batch simulations for both phases. The statistical outcomes demonstrate a 100 percent success rate for Quaternion Feedback Regulation and support the choice of this method.

  6. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.

  7. Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs

    DOE PAGES

    Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...

    2016-04-02

    We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.

  8. Application of majority voting and consensus voting algorithms in N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    N-version programming is one of the most common techniques used to improve the reliability of software by building in fault tolerance and redundancy and by decreasing common cause failures. N different equivalent software versions are developed by N different and isolated workgroups working from the same software specifications. The versions solve the same task and return results that have to be compared to determine the correct result. The decisions of the N versions are evaluated by a voting algorithm, the so-called voter. In this paper, two of the most commonly used software voting algorithms, the majority voting algorithm and the consensus voting algorithm, are studied. The distinctive features of N-version programming with majority voting and N-version programming with consensus voting are described. These two algorithms decide on the correct result on the basis of the agreement matrix. However, if the equivalence relation on the agreement matrix is not satisfied, it is impossible to make a decision. It is shown that the agreement matrix can be transformed into an appropriate form by using Boolean compositions when the equivalence relation is satisfied.
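
    A minimal sketch of the majority-voting decision itself follows (the agreement-matrix machinery is not modeled here): a result is accepted only when more than half of the N versions return it, whereas consensus voting would accept the largest agreement group even without an absolute majority.

        from collections import Counter

        def majority_vote(results):
            """results: outputs of N independently developed versions."""
            value, count = Counter(results).most_common(1)[0]
            if count > len(results) / 2:
                return value
            raise RuntimeError("no majority -- unable to decide")

        print(majority_vote([42, 42, 41]))   # 42: two of three versions agree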

  9. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/ infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial result on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.

  10. A ’Multiple Pivoting’ Algorithm for Goal-Interval Programming Formulations.

    DTIC Science & Technology

    1980-03-01

    Research Report CCS 355, "A 'Multiple Pivoting' Algorithm for Goal-Interval Programming Formulations", by R. Armstrong (The University of Texas at Austin), A. Charnes (The University of Texas at Austin), W. Cook (York University, Downsview, Ontario, Canada), and J. Godfrey (Washington, DC), March 1980. ... The main direction of goal programming research has been in formulating models instead of seeking procedures that would provide ...

  11. Mixed-Integer Nonconvex Quadratic Optimization Relaxations and Performance Analysis

    DTIC Science & Technology

    2016-10-11

    Publication excerpts from this report: "... Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization" (W. Bian, X. Chen, and Ye), Math Programming, 149 (2015) 301-327; ... (Chen, Ge, Wang, Ye), Math Programming, 143 (1-2) (2014) 371-383, which resolved an important open question in cardinality constrained ...; "... Statistical Performance, and Algorithmic Theory for Local Solutions" (H. Liu, T. Yao, R. Li, Y. Ye), manuscript, 2nd revision in Math Programming.

  12. Engineering calculations for solving the orbital allotment problem

    NASA Technical Reports Server (NTRS)

    Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.

    1988-01-01

    Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.

  13. A Kind of Nonlinear Programming Problem Based on Mixed Fuzzy Relation Equations Constraints

    NASA Astrophysics Data System (ADS)

    Li, Jinquan; Feng, Shuang; Mi, Honghai

    In this work, a kind of nonlinear programming problem with a non-differentiable objective function and with constraints expressed by a system of mixed fuzzy relation equations is investigated. First, some properties of this kind of optimization problem are obtained. Then, a polynomial-time algorithm for this kind of optimization problem is proposed based on these properties. Furthermore, we show that this algorithm is optimal for the optimization problem considered in this paper. Finally, numerical examples are provided to illustrate our algorithm.

  14. Final evaluation report for the CAPITAL-ITS operational test and demonstration program

    DOT National Transportation Integrated Search

    1997-05-01

    The CAPITAL project was undertaken to assess the viability of using cellular-based traffic probes as a wide area vehicular traffic surveillance technique. From the test, cellular technology demonstrated the technical potential to provide vehicle spee...

  15. Cellular automata segmentation of the boundary between the compacta of vertebral bodies and surrounding structures

    NASA Astrophysics Data System (ADS)

    Egger, Jan; Nimsky, Christopher

    2016-03-01

    Due to the aging population, spinal diseases are becoming more and more common nowadays; e.g., the lifetime risk of osteoporotic fracture is 40% for white women and 13% for white men in the United States. Thus the number of surgical spinal procedures is also increasing with the aging population, and precise diagnosis plays a vital role in reducing complications and recurrence of symptoms. Spinal imaging of the vertebral column is a tedious process subject to interpretation errors. In this contribution, we aim to reduce time and error in vertebral interpretation by applying and studying the GrowCut algorithm for boundary segmentation between the vertebral body compacta and surrounding structures. GrowCut is a competitive region growing algorithm using cellular automata. For our study, vertebral T2-weighted Magnetic Resonance Imaging (MRI) scans were first manually outlined by neurosurgeons. Then, the vertebral bodies were segmented in the medical images by a GrowCut-trained physician using the semi-automated GrowCut algorithm. Afterwards, the results of both segmentation processes were compared using the Dice Similarity Coefficient (DSC) and the Hausdorff Distance (HD), which yielded a DSC of 82.99+/-5.03% and an HD of 18.91+/-7.2 voxels, respectively. In addition, the times were measured during the manual and the GrowCut segmentations, showing that a GrowCut segmentation, with an average time of less than six minutes (5.77+/-0.73), is significantly shorter than pure manual outlining.
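
    For orientation, a compact sketch of the GrowCut cellular-automaton update follows (a simplified 4-neighborhood variant with wrap-around borders on a synthetic image; this is not the authors' implementation): each cell holds a label and a strength, and a neighbor conquers the cell when its strength, attenuated by the intensity difference, exceeds the cell's own.

        import numpy as np

        def growcut(image, labels, strength, n_iter=50):
            """image: float array; labels: int array (0 = unlabeled);
            strength: float array in [0, 1], set to 1.0 at the user seeds."""
            img = image.astype(float)
            max_diff = np.ptp(img) or 1.0
            for _ in range(n_iter):
                new_l, new_s = labels.copy(), strength.copy()
                for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nb_l = np.roll(labels, shift, axis=(0, 1))
                    nb_s = np.roll(strength, shift, axis=(0, 1))
                    nb_c = np.roll(img, shift, axis=(0, 1))
                    g = 1.0 - np.abs(img - nb_c) / max_diff   # attack force
                    win = g * nb_s > new_s                    # neighbor conquers cell
                    new_l[win], new_s[win] = nb_l[win], (g * nb_s)[win]
                labels, strength = new_l, new_s
            return labels

        img = np.zeros((20, 20)); img[5:15, 5:15] = 1.0    # bright square on dark field
        labels = np.zeros(img.shape, dtype=int); strength = np.zeros(img.shape)
        labels[10, 10], strength[10, 10] = 1, 1.0          # object seed
        labels[0, 0], strength[0, 0] = 2, 1.0              # background seed
        print(np.unique(growcut(img, labels, strength), return_counts=True))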

  16. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing.

    PubMed

    Koprowski, Robert

    2014-07-04

    Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, is shown through a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging and tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method, with an error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects, with an error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology, with errors of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (6) measuring the anterior eye chamber, there is an error of 20%; (7) measuring the tooth enamel thickness, an error of 15%; (8) evaluating the mechanical properties of the cornea during pressure measurement, an error of 47%. The paper presents selected vital issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of image acquisition on the problems arising in their analysis is shown through selected examples. It is also indicated which elements of image analysis and processing deserve special attention in their design.

  17. Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Bergmann, E.; Weiler, P.

    1983-01-01

    An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
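
    In modern terms the jet-select formulation is a small linear program; the sketch below poses it with an off-the-shelf solver, using made-up jet acceleration vectors and a made-up rate-change request (the flight code instead used a revised Simplex implementation tailored to the constraints discussed above).

        import numpy as np
        from scipy.optimize import linprog

        # Columns: per-jet angular acceleration vectors (roll, pitch, yaw);
        # the numbers are purely illustrative.
        A = np.array([[ 1.0, -1.0,  0.2,  0.0],
                      [ 0.0,  0.3, -1.0,  1.0],
                      [ 0.5,  0.5,  0.4, -0.8]])
        rate_change = np.array([0.4, -0.2, 0.1])   # requested rate change (rad/s)
        fuel_rate = np.ones(A.shape[1])            # fuel cost per second of firing

        # Minimize total fuel subject to: firings combine to the request, times >= 0.
        res = linprog(c=fuel_rate, A_eq=A, b_eq=rate_change,
                      bounds=[(0, None)] * A.shape[1], method="highs")
        print(res.x)                               # jet firing times (a basic solution)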

  18. Strategic Control Algorithm Development : Volume 4A. Computer Program Report.

    DOT National Transportation Integrated Search

    1974-08-01

    A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...

  19. Strategic Control Algorithm Development : Volume 4B. Computer Program Report (Concluded)

    DOT National Transportation Integrated Search

    1974-08-01

    A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...

  20. Parallel transformation of K-SVD solar image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Youwen; Tian, Yu; Li, Mei

    2017-02-01

    The images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise. Training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and to the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version. A data-parallelism model is used to transform the algorithm. The biggest change is that multiple atoms, rather than a single atom, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easy to port to multi-core platforms.

  1. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for developing logic designs together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  2. Modified Polar-Format Software for Processing SAR Data

    NASA Technical Reports Server (NTRS)

    Chen, Curtis

    2003-01-01

    HMPF is a computer program that implements a modified polar-format algorithm for processing data from spaceborne synthetic-aperture radar (SAR) systems. Unlike prior polar-format processing algorithms, this algorithm is based on the assumption that the radar signal wavefronts are spherical rather than planar. The algorithm provides for resampling of SAR pulse data from slant range to radial distance from the center of a reference sphere that is nominally the local Earth surface. Then, invoking the projection-slice theorem, the resampled pulse data are Fourier-transformed over radial distance, arranged in the wavenumber domain according to the acquisition geometry, resampled to a Cartesian grid, and inverse-Fourier-transformed. The result of this process is the focused SAR image. HMPF, and perhaps other programs that implement variants of the algorithm, may give better accuracy than do prior algorithms for processing strip-map SAR data from high altitudes and may give better phase preservation relative to prior polar-format algorithms for processing spotlight-mode SAR data.

  3. XTALOPT: An open-source evolutionary algorithm for crystal structure prediction

    NASA Astrophysics Data System (ADS)

    Lonie, David C.; Zurek, Eva

    2011-02-01

    The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface.

    Program summary:
    Program title: XTALOPT
    Catalogue identifier: AEGX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL v2.1 or later [1]
    No. of lines in distributed program, including test data, etc.: 36 849
    No. of bytes in distributed program, including test data, etc.: 1 149 399
    Distribution format: tar.gz
    Programming language: C++
    Computer: PCs, workstations, or clusters
    Operating system: Linux
    Classification: 7.7
    External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]
    Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
    Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely available to the scientific community for use and collaboration under the GNU Public License.
    Running time: User dependent. The program runs until stopped by the user.

  4. DEGAS: Dynamic Exascale Global Address Space Programming Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds, and/or communication-avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.

  5. Accurate multiple sequence-structure alignment of RNA sequences using combinatorial optimization.

    PubMed

    Bauer, Markus; Klau, Gunnar W; Reinert, Knut

    2007-07-27

    The discovery of functional non-coding RNA sequences has led to an increasing interest in algorithms related to RNA analysis. Traditional sequence alignment algorithms, however, fail at computing reliable alignments of low-homology RNA sequences. The spatial conformation of RNA sequences largely determines their function, and therefore RNA alignment algorithms have to take structural information into account. We present a graph-based representation for sequence-structure alignments, which we model as an integer linear program (ILP). We sketch how we compute an optimal or near-optimal solution to the ILP using methods from combinatorial optimization, and present results on a recently published benchmark set for RNA alignments. The implementation of our algorithm yields better alignments in terms of two published scores than the other programs that we tested; this is especially the case with an increasing number of input sequences. Our program LARA is freely available for academic purposes from http://www.planet-lisa.net.

  6. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    2000-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  7. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). Algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  8. Cellular Plasticity and Heterogeneity of EGFR Mutant Lung Cancer

    DTIC Science & Technology

    2015-09-01

    Award Number: W81XWH-14-1-0177. Title: Cellular Plasticity and Heterogeneity of EGFR Mutant Lung Cancer. Principal Investigator: Katerina Politi. Distribution: Unlimited. Abstract: Phenotypic changes have been observed in EGFR mutant lung cancers that become resistant to targeted ...

  9. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    PubMed

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

    Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, to achieve a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute all the calculations required to solve the system of ODEs across the GPU cores. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE is able to perform fast simulations of even larger models, for which the tested CPU implementation of LSODA failed to reach termination. LASSIE is therefore expected to make an important breakthrough in Systems Biology applications, enabling faster and more in-depth computational analyses of large-scale models of complex biological systems.
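
    A minimal CPU-side sketch of the same pipeline follows (this is not LASSIE itself): build mass-action ODEs from a toy reaction list and integrate them with the LSODA method that the paper benchmarks against; the model and rate constants below are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy enzyme kinetics: S + E -> ES (k1), ES -> S + E (k2), ES -> P + E (k3)
        species = ["S", "E", "ES", "P"]
        reactions = [                        # (reactants, products, rate constant)
            ({"S": 1, "E": 1}, {"ES": 1}, 0.1),
            ({"ES": 1}, {"S": 1, "E": 1}, 0.05),
            ({"ES": 1}, {"P": 1, "E": 1}, 0.2),
        ]
        idx = {s: i for i, s in enumerate(species)}

        def odes(t, x):
            """Right-hand side generated from the reaction list (mass-action)."""
            dx = np.zeros_like(x)
            for reactants, products, k in reactions:
                rate = k * np.prod([x[idx[s]] ** n for s, n in reactants.items()])
                for s, n in reactants.items():
                    dx[idx[s]] -= n * rate
                for s, n in products.items():
                    dx[idx[s]] += n * rate
            return dx

        sol = solve_ivp(odes, (0.0, 100.0), [10.0, 5.0, 0.0, 0.0], method="LSODA")
        print(dict(zip(species, sol.y[:, -1].round(3))))   # near-complete conversion to P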

  10. Evaluation of a Text Compression Algorithm Against Computer-Aided Instruction (CAI) Material.

    ERIC Educational Resources Information Center

    Knight, Joseph M., Jr.

    This report describes the initial evaluation of a text compression algorithm against computer-assisted instruction (CAI) material. A review of some concepts related to statistical text compression is followed by a detailed description of a practical text compression algorithm. A simulation of the algorithm was programmed and used to obtain…

  11. Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation

    NASA Astrophysics Data System (ADS)

    Du, Jiaoman; Yu, Lean; Li, Xiang

    2016-04-01

    Hazardous materials transportation is an important and pressing issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.

  12. Mechanisms of Undersensing by a Noise Detection Algorithm That Utilizes Far-Field Electrograms With Near-Field Bandpass Filtering.

    PubMed

    Koneru, Jayanthi N; Swerdlow, Charles D; Ploux, Sylvain; Sharma, Parikshit S; Kaszala, Karoly; Tan, Alex Y; Huizar, Jose F; Vijayaraman, Pugazhendi; Kenigsberg, David; Ellenbogen, Kenneth A

    2017-02-01

    Implantable cardioverter defibrillators (ICDs) must establish a balance between delivering appropriate shocks for ventricular tachyarrhythmias and withholding inappropriate shocks for lead-related oversensing ("noise"). To improve the specificity of ICD therapy, manufacturers have developed proprietary algorithms that detect lead noise. The SecureSense™ RV Lead Noise discrimination algorithm (St. Jude Medical, St. Paul, MN, USA) is designed to differentiate oversensing due to lead failure from ventricular tachyarrhythmias and withhold therapies in the presence of sustained lead-related oversensing. We report 5 patients in whom appropriate ICD therapy was withheld due to the operation of the SecureSense algorithm and explain the mechanism for inhibition of therapy in each case. Limitations of algorithms designed to increase ICD therapy specificity, especially for the SecureSense algorithm, are analyzed. The SecureSense algorithm can withhold appropriate therapies for ventricular arrhythmias due to design and programming limitations. Electrophysiologists should have a thorough understanding of the SecureSense algorithm before routinely programming it and understand the implications for ventricular arrhythmia misclassification. © 2016 Wiley Periodicals, Inc.

  13. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    PubMed

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics.

  14. Runtime Analysis of Linear Temporal Logic Specifications

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus

    2001-01-01

    This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.

  15. To Be or Not to Be: Controlling Cellular Suicide | Center for Cancer Research

    Cancer.gov

    When a cell is damaged and can no longer function properly, a complex series of molecular steps is triggered that allows it to die in a controlled manner. This cellular suicide is called programmed cell death, or apoptosis.

  16. Experiences in using the CYBER 203 for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1982-01-01

    In this paper, the authors report on some of their experiences modifying two three-dimensional transonic flow programs (FLO22 and FLO27) for use on the NASA Langley Research Center CYBER 203. Both of the programs discussed were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine, including: (1) leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, (2) vectorizing parts of the existing algorithm in the program, and (3) incorporating a new vectorizable algorithm (ZEBRA I or ZEBRA II) in the program.

  17. Clinically oriented device programming in bradycardia patients: part 1 (sinus node disease). Proposals from AIAC (Italian Association of Arrhythmology and Cardiac Pacing).

    PubMed

    Ziacchi, Matteo; Palmisano, Pietro; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe

    2018-04-01

    : Modern pacemakers have an increasing number of programmable parameters and specific algorithms designed to optimize pacing therapy in relation to the individual characteristics of patients. When choosing the most appropriate pacemaker type and programming, the following variables must be taken into account: the type of bradyarrhythmia at the time of pacemaker implantation; the cardiac chamber requiring pacing, and the percentage of pacing actually needed to correct the rhythm disorder; the possible association of multiple rhythm disturbances and conduction diseases; the evolution of conduction disorders during follow-up. The goals of device programming are to preserve or restore the heart rate response to metabolic and hemodynamic demands; to maintain physiological conduction; to maximize device longevity; to detect, prevent, and treat atrial arrhythmia. In patients with sinus node disease, the optimal pacing mode is DDDR. Based on all the available evidence, in this setting, we consider appropriate the activation of the following algorithms: rate responsive function in patients with chronotropic incompetence; algorithms to maximize intrinsic atrioventricular conduction in the absence of atrioventricular blocks; mode-switch algorithms; algorithms for autoadaptive management of the atrial pacing output; algorithms for the prevention and treatment of atrial tachyarrhythmias in the subgroup of patients with atrial tachyarrhythmias/atrial fibrillation. The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker setting in relation to patients' clinical features.

  18. Secondary Education Students' Difficulties in Algorithmic Problems with Arrays: An Analysis Using the SOLO Taxonomy

    ERIC Educational Resources Information Center

    Vrachnos, Euripides; Jimoyiannis, Athanassios

    2017-01-01

    Developing students' algorithmic and computational thinking is currently a major objective for primary and secondary education in many countries around the globe. Literature suggests that students face various difficulties in programming processes because of their mental models of basic programming constructs. Arrays constitute the first…

  19. Parallel computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhr, L.

    1987-01-01

    This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.

  20. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 2: Mission payloads subsystem description

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: The mission payloads program; and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.

  1. Numerical Modeling of One-Dimensional Steady-State Flow and Contaminant Transport in a Horizontally Heterogeneous Unconfined Aquifer with an Uneven Base

    EPA Science Inventory

    Algorithms and a short description of the D1_Flow program for numerical modeling of one-dimensional steady-state flow in horizontally heterogeneous aquifers with uneven sloping bases are presented. The algorithms are based on the Dupuit-Forchheimer approximations. The program per...

  2. Validation of the GCOM-W SCA and JAXA soil moisture algorithms

    USDA-ARS?s Scientific Manuscript database

    Satellite-based remote sensing of soil moisture has matured over the past decade as a result of the Global Change Observation Mission-Water (GCOM-W) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  3. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is an increasingly necessary part of program design courses in college education. However, some students plagiarize the work of others, submitting it with only minor modifications as their own homework. It is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…

  4. An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu

    In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general cases, we propose a dynamic programming based algorithm and prove it is optimal in special cases where the correlation exists only between p immediately adjacent observations.

  5. Refraction law and Fermat principle: a project using the ant colony optimization algorithm for undergraduate students in physics

    NASA Astrophysics Data System (ADS)

    Vuong, Q. L.; Rigaut, C.; Gossuin, Y.

    2018-07-01

    A programming project for undergraduate students in physics is proposed in this work. Its goal is to check the Snell-Descartes law of refraction using the Fermat principle and the ant colony optimization algorithm. The project involves basic mathematics and physics and is adapted to students with basic programming skills. More advanced tools, such as parallelization or object-oriented programming, can be used (but are not mandatory), which makes the project also suitable for more experienced students. We propose two tests to validate the program. Our algorithm is able to find solutions which are close to the theoretical predictions. Two quantities are defined to study its convergence and the quality of the solutions. It is also shown that the choice of the values of the simulation parameters is important to efficiently obtain precise results.
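
    A minimal version of such a project might look like the Python sketch below (an illustrative reconstruction, not the authors' code; the geometry, speeds and colony parameters are invented). Ants repeatedly pick a crossing point on the interface with probability proportional to pheromone, the fastest ant of each iteration reinforces its choice, and the surviving crossing point can be checked against the Snell-Descartes law:

        # Ant-colony-style search for the Fermat least-time path across a flat
        # interface at y = 0 (illustrative parameters throughout).
        import numpy as np

        rng = np.random.default_rng(0)
        A, B = np.array([0.0, 1.0]), np.array([2.0, -1.0])  # points in media 1, 2
        v1, v2 = 1.0, 0.7                                   # light speeds

        xs = np.linspace(0.0, 2.0, 201)   # candidate crossing points on y = 0
        tau = np.ones_like(xs)            # pheromone levels

        def travel_time(x):
            return np.hypot(x - A[0], A[1]) / v1 + np.hypot(B[0] - x, B[1]) / v2

        for _ in range(300):                                       # colony iterations
            picks = rng.choice(len(xs), size=20, p=tau / tau.sum())  # 20 ants
            best = min(picks, key=lambda i: travel_time(xs[i]))
            tau[best] += 1.0 / travel_time(xs[best])  # elitist deposit: best ant only
            tau *= 0.99                               # pheromone evaporation

        x = xs[np.argmax(tau)]
        sin_i = (x - A[0]) / np.hypot(x - A[0], A[1])
        sin_r = (B[0] - x) / np.hypot(B[0] - x, B[1])
        print("sin(i)/sin(r) =", sin_i / sin_r, "vs v1/v2 =", v1 / v2)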

  6. An effective algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2012-08-01

    Numerical values of the Chandrasekhar function are needed with high accuracy in evaluations of theoretical models describing electron transport in condensed matter. An algorithm for such calculations should be as fast as possible and also accurate; e.g., an accuracy of 10 decimal digits is needed for some applications. Two of the integral representations of the Chandrasekhar function are prospective for constructing such an algorithm, but suitable transformations are needed to obtain a rapidly converging quadrature. A mixed algorithm is proposed in which the Chandrasekhar function is calculated from two algorithms, depending on the value of one of the arguments.
    Catalogue identifier: AEMC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 567
    No. of bytes in distributed program, including test data, etc.: 4444
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Any computer with a FORTRAN 90 compiler
    Operating system: Linux, Windows 7, Windows XP
    RAM: 0.6 MB
    Classification: 2.4, 7.2
    Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter.
    Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is constructed by combining these two algorithms and by selecting ranges of the argument ω in which performance is the fastest.
    Restrictions: The two input parameters of the Chandrasekhar function, x and ω (notation used in the code), are restricted to the ranges 0⩽x⩽1 and 0⩽ω⩽1, which is sufficient in numerous applications.
    Unusual features: The program uses the Romberg quadrature for integration. This quadrature is applicable to integrands that satisfy several requirements (the integrand does not vary rapidly and does not change sign in the integration interval; furthermore, the integrand is finite at the endpoints). Consequently, the analyzed integrands were transformed so that these requirements were satisfied. In effect, one can conveniently control the accuracy of integration. Although the desired fractional accuracy was set at 10^-10, the obtained accuracy of the Chandrasekhar function was much higher, typically 13 decimal places.
    Running time: Between 0.7 and 5 milliseconds for one pair of arguments of the Chandrasekhar function.
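
    For orientation, the Python sketch below (an illustration, not the distributed Fortran 90 code) evaluates the Chandrasekhar H-function for isotropic scattering by fixed-point iteration of the standard integral relation 1/H(μ) = √(1-ω) + (ω/2)∫₀¹ μ′H(μ′)/(μ+μ′) dμ′; the grid size and iteration count are arbitrary choices, and the accuracy falls far short of the 10 decimal places targeted by the published algorithm:

        # Fixed-point iteration for the isotropic-scattering H-function on a
        # midpoint quadrature grid (illustrative accuracy only; mu > 0 assumed).
        import numpy as np

        def chandrasekhar_H(mu, w=0.9, n=400, iters=200):
            m = (np.arange(n) + 0.5) / n          # midpoint nodes on (0, 1)
            dm = 1.0 / n
            H = np.ones_like(m)
            for _ in range(iters):
                integral = (m * H / (m[:, None] + m[None, :])).sum(axis=1) * dm
                H = 1.0 / (np.sqrt(1.0 - w) + 0.5 * w * integral)
            return 1.0 / (np.sqrt(1.0 - w) + 0.5 * w * (m * H / (mu + m)).sum() * dm)

        print(chandrasekhar_H(0.5, w=0.9))   # H(0.5) for single-scattering albedo 0.9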

  7. Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1983-01-01

    Experiences are discussed for modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.

  8. Automated glycopeptide analysis—review of current state and future directions

    PubMed Central

    Dallas, David C.; Martin, William F.; Hua, Serenus

    2013-01-01

    Glycosylation of proteins is involved in immune defense, cell–cell adhesion, cellular recognition and pathogen binding and is one of the most common and complex post-translational modifications. Science is still struggling to assign detailed mechanisms and functions to this form of conjugation. Even the structural analysis of glycoproteins—glycoproteomics—remains in its infancy due to the scarcity of high-throughput analytical platforms capable of determining glycopeptide composition and structure, especially platforms for complex biological mixtures. Glycopeptide composition and structure can be determined with high mass-accuracy mass spectrometry, particularly when combined with chromatographic separation, but the sheer volume of generated data necessitates computational software for interpretation. This review discusses the current state of glycopeptide assignment software—advances made to date and issues that remain to be addressed. The various software and algorithms developed so far provide important insights into glycoproteomics. However, there is currently no freely available software that can analyze spectral data in batch and unambiguously determine glycopeptide compositions for N- and O-linked glycopeptides from relevant biological sources such as human milk and serum. Few programs are capable of aiding in structural determination of the glycan component. To significantly advance the field of glycoproteomics, analytical software and algorithms are required that: (i) solve for both N- and O-linked glycopeptide compositions, structures and glycosites in biological mixtures; (ii) are high-throughput and process data in batches; (iii) can interpret mass spectral data from a variety of sources and (iv) are open source and freely available. PMID:22843980

  9. Reliability analysis of laminated CMC components through shell subelement techniques

    NASA Technical Reports Server (NTRS)

    Starlinger, A.; Duffy, S. F.; Gyekenyesi, J. P.

    1992-01-01

    An updated version of the integrated design program C/CARES (composite ceramic analysis and reliability evaluation of structures) was developed for the reliability evaluation of CMC laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The new interface program for the finite-element code MARC also includes the option of using hybrid laminates and allows for variations in temperature fields throughout the component.

  10. STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations]

    NASA Technical Reports Server (NTRS)

    Shah, S. N.

    1981-01-01

    The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.

  11. Determining the distribution of probes between different subcellular locations through automated unmixing of subcellular patterns.

    PubMed

    Peng, Tao; Bonamy, Ghislain M C; Glory-Afshar, Estelle; Rines, Daniel R; Chanda, Sumit K; Murphy, Robert F

    2010-02-16

    Many proteins or other biological macromolecules are localized to more than one subcellular structure. The fraction of a protein in different cellular compartments is often measured by colocalization with organelle-specific fluorescent markers, requiring availability of fluorescent probes for each compartment and acquisition of images for each in conjunction with the macromolecule of interest. Alternatively, tailored algorithms allow finding particular regions in images and quantifying the amount of fluorescence they contain. Unfortunately, this approach requires extensive hand-tuning of algorithms and is often cell type-dependent. Here we describe a machine-learning approach for estimating the amount of fluorescent signal in different subcellular compartments without hand tuning, requiring only the acquisition of separate training images of markers for each compartment. In testing on images of cells stained with mixtures of probes for different organelles, we achieved a 93% correlation between estimated and expected amounts of probes in each compartment. We also demonstrated that the method can be used to quantify drug-dependent protein translocations. The method enables automated and unbiased determination of the distributions of protein across cellular compartments, and will significantly improve imaging-based high-throughput assays and facilitate proteome-scale localization efforts.
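
    The flavor of the approach can be conveyed by a drastically simplified Python sketch (the published method learns object-level pattern features by machine learning; the signatures and mixture below are invented). Each compartment contributes a feature signature estimated from its training images, and the mixing fractions in a test image are recovered by non-negative least squares:

        # Unmix a mixed pattern into per-compartment fractions (toy numbers).
        import numpy as np
        from scipy.optimize import nnls

        # Columns: one feature signature per compartment, as would be learned
        # from single-marker training images (values are illustrative).
        signatures = np.array([
            [0.8, 0.1, 0.2],
            [0.1, 0.7, 0.1],
            [0.1, 0.2, 0.7],
        ])  # columns: e.g. nucleus, mitochondria, lysosomes

        # A probe distributed 60/40 between the first and third compartments.
        mixed = 0.6 * signatures[:, 0] + 0.4 * signatures[:, 2]

        fractions, _ = nnls(signatures, mixed)   # non-negative least squares
        print(fractions / fractions.sum())       # ~[0.6, 0.0, 0.4]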

  12. 2D photonic crystal complete band gap search using a cyclic cellular automaton refinement

    NASA Astrophysics Data System (ADS)

    González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.

    2014-11-01

    We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE) used to perform an ordered search of full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is proposed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such an array. A block-iterative frequency-domain method was used to compute the FPBGs on a PC, when present. DE has proved to be useful in combinatorial problems and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information of suboptimal configurations and we made a statistical study of how it is affected by disorder in the borders of the structure compared with a previous work that uses a genetic algorithm.

  13. Optimal tracking control for a class of nonlinear discrete-time systems with time delays based on heuristic dynamic programming.

    PubMed

    Zhang, Huaguang; Song, Ruizhuo; Wei, Qinglai; Zhang, Tieyan

    2011-12-01

    In this paper, a novel heuristic dynamic programming (HDP) iteration algorithm is proposed to solve the optimal tracking control problem for a class of nonlinear discrete-time systems with time delays. The novel algorithm contains state updating, control policy iteration, and performance index iteration. To obtain the optimal states, a "backward iteration" is applied to the state updating step. Two neural networks are used to approximate the performance index function and compute the optimal control policy, facilitating the implementation of the HDP iteration algorithm. Finally, we present two examples to demonstrate the effectiveness of the proposed HDP iteration algorithm.

  14. Performance evaluation of power control algorithms in wireless cellular networks

    NASA Astrophysics Data System (ADS)

    Temaneh-Nyah, C.; Iita, V.

    2014-10-01

    Power control in a mobile communication network aims to set the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area, with the network base stations and mobile stations defined by their geographical coordinates, and the mobility of the mobile stations was accounted for. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
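
    As an example of the class of algorithms compared in such studies, the Python sketch below implements a classic distributed uplink power-control iteration of the Foschini-Miljanic type (not necessarily one of the three algorithms evaluated in the paper; the link gains, noise level and SINR target are invented). Each mobile rescales its power by the ratio of target to measured SINR:

        # Distributed power control: p <- (target_SINR / measured_SINR) * p.
        import numpy as np

        G = np.array([[1.00, 0.05, 0.05],
                      [0.05, 1.00, 0.05],
                      [0.05, 0.05, 1.00]])  # G[i, j]: gain from mobile j to base i
        noise = 0.01
        target = 4.0                        # required SINR (QoS) for every user
        p = np.full(3, 0.1)                 # initial transmit powers

        for _ in range(50):
            signal = np.diag(G) * p
            interference = G @ p - signal + noise
            sinr = signal / interference
            p = (target / sinr) * p         # per-user multiplicative update

        print("powers:", p)                 # converges when the target is feasible
        print("SINRs:", sinr)               # -> approximately the 4.0 target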

  15. ePMV embeds molecular modeling into professional animation software environments.

    PubMed

    Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2011-03-09

    Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. ePMV Embeds Molecular Modeling into Professional Animation Software Environments

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2011-01-01

    SUMMARY Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. PMID:21397181

  17. A comprehensive literature review of haplotyping software and methods for use with unrelated individuals.

    PubMed

    Salem, Rany M; Wessel, Jennifer; Schork, Nicholas J

    2005-03-01

    Interest in the assignment and frequency analysis of haplotypes in samples of unrelated individuals has increased immeasurably as a result of the emphasis placed on haplotype analyses by, for example, the International HapMap Project and related initiatives. Although there are many available computer programs for haplotype analysis applicable to samples of unrelated individuals, many of these programs have limitations and/or very specific uses. In this paper, the key features of available haplotype analysis software for use with unrelated individuals, as well as pooled DNA samples from unrelated individuals, are summarised. Programs for haplotype analysis were identified through keyword searches on PUBMED and various internet search engines, a review of citations from retrieved papers and personal communications, up to June 2004. Priority was given to functioning computer programs, rather than theoretical models and methods. The available software was considered in light of a number of factors: the algorithm(s) used, algorithm accuracy, assumptions, the accommodation of genotyping error, implementation of hypothesis testing, handling of missing data, software characteristics and web-based implementations. Review papers comparing specific methods and programs are also summarised. Forty-six haplotyping programs were identified and reviewed. The programs were divided into two groups: those designed for individual genotype data (a total of 43 programs) and those designed for use with pooled DNA samples (a total of three programs). The accuracy of the programs is assessed using various criteria, and the programs are categorised and discussed in light of: algorithm and method, accuracy, assumptions, genotyping error, hypothesis testing, missing data, software characteristics and web implementation. Many available programs have limitations (e.g. some cannot accommodate missing data) and/or are designed with specific tasks in mind (e.g. estimating haplotype frequencies rather than assigning most likely haplotypes to individuals). It is concluded that the selection of an appropriate haplotyping program for analysis purposes should be guided by what is known about the accuracy of estimation, as well as by the limitations and assumptions built into a program.

  18. A Comparative Study of Optimization Algorithms for Engineering Synthesis.

    DTIC Science & Technology

    1983-03-01

    The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first…

  19. Observations on Student Misconceptions--A Case Study of the Build-Heap Algorithm

    ERIC Educational Resources Information Center

    Seppala, Otto; Malmi, Lauri; Korhonen, Ari

    2006-01-01

    Data structures and algorithms are core issues in computer programming. However, learning them is challenging for most students and many of them have various types of misconceptions about how algorithms work. In this study, we discuss the problem of identifying misconceptions about the principles of how algorithms work. Our context is algorithm…
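
    For reference, the textbook bottom-up build-heap procedure at the center of the study can be sketched in a few lines of Python (max-heap variant; the sample data are arbitrary):

        # Bottom-up build-heap: sift down every internal node, last one first.
        def sift_down(a, i, n):
            while True:
                left, right, largest = 2 * i + 1, 2 * i + 2, i
                if left < n and a[left] > a[largest]:
                    largest = left
                if right < n and a[right] > a[largest]:
                    largest = right
                if largest == i:
                    return                    # heap property restored at i
                a[i], a[largest] = a[largest], a[i]
                i = largest

        def build_heap(a):
            for i in range(len(a) // 2 - 1, -1, -1):  # internal nodes, bottom-up
                sift_down(a, i, len(a))

        data = [3, 9, 2, 1, 4, 5]
        build_heap(data)
        print(data)   # a valid max-heap, here [9, 4, 5, 1, 3, 2]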

  20. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  1. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs, where the complexity is polynomial in the number of nodes and edges in the graph but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
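
    The dynamic-programming idea is easiest to see in the special case where the graph is itself a tree; the Python sketch below (an illustration only; INDDGO handles general graphs via tree decompositions) computes a maximum weighted independent set with an include/exclude recursion:

        # Maximum weighted independent set on a tree (toy instance).
        def mwis_tree(adj, weight, root=0):
            def solve(v, parent):
                incl, excl = weight[v], 0          # best with / without v
                for u in adj[v]:
                    if u == parent:
                        continue
                    ci, ce = solve(u, v)
                    incl += ce                     # v chosen: children excluded
                    excl += max(ci, ce)            # v not chosen: children free
                return incl, excl
            return max(solve(root, -1))

        adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
        weight = {0: 1, 1: 4, 2: 2, 3: 3, 4: 3}
        print(mwis_tree(adj, weight))   # 8, achieved by vertices {2, 3, 4}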

  2. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    NASA Technical Reports Server (NTRS)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling and analysis. The algorithms developed thus far are adequate and have been proven successful for several preliminary and fundamental applications, such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  3. Visual Inference Programming

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter

    2002-01-01

    The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.

  4. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using surjective mapping the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain the family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case after some simplification we obtain Dikin's algorithm, the affine scaling algorithm and the generalized primal dual interior point linear programming algorithm.
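
    A bare-bones Python sketch of the primal affine-scaling (Dikin-type) iteration for a standard-form LP is given below (an illustration of the idea only, not the authors' derivation; the LP instance and step-length rule are invented):

        # Affine scaling for: minimize c@x subject to A@x = b, x > 0.
        import numpy as np

        def affine_scaling(A, b, c, x, iters=50, step=0.9):
            for _ in range(iters):
                D2 = np.diag(x ** 2)                           # scale by iterate
                w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
                r = c - A.T @ w                                # reduced costs
                dx = -D2 @ r                                   # search direction
                neg = dx < 0
                if not neg.any():
                    break
                x = x + step * np.min(-x[neg] / dx[neg]) * dx  # stay interior
            return x

        # min -x1 - 2*x2 s.t. x1 + x2 <= 4, x1 + 3*x2 <= 6 (slacks appended).
        A = np.array([[1.0, 1.0, 1.0, 0.0],
                      [1.0, 3.0, 0.0, 1.0]])
        b = np.array([4.0, 6.0])
        c = np.array([-1.0, -2.0, 0.0, 0.0])
        x0 = np.array([1.0, 1.0, 2.0, 2.0])       # strictly feasible start
        print(affine_scaling(A, b, c, x0))        # -> approximately [3, 1, 0, 0]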

  5. Clinically oriented device programming in bradycardia patients: part 2 (atrioventricular blocks and neurally mediated syncope). Proposals from AIAC (Italian Association of Arrhythmology and Cardiac Pacing).

    PubMed

    Palmisano, Pietro; Ziacchi, Matteo; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe

    2018-04-01

    : The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker setting in relation to patients' clinical features. In part 2, criteria for pacemaker choice and programming in atrioventricular blocks and neurally mediated syncope are proposed. The atrioventricular blocks can be paroxysmal or persistent, isolated or associated with sinus node disease. Neurally mediated syncope can be related to carotid sinus syndrome or cardioinhibitory vasovagal syncope. In sinus rhythm, with persistent atrioventricular block, we considered appropriate the activation of mode-switch algorithms, and algorithms for auto-adaptive management of the ventricular pacing output. If the atrioventricular block is paroxysmal, in addition to algorithms mentioned above, algorithms to maximize intrinsic atrioventricular conduction should be activated. When sinus node disease is associated with atrioventricular block, the activation of rate-responsive function in patients with chronotropic incompetence is appropriate. In permanent atrial fibrillation with atrioventricular block, algorithms for auto-adaptive management of the ventricular pacing output should be activated. If the atrioventricular block is persistent, the activation of rate-responsive function is appropriate. In carotid sinus syndrome, adequate rate hysteresis should be programmed. In vasovagal syncope, specialized sensing and pacing algorithms designed for reflex syncope prevention should be activated.

  6. Impaired ATP6V0A2 expression contributes to Golgi dispersion and glycosylation changes in senescent cells.

    PubMed

    Udono, Miyako; Fujii, Kaoru; Harada, Gakuro; Tsuzuki, Yumi; Kadooka, Keishi; Zhang, Pingbo; Fujii, Hiroshi; Amano, Maho; Nishimura, Shin-Ichiro; Tashiro, Kosuke; Kuhara, Satoru; Katakura, Yoshinori

    2015-11-27

    Many genes and signaling pathways have been found to be involved in the cellular senescence program. In the present study, we have identified 16 senescence-associated genes by differential proteomic analysis of the normal human diploid fibroblast cell line, TIG-1, and focused on ATP6V0A2. The aim of this study is to clarify the role of ATP6V0A2, the causal gene for ARCL2, a syndrome of abnormal glycosylation and impaired Golgi trafficking, in the cellular senescence program. Here we showed that ATP6V0A2 is critical for cellular senescence; impaired expression of ATP6V0A2 disperses the Golgi structure and triggers senescence, suggesting that ATP6V0A2 mediates these processes. FITC-lectin staining and glycoblotting revealed significantly different glycosylation structures in presenescent (young) and senescent (old) TIG-1 cells; reducing ATP6V0A2 expression in young TIG-1 cells yielded structures similar to those in old TIG-1 cells. Our results suggest that senescence-associated impaired expression of ATP6V0A2 triggers changes in Golgi structure and glycosylation in old TIG-1 cells, which demonstrates a role of ATP6V0A2 in the cellular senescence program.

  7. Impaired ATP6V0A2 expression contributes to Golgi dispersion and glycosylation changes in senescent cells

    PubMed Central

    Udono, Miyako; Fujii, Kaoru; Harada, Gakuro; Tsuzuki, Yumi; Kadooka, Keishi; Zhang, Pingbo; Fujii, Hiroshi; Amano, Maho; Nishimura, Shin-Ichiro; Tashiro, Kosuke; Kuhara, Satoru; Katakura, Yoshinori

    2015-01-01

    Many genes and signaling pathways have been found to be involved in the cellular senescence program. In the present study, we have identified 16 senescence-associated genes by differential proteomic analysis of the normal human diploid fibroblast cell line, TIG-1, and focused on ATP6V0A2. The aim of this study is to clarify the role of ATP6V0A2, the causal gene for ARCL2, a syndrome of abnormal glycosylation and impaired Golgi trafficking, in the cellular senescence program. Here we showed that ATP6V0A2 is critical for cellular senescence; impaired expression of ATP6V0A2 disperses the Golgi structure and triggers senescence, suggesting that ATP6V0A2 mediates these processes. FITC-lectin staining and glycoblotting revealed significantly different glycosylation structures in presenescent (young) and senescent (old) TIG-1 cells; reducing ATP6V0A2 expression in young TIG-1 cells yielded structures similar to those in old TIG-1 cells. Our results suggest that senescence-associated impaired expression of ATP6V0A2 triggers changes in Golgi structure and glycosylation in old TIG-1 cells, which demonstrates a role of ATP6V0A2 in the cellular senescence program. PMID:26611489

  8. E-Learning Technologies: Employing Matlab Web Server to Facilitate the Education of Mathematical Programming

    ERIC Educational Resources Information Center

    Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.

    2006-01-01

    This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…
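
    As an example of the kind of algorithm such courseware covers, a compact Python version of Dijkstra's shortest-path algorithm (with an invented graph; not code from webNetPro) is:

        # Dijkstra's algorithm on a weighted digraph stored as adjacency lists.
        import heapq

        def dijkstra(graph, source):
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                      # stale queue entry
                for v, w in graph.get(u, []):
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return dist

        graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 4)], "b": [("t", 1)]}
        print(dijkstra(graph, "s"))   # {'s': 0, 'a': 2, 'b': 3, 't': 4}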

  9. Research on trust-region algorithms for nonlinear programming. Final technical report, 1 January 1990--31 December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, J.E. Jr.; Tapia, R.A.

    The goal of the research was to develop and test effective, robust algorithms for general nonlinear programming (NLP) problems, particularly large or otherwise expensive NLP problems. We discuss the research conducted over the 3-year period Jan. 1990-Dec. 1992. We also describe current and future directions of our research.

  10. GCALIGNER 1.0: an alignment program to compute a multiple sample comparison data matrix from large eco-chemical datasets obtained by GC.

    PubMed

    Dellicour, Simon; Lecocq, Thomas

    2013-10-01

    GCALIGNER 1.0 is a computer program designed to compute a preliminary data comparison matrix from chemical data obtained by GC without MS information. The alignment algorithm is based on the comparison of the retention times of each detected compound in a sample. In this paper, we test the efficiency of GCALIGNER on three datasets of the chemical secretions of bumble bees. The algorithm performs the alignment with a low error rate (<3%). GCALIGNER 1.0 is a useful, simple and free program based on an algorithm that enables the alignment of table-type data from GC. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
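
    The core idea can be sketched in Python as follows (a simplified caricature, not GCALIGNER itself; the tolerance and retention times are invented). Detected compounds are grouped whenever consecutive retention times lie within a tolerance, yielding a sample-by-compound comparison matrix:

        # Greedy retention-time alignment across GC samples (toy data).
        def align(samples, tol=0.1):
            # samples: one sorted list of retention times per sample
            events = sorted((t, s) for s, ts in enumerate(samples) for t in ts)
            groups, current = [], [events[0]]
            for t, s in events[1:]:
                if t - current[-1][0] <= tol:
                    current.append((t, s))       # same compound, same group
                else:
                    groups.append(current)
                    current = [(t, s)]
            groups.append(current)
            matrix = [[0] * len(groups) for _ in samples]
            for g, members in enumerate(groups):
                for _, s in members:
                    matrix[s][g] = 1             # presence/absence entry
            return matrix

        samples = [[5.02, 7.51, 9.80], [5.08, 7.49], [5.05, 9.77]]
        for row in align(samples):
            print(row)   # rows: samples; columns: aligned compounds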

  11. 75 FR 51280 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-19

    ... Special Emphasis Panel; Member Conflict: Cellular and Molecular Aspects of Neurodevelopment. Date... Group; Cellular and Molecular Immunology--A Study Section. Date: September 30-October 1, 2010. Time: [email protected] . (Catalogue of Federal Domestic Assistance Program Nos. 93.306, Comparative Medicine; 93.333...

  12. Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C., Jr.; Sobieszczanski-Sobieski, Jaroslaw; Brown, Samuel

    2002-01-01

    This paper reports on an initial assessment of using a Field-Programmable Gate Array (FPGA) computational device as a new tool for solving structural mechanics problems. A FPGA is an assemblage of binary gates arranged in logical blocks that are interconnected via software in a manner dependent on the algorithm being implemented and can be reprogrammed thousands of times per second. In effect, this creates a computer specialized for the problem that automatically exploits all the potential for parallel computing intrinsic in an algorithm. This inherent parallelism is the most important feature of the FPGA computational environment. It is therefore important that if a problem offers a choice of different solution algorithms, an algorithm of a higher degree of inherent parallelism should be selected. It is found that in structural analysis, an 'analog computer' style of programming, which solves problems by direct simulation of the terms in the governing differential equations, yields a more favorable solution algorithm than current solution methods. This style of programming is facilitated by a 'drag-and-drop' graphic programming language that is supplied with the particular type of FPGA computer reported in this paper. Simple examples in structural dynamics and statics illustrate the solution approach used. The FPGA system also allows linear scalability in computing capability. As the problem grows, the number of FPGA chips can be increased with no loss of computing efficiency due to data flow or algorithmic latency that occurs when a single problem is distributed among many conventional processors that operate in parallel. This initial assessment finds the FPGA hardware and software to be in their infancy in regard to the user conveniences; however, they have enormous potential for shrinking the elapsed time of structural analysis solutions if programmed with algorithms that exhibit inherent parallelism and linear scalability. This potential warrants further development of FPGA-tailored algorithms for structural analysis.

  13. Spatial cluster detection using dynamic programming.

    PubMed

    Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F

    2012-03-25

    The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. We conclude that the dynamic programming algorithm performs on-par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm.

  14. Spatial cluster detection using dynamic programming

    PubMed Central

    2012-01-01

    Background The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. Methods We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. Results When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. Conclusions We conclude that the dynamic programming algorithm performs on-par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm. PMID:22443103

  15. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, the computer-generated hologram, which carries the information of the three primitive images, is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted using the fractional orders combined with the rules of the MLCA. Numerical simulations and experimental display results have been carried out to verify the validity and feasibility of the proposed scheme.
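
    For readers unfamiliar with GS iteration, the standard loop can be sketched in Python with ordinary FFTs as follows (the paper's modified version instead uses three fractional orders in the fractional Fourier domain; the target amplitude here is invented):

        # Standard Gerchberg-Saxton iteration: alternate between the image and
        # Fourier planes, imposing the known amplitude in each plane.
        import numpy as np

        rng = np.random.default_rng(1)
        target = rng.random((64, 64)) + 0.1   # desired output amplitude (toy)
        source = np.ones((64, 64))            # uniform input illumination

        field = source * np.exp(2j * np.pi * rng.random((64, 64)))
        for _ in range(100):
            far = np.fft.fft2(field)
            far = target * np.exp(1j * np.angle(far))      # impose target amplitude
            field = np.fft.ifft2(far)
            field = source * np.exp(1j * np.angle(field))  # impose source amplitude

        hologram_phase = np.angle(field)      # the phase-only hologram estimate
        a = np.abs(np.fft.fft2(field))
        print("amplitude mismatch:", np.linalg.norm(a / a.sum() - target / target.sum()))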

  16. An Arbitrary First Order Theory Can Be Represented by a Program: A Theorem

    NASA Technical Reports Server (NTRS)

    Hosheleva, Olga

    1997-01-01

    How can we represent knowledge inside a computer? For formalized knowledge, classical logic seems to be the most adequate tool. Classical logic is behind all formalisms of classical mathematics, and behind many formalisms used in Artificial Intelligence. There is only one serious problem with classical logic: due to the famous Godel's theorem, classical logic is algorithmically undecidable; as a result, when the knowledge is represented in the form of logical statements, it is very difficult to check whether, based on this statement, a given query is true or not. To make knowledge representations more algorithmic, a special field of logic programming was invented. An important portion of logic programming is algorithmically decidable. To cover knowledge that cannot be represented in this portion, several extensions of the decidable fragments have been proposed. In the spirit of logic programming, these extensions are usually introduced in such a way that even if a general algorithm is not available, good heuristic methods exist. It is important to check whether the already proposed extensions are sufficient, or whether further extensions are necessary. In the present paper, we show that one particular extension, namely, logic programming with classical negation, introduced by M. Gelfond and V. Lifschitz, can represent (in some reasonable sense) an arbitrary first order logical theory.

  17. A synthetic genetic edge detection program.

    PubMed

    Tabor, Jeffrey J; Salis, Howard M; Simpson, Zachary Booth; Chevalier, Aaron A; Levskaya, Anselm; Marcotte, Edward M; Voigt, Christopher A; Ellington, Andrew D

    2009-06-26

    Edge detection is a signal processing algorithm common in artificial intelligence and image recognition programs. We have constructed a genetically encoded edge detection algorithm that programs an isogenic community of E. coli to sense an image of light, communicate to identify the light-dark edges, and visually present the result of the computation. The algorithm is implemented using multiple genetic circuits. An engineered light sensor enables cells to distinguish between light and dark regions. In the dark, cells produce a diffusible chemical signal that diffuses into light regions. Genetic logic gates are used so that only cells that sense light and the diffusible signal produce a positive output. A mathematical model constructed from first principles and parameterized with experimental measurements of the component circuits predicts the performance of the complete program. Quantitatively accurate models will facilitate the engineering of more complex biological behaviors and inform bottom-up studies of natural genetic regulatory networks.
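
    The logic of the program can be mimicked in silico with a few lines of Python (a caricature of the described genetic circuits, not a biological model; the test image and one-cell diffusion radius are invented). Dark cells emit a signal, the signal spreads into neighboring cells, and the AND gate fires only where light and signal coincide:

        # Edge detection by the light-AND-diffusible-signal rule (toy image).
        import numpy as np

        light = np.zeros((9, 9), dtype=bool)
        light[2:7, 2:7] = True            # a bright square on a dark field

        signal = ~light                   # dark cells produce the signal...
        s = signal.copy()                 # ...which diffuses one cell outward
        s[1:, :] |= signal[:-1, :]
        s[:-1, :] |= signal[1:, :]
        s[:, 1:] |= signal[:, :-1]
        s[:, :-1] |= signal[:, 1:]
        signal = s

        edge = light & signal             # genetic AND gate: light plus signal
        print(edge.astype(int))           # 1s trace the light-dark boundary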

  18. Insulin algorithms in the self-management of insulin-dependent diabetes: the interactive 'Apple Juice' program.

    PubMed

    Williams, A G

    1996-01-01

    The 'Apple Juice' program is an interactive diabetes self-management program which runs on a lap-top Macintosh Powerbook 100 computer. The dose-by-dose insulin advisory program was initially designed for children with insulin-dependent (type 1) diabetes mellitus. It utilizes several different insulin algorithms, measurement formulae, and compensation factors for meals, activity, medication and the dawn phenomenon. It was developed to assist the individual with diabetes and/or care providers, in determining specific insulin dosage recommendations throughout a 24 h period. Information technology functions include, but are not limited to automated record keeping, data recall, event reminders, data trend/pattern analyses and education. This paper highlights issues, observations and recommendations surrounding the use of the current version of the software, along with a detailed description of the insulin algorithms and measurement formulae applied successfully with the author's daughter over a six year period.

  19. A Synthetic Genetic Edge Detection Program

    PubMed Central

    Tabor, Jeffrey J.; Salis, Howard; Simpson, Zachary B.; Chevalier, Aaron A.; Levskaya, Anselm; Marcotte, Edward M.; Voigt, Christopher A.; Ellington, Andrew D.

    2009-01-01

    Summary Edge detection is a signal processing algorithm common in artificial intelligence and image recognition programs. We have constructed a genetically encoded edge detection algorithm that programs an isogenic community of E. coli to sense an image of light, communicate to identify the light-dark edges, and visually present the result of the computation. The algorithm is implemented using multiple genetic circuits. An engineered light sensor enables cells to distinguish between light and dark regions. In the dark, cells produce a diffusible chemical signal that diffuses into light regions. Genetic logic gates are used so that only cells that sense light and the diffusible signal produce a positive output. A mathematical model constructed from first principles and parameterized with experimental measurements of the component circuits predicts the performance of the complete program. Quantitatively accurate models will facilitate the engineering of more complex biological behaviors and inform bottom-up studies of natural genetic regulatory networks. PMID:19563759

  20. An interactive approach based on a discrete differential evolution algorithm for a class of integer bilevel programming problems

    NASA Astrophysics Data System (ADS)

    Li, Hong; Zhang, Li; Jiao, Yong-Chang

    2016-07-01

    This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued or continuous decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with the complementarity constraints, and then the smoothing technique is applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained, and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem only with inequality constraints is handled by using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.

  1. Flight Evaluation of an Aircraft with Side and Center Stick Controllers and Rate-Limited Ailerons

    NASA Technical Reports Server (NTRS)

    Deppe, P. R.; Chalk, C. R.; Shafer, M. F.

    1996-01-01

    As part of an ongoing government and industry effort to study the flying qualities of aircraft with rate-limited control surface actuators, two studies were previously flown to examine an algorithm developed to reduce the tendency for pilot-induced oscillation when rate limiting occurs. This algorithm, when working properly, greatly improved the performance of the aircraft in the first study. In the second study, however, the algorithm did not initially offer as much improvement. The differences between the two studies caused concern. The study detailed in this paper was performed to determine whether the performance of the algorithm was affected by the characteristics of the cockpit controllers. Time delay and flight control system noise were also briefly evaluated. An in-flight simulator, the Calspan Learjet 25, was programmed with a low roll actuator rate limit, and the algorithm was programmed into the flight control system. Side- and center-stick controllers, force and position command signals, a rate-limited feel system, a low-frequency feel system, and a feel system damper were evaluated. The flight program consisted of four flights and 38 evaluations of test configurations. Performance of the algorithm was determined to be unaffected by using side- or center-stick controllers or force or position command signals. The rate-limited feel system performed as well as the rate-limiting algorithm but was disliked by the pilots. The low-frequency feel system and the feel system damper were ineffective. Time delay and noise were determined to degrade the performance of the algorithm.
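
    The rate limiting at the heart of these studies is easy to state in code. A minimal sketch of a slew-rate limiter on a sampled command signal follows; this merely imposes the limit and is not the Calspan compensation algorithm, and the parameter names are illustrative.

      def rate_limit(commands, max_rate, dt):
          """Clamp the slew rate of a sampled command to +/- max_rate (units/s)."""
          out, prev = [], commands[0]
          for c in commands:
              step = max(-max_rate * dt, min(max_rate * dt, c - prev))
              prev += step
              out.append(prev)
          return out

      print(rate_limit([0, 10, 10, 0], max_rate=20.0, dt=0.1))  # [0, 2.0, 4.0, 2.0]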

  2. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    PubMed

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.
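
    The key reduction, turning reverse engineering of a coupled ODE system into independent per-equation fits, can be illustrated with numerical derivatives and least squares over a candidate basis. RODES itself searches the basis with linear genetic programming; this sketch assumes the basis functions are given.

      import numpy as np

      def fit_ode_rhs(t, X, basis):
          """t: (n,) sample times; X: (n, d) state trajectories; basis: list of
          functions mapping a state row to a scalar feature. Returns one
          coefficient vector per state variable, fitted independently."""
          dXdt = np.gradient(X, t, axis=0)              # numerical derivatives
          Phi = np.array([[f(row) for f in basis] for row in X])
          return [np.linalg.lstsq(Phi, dXdt[:, i], rcond=None)[0]
                  for i in range(X.shape[1])]

      # Usage sketch: basis = [lambda r: 1.0, lambda r: r[0], lambda r: r[0] * r[1]]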

  3. Verification of Numerical Programs: From Real Numbers to Floating Point Numbers

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec

    2013-01-01

    Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.

  4. Hull Form Design and Optimization Tool Development

    DTIC Science & Technology

    2012-07-01

    global minimum. The algorithm accomplishes this by using a method known as metaheuristics which allows the algorithm to examine a large area by...further development of these tools including the implementation and testing of a new optimization algorithm, the improvement of a rapid hull form...under the 2012 Naval Research Enterprise Intern Program. 15. SUBJECT TERMS hydrodynamic, hull form, generation, optimization, algorithm

  5. The implement of Talmud property allocation algorithm based on graphic point-segment way

    NASA Astrophysics Data System (ADS)

    Cen, Haifeng

    2017-04-01

    Under the guidance of the Talmud allocation scheme's theory, this paper analyzes the algorithm's implementation process from the perspective of the graphic point-segment method and designs a point-segment Talmud property allocation algorithm. The core of the allocation algorithm is implemented in Java, and a visual interface is built with Android programming.
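
    The underlying Talmud (Aumann-Maschler) division rule is compact: with estate E and claims c_i, apply constrained equal awards to the half-claims when E is at most half the total claim; otherwise pay each half-claim and split the remainder by constrained equal losses. A sketch, with our own bisection helper and function names:

      def _root(f, lo, hi, iters=60):
          """Bisection for an increasing f with f(lo) <= 0 <= f(hi)."""
          for _ in range(iters):
              mid = (lo + hi) / 2
              lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
          return (lo + hi) / 2

      def talmud(estate, claims):
          assert 0 <= estate <= sum(claims)
          half = [c / 2 for c in claims]
          if estate <= sum(half):
              # Constrained equal awards on half-claims: award min(h, r).
              r = _root(lambda r: sum(min(h, r) for h in half) - estate,
                        0, max(half))
              return [min(h, r) for h in half]
          rest = estate - sum(half)
          # Constrained equal losses on half-claims for the remainder.
          lam = _root(lambda l: rest - sum(max(0, h - l) for h in half),
                      0, max(half))
          return [h + max(0, h - lam) for h in half]

      print(talmud(400, [100, 200, 300]))   # ~ [50, 125, 225], the Talmud's answer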

  6. New Funding Opportunity from the Human Biomolecular Atlas Program (HuBMAP)! | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The NIH Common Fund Human Biomolecular Atlas Program (HuBMAP) aims to develop a framework for functionally mapping the human body at cellular resolution to enhance our understanding of the relationship between cellular organization and function. HuBMAP will accelerate the development of the next generation of tools and techniques to generate 3D tissue maps using validated high-content, high-throughput imaging and omics assays, and establish an open data platform for integrating and visualizing data to build multi-dimensional maps.

  7. Wallerian demyelination: chronicle of a cellular cataclysm.

    PubMed

    Tricaud, Nicolas; Park, Hwan Tae

    2017-11-01

    Wallerian demyelination is characteristic of peripheral nerve degeneration after traumatic injury. After axonal degeneration, the myelinating Schwann cell undergoes a stereotypical cellular program that results in the disintegration of the myelin sheath, a process termed demyelination. In this review, we chronologically describe this program, starting from the late and visible features of myelin destruction and going backward to the initial molecular steps that trigger the nuclear reprogramming a few hours after injury. Wallerian demyelination is a wonderful model for the myelin degeneration occurring in the diverse forms of demyelinating peripheral neuropathies that plague human beings.

  8. Laser direct writing of combinatorial libraries of idealized cellular constructs: Biomedical applications

    NASA Astrophysics Data System (ADS)

    Schiele, Nathan R.; Koppes, Ryan A.; Corr, David T.; Ellison, Karen S.; Thompson, Deanna M.; Ligon, Lee A.; Lippert, Thomas K. M.; Chrisey, Douglas B.

    2009-03-01

    The ability to control cell placement and to produce idealized cellular constructs is essential for understanding and controlling intercellular processes and ultimately for producing engineered tissue replacements. We have utilized a novel intra-cavity variable aperture excimer laser operated at 193 nm to reproducibly direct write mammalian cells with micrometer resolution to form a combinatorial array of idealized cellular constructs. We deposited patterns of human dermal fibroblasts, mouse myoblasts, rat neural stem cells, human breast cancer cells, and bovine pulmonary artery endothelial cells to study aspects of collagen network formation, breast cancer progression, and neural stem cell proliferation. Mammalian cells were deposited by matrix assisted pulsed laser evaporation direct write from ribbons composed of a UV transparent quartz coated with either a thin layer of extracellular matrix or triazene as a dynamic release layer, using CAD/CAM control. We demonstrate that through optical imaging and incorporation of a machine vision algorithm, specific cells on the ribbon can be laser deposited in spatial coherence with respect to geometrical arrays and existing cells on the receiving substrate. Having the ability to direct write cells into idealized cellular constructs can help to answer many biomedical questions and advance tissue engineering and cancer research.

  9. A quantitative image cytometry technique for time series or population analyses of signaling networks.

    PubMed

    Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya

    2010-04-01

    Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data on signaling events with high spatio-temporal resolution. A novel technique that allows us to obtain such data is needed for the systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique appears to be a highly quantitative and versatile technique, which can be a convenient replacement for conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.

  10. Optical monitoring of thermal effects in RPE during heating

    NASA Astrophysics Data System (ADS)

    Schuele, G.; Huie, Ph.; Yellachich, D.; Molnar, F. E.; O'Conell-Rodwell, C.; Vitkin, E.; Perelman, L. T.; Palanker, D.

    2005-04-01

    Fast and non-invasive detection of cellular stress is useful for fundamental research and practical applications in medicine and biology. Using Light Scattering Spectroscopy we extract information about changes in refractive index and size of the cellular organelles. Particle sizes down to 50 nm in diameter can be detected using light within the spectral range of 450-850 nm. We monitor the heat-induced sub-cellular structural changes in human RPE cells and, for comparison, in transfected NIH-3T3 cells which express luciferase linked to the heat shock protein (HSP). Using an inverse light-scattering fitting algorithm, we reconstruct the size distribution of the sub-micron organelles from the light scattering spectrum. The most significant (up to 70%) and rapid (20 s) temperature-related changes can be linked to an increase of refractive index of the 160 nm sized mitochondria. The start of this effect coincides with the onset of HSP expression. This technique provides an insight into metabolic processes within organelles larger than 50 nm without exogenous staining and opens doors for non-invasive real-time assessment of cellular stress, which can be used for monitoring of retinal laser treatments like transpupillary thermotherapy or PDT.

  11. A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search

    PubMed Central

    Alba, Enrique; Leguizamón, Guillermo

    2016-01-01

    This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve a cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology. PMID:27403153
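
    The defining ingredient of a cGA is that selection and mating are restricted to a small lattice neighbourhood. A toy sketch of one generation on a toroidal grid of bitstrings follows; the operators and rates are generic choices, not the paper's cGA+SS hybrid.

      import random

      def cga_generation(grid, fitness, rng, pm=0.05):
          """grid: 2D list of bitstrings; mating uses von Neumann neighbours."""
          R, C = len(grid), len(grid[0])
          new = [[None] * C for _ in range(R)]
          for i in range(R):
              for j in range(C):
                  hood = [grid[(i + di) % R][(j + dj) % C]
                          for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))]
                  p1, p2 = sorted(rng.sample(hood, 2), key=fitness, reverse=True)
                  cut = rng.randrange(1, len(p1))       # one-point crossover
                  child = ''.join(b if rng.random() > pm else str(1 - int(b))
                                  for b in p1[:cut] + p2[cut:])  # bit-flip mutation
                  new[i][j] = child if fitness(child) >= fitness(grid[i][j]) \
                      else grid[i][j]                   # elitist replacement
          return new

      rng = random.Random(0)
      grid = [[''.join(rng.choice('01') for _ in range(16)) for _ in range(5)]
              for _ in range(5)]
      fit = lambda s: s.count('1')                      # OneMax toy problem
      for _ in range(20):
          grid = cga_generation(grid, fit, rng)
      print(max(fit(s) for row in grid for s in row))   # approaches 16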

  12. Accurately tracking single-cell movement trajectories in microfluidic cell sorting devices.

    PubMed

    Jeong, Jenny; Frohberg, Nicholas J; Zhou, Enlu; Sulchek, Todd; Qiu, Peng

    2018-01-01

    Microfluidic devices are routinely used to study cellular properties, including the efficient quantification of single-cell biomechanics and label-free cell sorting based on biomechanical properties such as elasticity, viscosity, stiffness, and adhesion. Both quantification and sorting applications require optimal design of the microfluidic devices and mathematical modeling of the interactions between cells, fluid, and the channel of the device. As a first step toward building such a mathematical model, we collected video recordings of cells moving through a ridged microfluidic channel designed to compress and redirect cells according to cell biomechanics. We developed an efficient algorithm that automatically and accurately tracked the cell trajectories in the recordings. We tested the algorithm on recordings of cells with different stiffness, and showed the correlation between cell stiffness and the tracked trajectories. Moreover, the tracking algorithm successfully picked up subtle differences in cell motion when passing through consecutive ridges. The algorithm for accurately tracking cell trajectories paves the way for future efforts to model the flow, forces, and dynamics of cell properties in microfluidics applications.
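
    A minimal version of such trajectory linking is greedy nearest-neighbour assignment between consecutive frames with a distance gate. The paper's algorithm is more elaborate; the threshold below is illustrative.

      import math

      def link_trajectories(frames, max_jump=30.0):
          """frames: list of per-frame lists of (x, y) cell centroids."""
          tracks = [[p] for p in frames[0]]
          for pts in frames[1:]:
              free = list(pts)
              for tr in tracks:
                  if not free:
                      break
                  q = min(free, key=lambda p: math.dist(p, tr[-1]))
                  if math.dist(q, tr[-1]) <= max_jump:  # gate unlikely jumps
                      tr.append(q)
                      free.remove(q)
              tracks += [[p] for p in free]             # leftovers start new tracks
          return tracks

      frames = [[(0, 0), (10, 10)], [(1, 0), (10, 11)], [(2, 1), (11, 11)]]
      print(link_trajectories(frames))
      # [[(0, 0), (1, 0), (2, 1)], [(10, 10), (10, 11), (11, 11)]]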

  13. Probabilistic representation of gene regulatory networks.

    PubMed

    Mao, Linyong; Resat, Haluk

    2004-09-22

    Recent experiments have established unambiguously that biological systems can have significant cell-to-cell variations in gene expression levels even in isogenic populations. Computational approaches to studying gene expression in cellular systems should capture such biological variations for a more realistic representation. In this paper, we present a new fully probabilistic approach to the modeling of gene regulatory networks that allows for fluctuations in the gene expression levels. The new algorithm uses a very simple representation for the genes, and accounts for the repression or induction of the genes and for the biological variations among isogenic populations simultaneously. Because of its simplicity, the introduced algorithm is a very promising approach for modeling large-scale gene regulatory networks. We have tested the new algorithm on a recently bioengineered synthetic gene network library. The good agreement between the computed and the experimental results for this library of networks, and additional tests, demonstrate that the new algorithm is robust and very successful in explaining the experimental data. The simulation software is available upon request. Supplementary material will be made available on the OUP server.
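
    In the same spirit, a toy on/off gene model shows how repression, induction and population variability can coexist in a very simple representation. This is our illustration of the idea, not the authors' algorithm, and the probabilities are invented.

      import random

      def simulate(regulators, steps, state, rng=random.Random(0),
                   p_on=0.9, p_noise=0.05):
          """regulators: dict gene -> list of (regulator, sign), sign +1 or -1.
          A gene tends to be expressed unless its net drive is repressive."""
          history = [dict(state)]
          for _ in range(steps):
              nxt = {}
              for g, regs in regulators.items():
                  drive = sum(sign * state[r] for r, sign in regs)
                  p = p_noise if drive < 0 else p_on    # stochastic update
                  nxt[g] = 1 if rng.random() < p else 0
              state = nxt
              history.append(dict(state))
          return history

      net = {'A': [('B', -1)], 'B': [('A', -1)]}        # mutual repression (toggle)
      print(simulate(net, 5, {'A': 1, 'B': 0})[-1])     # usually stays A-on, B-off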

  14. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    PubMed

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing evolutions of cells. The main idea for tracking is the use of two distance functions: the first computed from the cells in the initial frame and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame and the second one forces them to be close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to 3D + time video analysis, where spatio-temporal tubes are 4D objects.

  15. 76 FR 57066 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    ... personal privacy. Name of Committee: Molecular, Cellular and Developmental Neuroscience Integrated Review, Group, Cellular and Molecular Biology of Glia Study Section. Date: October 14, 2011. Time: 8 a.m. to 7... Domestic Assistance Program Nos. 93.306, Comparative Medicine; 93.333, Clinical Research, 93.306, 93.333...

  16. Profiling pleural effusion cells by a diffraction imaging method

    NASA Astrophysics Data System (ADS)

    Al-Qaysi, Safaa; Hong, Heng; Wen, Yuhua; Lu, Jun Q.; Feng, Yuanming; Hu, Xin-Hua

    2018-02-01

    Assay of cells in pleural effusion (PE) is an important means of disease diagnosis. Conventional cytology of effusion samples, however, has low sensitivity and depends heavily on the expertise of cytopathologists. We applied a polarization diffraction imaging flow cytometry method to effusion cells to investigate their features. Diffraction imaging was performed on 6,000 to 12,000 cells for each effusion cell sample from three patients. After prescreening to remove images of cellular debris and aggregated non-cellular particles, the image textures were extracted with a gray level co-occurrence matrix (GLCM) algorithm. The distribution of the imaged cells in the GLCM parameter space was analyzed with a Gaussian Mixture Model (GMM) to determine the number of clusters among the effusion cells. These results yield insight into the textural features of diffraction images and the related cellular morphology in effusion samples, and can be used toward the development of a label-free method for effusion cell assay.
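
    A compact analogue of this pipeline, GLCM texture features followed by Gaussian-mixture clustering with the component count chosen by BIC, can be sketched with scikit-image and scikit-learn. The paper's actual feature set and model selection details may differ.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
      from sklearn.mixture import GaussianMixture

      def glcm_features(img):
          """img: 2D uint8 image -> (contrast, correlation, energy, homogeneity)."""
          g = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                           levels=256, symmetric=True, normed=True)
          return [graycoprops(g, p).mean()
                  for p in ('contrast', 'correlation', 'energy', 'homogeneity')]

      def cluster_cells(images, max_k=6):
          X = np.array([glcm_features(im) for im in images])
          fits = [GaussianMixture(k, random_state=0).fit(X)
                  for k in range(1, max_k + 1)]
          best = min(fits, key=lambda m: m.bic(X))      # pick cluster count by BIC
          return best.n_components, best.predict(X)

      rng = np.random.default_rng(0)
      base = np.tile(np.arange(0, 256, 8, dtype=np.uint8), (32, 1))
      smooth = [(base + rng.integers(0, 4, base.shape)).astype(np.uint8)
                for _ in range(10)]
      noisy = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(10)]
      print(cluster_cells(smooth + noisy))              # expect 2 clusters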

  17. Methodological considerations for global analysis of cellular FLIM/FRET measurements

    NASA Astrophysics Data System (ADS)

    Adbul Rahim, Nur Aida; Pelet, Serge; Kamm, Roger D.; So, Peter T. C.

    2012-02-01

    Global algorithms can improve the analysis of fluorescence resonance energy transfer (FRET) measurements based on fluorescence lifetime microscopy. However, global analysis of FRET data is also susceptible to experimental artifacts. This work examines several common artifacts and suggests remedial experimental protocols. Specifically, we examined the accuracy of different methods for instrument response extraction and propose an adaptive method based on the mean lifetime of fluorescent proteins. We further examined the effects of image segmentation and a priori constraints on the accuracy of lifetime extraction. Methods to test the applicability of global analysis on cellular data are proposed and demonstrated. The accuracy of global fitting degrades with lower photon count. By systematically tracking the effect of the minimum photon count on lifetime and FRET prefactors when carrying out global analysis, we demonstrate a correction procedure to recover the correct FRET parameters, allowing us to obtain protein interaction information even in dim cellular regions with photon counts as low as 100 per decay curve.

  18. Computer-Aided Structural Engineering (CASE) Project: Investigation and Design of U-Frame Structures Using Program CUFRBC. Volume C. User’s Guide for Channels

    DTIC Science & Technology

    1990-05-01

    1988) or ACI 318-83 (1983). Actual calculations for section strength are made using subroutines taken from the CASE program CSTR (Hamby and Price...validity of the design of their particular structure. Thus, it is essential that the user of the program understand the design algorithm included...modes. However, several restrictions were placed on the design mode to avoid unnecessary complications of the design algorithm for cases rarely

  19. Research on numerical method for multiple pollution source discharge and optimal reduction program

    NASA Astrophysics Data System (ADS)

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

    In this paper, an optimal method for determining a reduction program is proposed, based on a nonlinear optimization algorithm, namely the genetic algorithm. The four main rivers in Jiangsu province, China are selected for reducing the environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and standards in the nearshore district are used to reduce the discharge of multiple river pollutants. The research results of the reduction program are the basis of marine environmental management.

  20. An algorithm for automated layout of process description maps drawn in SBGN.

    PubMed

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specializes in process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes, and extensive use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over the use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. An implementation of our algorithm in Java is available within the ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org. Supplementary data are available at Bioinformatics online.

  2. A unified algorithm for predicting partition coefficients for PBPK modeling of drugs and environmental chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peyret, Thomas; Poulin, Patrick; Krishnan, Kannan, E-mail: kannan.krishnan@umontreal.ca

    The algorithms in the literature focused on predicting tissue:blood partition coefficients (P_tb) for environmental chemicals and tissue:plasma partition coefficients based on total (K_p) or unbound concentration (K_pu) for drugs differ in their consideration of binding to hemoglobin, plasma proteins and charged phospholipids. The objective of the present study was to develop a unified algorithm such that P_tb, K_p and K_pu for both drugs and environmental chemicals could be predicted. The development of the unified algorithm was accomplished by integrating all mechanistic algorithms previously published to compute the PCs. Furthermore, the algorithm was structured in such a way as to facilitate predictions of the distribution of organic compounds at the macro (i.e. whole tissue) and micro (i.e. cells and fluids) levels. The resulting unified algorithm was applied to compute the rat P_tb, K_p or K_pu of muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs as well as ketones, acetate esters, alcohols, aliphatic hydrocarbons, aromatic hydrocarbons and ethers. The unified algorithm reproduced adequately the values predicted previously by the published algorithms for a total of 142 drugs and chemicals. The sensitivity analysis demonstrated the relative importance of the various compound properties reflective of specific mechanistic determinants relevant to prediction of PC values of drugs and environmental chemicals. Overall, the present unified algorithm uniquely facilitates the computation of macro and micro level PCs for developing organ and cellular-level PBPK models for both chemicals and drugs.

  3. Direct evaluation of fault trees using object-oriented programming techniques

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1989-01-01

    Object-oriented programming techniques are used in an algorithm for the direct evaluation of fault trees. The algorithm combines a simple bottom-up procedure for trees without repeated events with a top-down recursive procedure for trees with repeated events. The object-oriented approach results in a dynamic modularization of the tree at each step in the reduction process. The algorithm reduces the number of recursive calls required to solve trees with repeated events and calculates intermediate results as well as the solution of the top event. The intermediate results can be reused if part of the tree is modified. An example is presented in which the results of the algorithm implemented with conventional techniques are compared to those of the object-oriented approach.
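
    The structure described, a bottom-up product rule for independent subtrees combined with a top-down recursive treatment of repeated events, can be sketched via Shannon expansion: condition on each repeated event and recurse. This is our reconstruction of the idea, not the paper's code.

      from collections import Counter

      class Event:
          def __init__(self, p): self.p = p

      class Gate:
          def __init__(self, kind, children): self.kind, self.children = kind, children

      def _bottom_up(node, fixed):
          """Product rule; exact only when all remaining events are independent."""
          if isinstance(node, Event):
              return fixed.get(node, node.p)
          ps = [_bottom_up(c, fixed) for c in node.children]
          prod = 1.0
          if node.kind == 'AND':
              for p in ps: prod *= p
              return prod
          for p in ps: prod *= 1.0 - p                  # 'OR'
          return 1.0 - prod

      def _count(node, cnt):
          if isinstance(node, Event):
              cnt[node] += 1
          else:
              for c in node.children: _count(c, cnt)

      def prob(node, fixed=None):
          fixed = fixed or {}
          cnt = Counter(); _count(node, cnt)
          e = next((ev for ev, n in cnt.items() if n > 1 and ev not in fixed), None)
          if e is None:
              return _bottom_up(node, fixed)
          return (e.p * prob(node, {**fixed, e: 1.0}) +          # Shannon expansion
                  (1 - e.p) * prob(node, {**fixed, e: 0.0}))     # on repeated event

      leak = Event(0.01)                                # appears twice in the tree
      top = Gate('OR', [Gate('AND', [leak, Event(0.5)]), leak])
      print(prob(top))   # 0.01 exactly; a naive product rule would give 0.01495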

  4. Dynamic programming on a shared-memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Edmonds, Phil; Chu, Eleanor; George, Alan

    1993-01-01

    Three new algorithms for solving dynamic programming problems on a shared-memory parallel computer are described. All three algorithms attempt to balance work load, while keeping synchronization cost low. In particular, for a multiprocessor having p processors, an analysis of the best algorithm shows that the arithmetic cost is O(n^3/(6p)) and that the synchronization cost is O(|log_C n|) if p << n, where C = (2p-1)/(2p+1) and n is the size of the problem. The low synchronization cost is important for machines where synchronization is expensive. Analysis and experiments show that the best algorithm is effective in balancing the work load and producing high efficiency.

  5. FINITE-STATE APPROXIMATIONS TO DENUMERABLE-STATE DYNAMIC PROGRAMS,

    DTIC Science & Technology

    AIR FORCE OPERATIONS, LOGISTICS), (*INVENTORY CONTROL, DYNAMIC PROGRAMMING), (*DYNAMIC PROGRAMMING, APPROXIMATION(MATHEMATICS)), INVENTORY CONTROL, DECISION MAKING, STOCHASTIC PROCESSES, GAME THEORY, ALGORITHMS, CONVERGENCE

  6. Spiral: Automated Computing for Linear Transforms

    NASA Astrophysics Data System (ADS)

    Püschel, Markus

    2010-09-01

    Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete. We present Spiral, a domain-specific program generation system for important functionality used in signal processing and communication including linear transforms, filters, and other functions. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code.

  7. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.
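
    The PSO kernel underneath such a hybrid is compact. A generic sketch on a continuous test function follows; the DP layering, fuzzy-random defuzzification and problem-specific encoding of the paper are all abstracted away, and every parameter value is a common default rather than the authors'.

      import random

      def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
              lo=-10.0, hi=10.0, rng=random.Random(0)):
          X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
          V = [[0.0] * dim for _ in range(n)]
          P = [x[:] for x in X]                         # personal bests
          g = min(P, key=f)[:]                          # global best
          for _ in range(iters):
              for i in range(n):
                  for d in range(dim):
                      V[i][d] = (w * V[i][d]
                                 + c1 * rng.random() * (P[i][d] - X[i][d])
                                 + c2 * rng.random() * (g[d] - X[i][d]))
                      X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                  if f(X[i]) < f(P[i]):
                      P[i] = X[i][:]
                      if f(P[i]) < f(g):
                          g = P[i][:]
          return g

      print(pso(lambda x: sum(v * v for v in x), dim=3))   # ~ [0.0, 0.0, 0.0]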

  8. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and is presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of strain and project the element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.

  9. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive input from users during the optimization process of feature selection. This study investigates this question by fixing a few user-input features in the finally selected feature subset and formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs comparably to or much better than the existing feature selection algorithms, even with the constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.

  10. Classifying elementary cellular automata using compressibility, diversity and sensitivity measures

    NASA Astrophysics Data System (ADS)

    Ninagawa, Shigeru; Adamatzky, Andrew

    2014-10-01

    An elementary cellular automaton (ECA) is a one-dimensional, synchronous, binary automaton, where each cell update depends on its own state and states of its two closest neighbors. We attempt to uncover correlations between the following measures of ECA behavior: compressibility, sensitivity and diversity. The compressibility of ECA configurations is calculated using the Lempel-Ziv (LZ) compression algorithm LZ78. The sensitivity of ECA rules to initial conditions and perturbations is evaluated using Derrida coefficients. The generative morphological diversity shows how many different neighborhood states are produced from a single nonquiescent cell. We found no significant correlation between sensitivity and compressibility. There is a substantial correlation between generative diversity and compressibility. Using sensitivity, compressibility and diversity, we uncover and characterize novel groupings of rules.
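
    Both ingredients of the compressibility measure are easy to reproduce: an ECA update under Wolfram's rule numbering and an LZ78 phrase count over the final configuration. A sketch with a ring boundary; the lattice size and the choice of rule 110 are ours.

      def eca_step(cells, rule):
          """One synchronous update of an ECA given its Wolfram rule number."""
          n = len(cells)
          return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                            + cells[(i + 1) % n])) & 1 for i in range(n)]

      def lz78_size(s):
          """Number of LZ78 phrases; fewer phrases means more compressible."""
          seen, phrase, count = set(), '', 0
          for ch in s:
              phrase += ch
              if phrase not in seen:
                  seen.add(phrase)
                  count += 1
                  phrase = ''
          return count + (1 if phrase else 0)

      cells = [0] * 101
      cells[50] = 1                                     # single nonquiescent cell
      for _ in range(100):
          cells = eca_step(cells, 110)
      print(lz78_size(''.join(map(str, cells))))        # phrase count of row 100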

  11. New method for predicting estrogen receptor status utilizing breast MRI texture kinetic analysis

    NASA Astrophysics Data System (ADS)

    Chaudhury, Baishali; Hall, Lawrence O.; Goldgof, Dmitry B.; Gatenby, Robert A.; Gillies, Robert; Drukteinis, Jennifer S.

    2014-03-01

    Magnetic Resonance Imaging (MRI) of breast cancer typically shows that tumors are heterogeneous with spatial variations in blood flow and cell density. Here, we examine the potential link between clinical tumor imaging and the underlying evolutionary dynamics behind heterogeneity in the cellular expression of estrogen receptors (ER) in breast cancer. We assume, in an evolutionary environment, that ER expression will only occur in the presence of significant concentrations of estrogen, which is delivered via the blood stream. Thus, we hypothesize, the expression of ER in breast cancer cells will correlate with blood flow on gadolinium enhanced breast MRI. To test this hypothesis, we performed quantitative analysis of blood flow on dynamic contrast enhanced MRI (DCE-MRI) and correlated it with the ER status of the tumor. Here we present our analytic methods, which utilize a novel algorithm to analyze 20 volumetric DCE-MRI breast cancer tumors. The algorithm generates post initial enhancement (PIE) maps from DCE-MRI and then performs texture features extraction from the PIE map, feature selection, and finally classification of tumors into ER positive and ER negative status. The combined gray level co-occurrence matrices, gray level run length matrices and local binary pattern histogram features allow quantification of breast tumor heterogeneity. The algorithm predicted ER expression with an accuracy of 85% using a Naive Bayes classifier in leave-one-out cross-validation. Hence, we conclude that our data supports the hypothesis that imaging characteristics can, through application of evolutionary principles, provide insights into the cellular and molecular properties of cancer cells.

  12. Algorithm for solving of two-level hierarchical minimax program control problem of final state the regional socio-economic system in the presence of risks

    NASA Astrophysics Data System (ADS)

    Shorikov, A. F.

    2017-10-01

    In this paper we study the problem of optimizing the guaranteed result for program control of the final state of a regional socio-economic system in the presence of risks. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final state of this process under incomplete information. To solve this problem we construct a general algorithm in the form of a recurrent procedure that solves a sequence of linear programming and finite optimization problems.

  13. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  14. An electron tomography algorithm for reconstructing 3D morphology using surface tangents of projected scattering interfaces

    NASA Astrophysics Data System (ADS)

    Petersen, T. C.; Ringer, S. P.

    2010-03-01

    Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen.
    Program summary:
    Program title: STOMO version 1.0
    Catalogue identifier: AEFS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2988
    No. of bytes in distributed program, including test data, etc.: 191 605
    Distribution format: tar.gz
    Programming language: C/C++
    Computer: PC
    Operating system: Windows XP
    RAM: Depends upon the size of experimental data as input, ranging from 200 MB to 1.5 GB
    Supplementary material: Sample output files, for the test run provided, are available.
    Classification: 7.4, 14
    External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html)
    Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range. The algorithm does not solve the tomographic back-projection problem but rather reconstructs the local 3D morphology of surfaces defined by varied scattering densities.
    Solution method: Reconstruction using differential geometry applied to image analysis computations.
    Restrictions: The code has only been tested with square images and has been developed for only single-axis tilting.
    Running time: For high quality reconstruction, 5-15 min

  15. Enhanced sensitivity of CpG island search and primer design based on predicted CpG island position.

    PubMed

    Park, Hyun-Chul; Ahn, Eu-Ree; Jung, Ju Yeon; Park, Ji-Hye; Lee, Jee Won; Lim, Si-Keun; Kim, Won

    2018-05-01

    DNA methylation has important biological roles, such as gene expression regulation, as well as practical applications in forensics, such as body fluid identification and age estimation. DNA methylation often occurs at CpG sites, and methylation within CpG islands affects various cellular functions and is related to tissue-specific identification. Several programs have been developed to identify CpG islands; however, the size, location, and number of predicted CpG islands are not identical across programs due to their different search algorithms. In addition, they provide only structural information for predicted CpG islands, without experiment-oriented functions such as primer design. We developed an analysis pipeline package, CpGPNP, to integrate CpG island prediction and primer design. CpGPNP predicts CpG islands more accurately and sensitively than other programs, and designs primers easily based on the predicted CpG island locations. The primer design function includes standard, bisulfite, and methylation-specific PCR to identify the methylation of particular CpG sites. In this study, we performed CpG island prediction on all chromosomes and compared the CpG island search performance of CpGPNP with other CpG island prediction programs. In addition, we compared the positions of primers designed for a specific region within the predicted CpG island with those of other bisulfite PCR primer programs. The primers designed by CpGPNP were used to experimentally verify the amplification of the target region of markers for body fluid identification and age estimation. CpGPNP is freely available at http://forensicdna.kr/cpgpnp/.

  16. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms yielded a detection probability of 90% with false alarm rates of the order of 1 per year, for a sample rate of 1 per day with each detection followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have a unique possibility of a relatively low cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Center and tested against current data.

  17. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.

  18. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

    Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model, which can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  19. Hand-Held Calculator Algorithms for Coastal Engineering.

    DTIC Science & Technology

    1982-01-01

    and water depth at the structure toe, ds. The development of the equation is derived on the solution sheet included with program 104R. Algorithm uses...Limited Design Breaking Wave Height at Structure (AOS logic)...105R Wave Transmission - Fuchs’ Equation (RPN logic)...105A Wave Transmission - Fuchs’ Equation (AOS logic)...APPENDIX BLANK PROGRAM FORMS

  20. Adaptable Binary Programs

    DTIC Science & Technology

    1994-04-01

    a variation of Ziv-Lempel compression [ZL77]. We found that using a standard compression algorithm rather than semantic compression allowed simplified...mentation. In Proceedings of the Conference on Programming Language Design and Implementation, 1993. [ZL77] J. Ziv and A. Lempel. A universal algorithm ...required by adaptable binaries. Our ABS stores adaptable binary information using the conventional binary symbol table and compresses this data using

  1. Diagnosing Learners' Problem-Solving Strategies Using Learning Environments with Algorithmic Problems in Secondary Education

    ERIC Educational Resources Information Center

    Kiesmuller, Ulrich

    2009-01-01

    At schools special learning and programming environments are often used in the field of algorithms. Particularly with regard to computer science lessons in secondary education, they are supposed to help novices to learn the basics of programming. In several parts of Germany (e.g., Bavaria) these fundamentals are taught as early as in the seventh…

  2. Application of a Dynamic Programming Algorithm for Weapon Target Assignment

    DTIC Science & Technology

    2016-02-01

    [25] A. Turan, “Techniques for the Allocation of Resources Under Uncertainty,” Middle Eastern Technical University, Ankara, Turkey, 2012. [26] K...Application of a Dynamic Programming Algorithm for Weapon Target Assignment Lloyd Hammond Weapons and...optimisation techniques to support the decision making process. This report documents the methodology used to identify, develop and assess a

  3. Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.

    PubMed

    Berry, K J; Mielke, P W

    2000-12-01

    Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
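
    The Monte Carlo variant of such a test is straightforward to sketch (the published programs are FORTRAN; this is an illustrative Python analogue). Midranks handle ties, as the abstract notes, and the two-sided criterion below is one common convention.

      import random

      def midranks(values):
          """Ranks with each tie group replaced by its average rank."""
          order = sorted(range(len(values)), key=lambda i: values[i])
          ranks = [0.0] * len(values)
          i = 0
          while i < len(order):
              j = i
              while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                  j += 1
              for k in range(i, j + 1):
                  ranks[order[k]] = (i + j) / 2 + 1
              i = j + 1
          return ranks

      def rank_sum_pvalue(x, y, n_resamples=20000, rng=random.Random(0)):
          ranks = midranks(list(x) + list(y))
          expect = (len(ranks) + 1) / 2 * len(x)        # E[W] under the null
          w_obs = sum(ranks[:len(x)])
          idx = list(range(len(ranks)))
          hits = 0
          for _ in range(n_resamples):
              rng.shuffle(idx)                          # random group relabelling
              w = sum(ranks[i] for i in idx[:len(x)])
              hits += abs(w - expect) >= abs(w_obs - expect)
          return hits / n_resamples                     # two-sided p-value

      print(rank_sum_pvalue([1.2, 3.4, 2.2, 5.0], [0.1, 0.4, 0.2, 1.0]))  # ~ 0.03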

  4. A Rewriting-Based Approach to Trace Analysis

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present a rewriting-based algorithm for efficiently evaluating future time Linear Temporal Logic (LTL) formulae on finite execution traces online. While the standard models of LTL are infinite traces, finite traces appear naturally when testing and/or monitoring real applications that only run for limited time periods. The presented algorithm is implemented in the Maude executable specification language and essentially consists of a set of equations establishing an executable semantics of LTL using a simple formula transforming approach. The algorithm is further improved to build automata on-the-fly from formulae, using memoization. The result is a very efficient and small Maude program that can be used to monitor program executions. We furthermore present an alternative algorithm for synthesizing provably minimal observer finite state machines (or automata) from LTL formulae, which can be used to analyze execution traces without the need for a rewriting system, and can hence be used by observers written in conventional programming languages. The presented work is part of an ambitious runtime verification and monitoring project at NASA Ames, called PATHEXPLORER, and demonstrates that rewriting can be a tractable and attractive means for experimenting and implementing program monitoring logics.

  5. Dinucleotide controlled null models for comparative RNA gene prediction.

    PubMed

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz is available as open source C code that can be compiled for every major platform and downloaded here: http://sourceforge.net/projects/sissiz.
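
    For a single sequence, a dinucleotide-preserving shuffle can be sketched as rejection sampling of an Eulerian walk over the dinucleotide graph; SISSIz's multiple-alignment case is much harder and is not attempted here.

      import random
      from collections import defaultdict

      def dinucleotide_shuffle(seq, rng=random.Random(0), max_tries=1000):
          """Return a permutation of seq with identical dinucleotide counts:
          shuffle each symbol's outgoing-edge list, walk from seq[0], and
          retry until the walk consumes every edge (an Eulerian path)."""
          n_edges = len(seq) - 1
          for _ in range(max_tries):
              out = defaultdict(list)
              for a, b in zip(seq, seq[1:]):
                  out[a].append(b)
              for v in out:
                  rng.shuffle(out[v])
              walk, cur, used = [seq[0]], seq[0], 0
              while out[cur]:
                  cur = out[cur].pop()
                  walk.append(cur)
                  used += 1
              if used == n_edges:
                  return ''.join(walk)
          raise RuntimeError('no Eulerian shuffle found')

      s = 'AACGTAGCTAGCTTACGATCGA'
      t = dinucleotide_shuffle(s)
      pairs = lambda q: sorted(q[i:i + 2] for i in range(len(q) - 1))
      print(t, pairs(s) == pairs(t))                    # same dinucleotide multiset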

  6. scEpath: Energy landscape-based inference of transition probabilities and cellular trajectories from single-cell transcriptomic data.

    PubMed

    Jin, Suoqin; MacLean, Adam L; Peng, Tao; Nie, Qing

    2018-02-05

    Single-cell RNA-sequencing (scRNA-seq) offers unprecedented resolution for studying cellular decision-making processes. Robust inference of cell state transition paths and probabilities is an important yet challenging step in the analysis of these data. Here we present scEpath, an algorithm that calculates energy landscapes and probabilistic directed graphs in order to reconstruct developmental trajectories. We quantify the energy landscape using "single-cell energy" and distance-based measures, and find that the combination of these enables robust inference of the transition probabilities and lineage relationships between cell states. We also identify marker genes and gene expression patterns associated with cell state transitions. Our approach produces pseudotemporal orderings that are, in combination, more robust and accurate than current methods, and offers higher resolution dynamics of the cell state transitions, leading to new insight into key transition events during differentiation and development. Moreover, scEpath is robust to variation in the size of the input gene set, and is broadly unsupervised, requiring few parameters to be set by the user. Applications of scEpath led to the identification of a cell-cell communication network implicated in early human embryo development, and novel transcription factors important for myoblast differentiation. scEpath allows us to identify common and specific temporal dynamics and transcription factor programs along branched lineages, as well as the transition probabilities that control cell fates. A MATLAB package of scEpath is available at https://github.com/sqjin/scEpath. qnie@uci.edu. Supplementary data are available at Bioinformatics online.

  7. Advanced detection, isolation, and accommodation of sensor failures in turbofan engines: Real-time microcomputer implementation

    NASA Technical Reports Server (NTRS)

    Delaat, John C.; Merrill, Walter C.

    1990-01-01

    The objective of the Advanced Detection, Isolation, and Accommodation Program is to improve the overall demonstrated reliability of digital electronic control systems for turbine engines. For this purpose, an algorithm was developed which detects, isolates, and accommodates sensor failures by using analytical redundancy. The performance of this algorithm was evaluated on a real-time engine simulation and was demonstrated on a full-scale F100 turbofan engine. The real-time implementation of the algorithm is described. The implementation used state-of-the-art microprocessor hardware and software, including parallel processing and high-order language programming.
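
    The detection-isolation-accommodation loop can be illustrated with a minimal sketch, assuming a model that supplies analytical estimates of each sensor; the function and sensor names below are hypothetical, not the flight implementation.

        def detect_isolate_accommodate(measured, estimated, threshold):
            """measured/estimated: dicts of sensor name -> value.
            Returns (failed_sensor_or_None, accommodated_readings)."""
            residuals = {s: abs(measured[s] - estimated[s]) for s in measured}
            suspect = max(residuals, key=residuals.get)   # isolation: largest residual
            if residuals[suspect] <= threshold:
                return None, dict(measured)               # detection: no failure
            accommodated = dict(measured)
            accommodated[suspect] = estimated[suspect]    # accommodation: substitute estimate
            return suspect, accommodated

        measured  = {"N1": 9800.0, "N2": 12150.0, "T4": 1410.0}
        estimated = {"N1": 9810.0, "N2": 12160.0, "T4": 1275.0}
        print(detect_isolate_accommodate(measured, estimated, threshold=50.0))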

  8. Who's Who in Biology.

    ERIC Educational Resources Information Center

    Norman, Colin

    1983-01-01

    Provides top-rated programs (by university) in biochemistry, botany, cellular/molecular biology, microbiology, physiology, and zoology. Overall scores included with each program were obtained from 1,848 biologists who were asked to rate programs in terms of faculty quality and their effectiveness in educating graduate students. (Author/JN)

  9. Using Runtime Analysis to Guide Model Checking of Java Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition, of course, to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.
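
    A minimal sketch of the lock-graph style of runtime deadlock analysis (our reconstruction of the general technique, not the PathFinder code): record an edge from lock A to lock B whenever a thread acquires B while holding A; a cycle in the resulting graph flags a potential deadlock even if the observed run completed normally.

        def build_lock_graph(trace):
            """trace: list of (thread, op, lock), op in {'acquire', 'release'}."""
            held = {}                      # thread -> set of locks currently held
            edges = set()
            for thread, op, lock in trace:
                locks = held.setdefault(thread, set())
                if op == "acquire":
                    edges.update((h, lock) for h in locks)
                    locks.add(lock)
                else:
                    locks.discard(lock)
            return edges

        def has_cycle(edges):
            """DFS cycle detection over the lock-order graph."""
            graph = {}
            for a, b in edges:
                graph.setdefault(a, set()).add(b)
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {n: WHITE for ab in edges for n in ab}
            def dfs(n):
                color[n] = GRAY
                for m in graph.get(n, ()):
                    if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                        return True
                color[n] = BLACK
                return False
            return any(color[n] == WHITE and dfs(n) for n in list(color))

        # Threads 1 and 2 take locks A and B in opposite orders: potential deadlock,
        # even though this particular interleaving ran to completion.
        trace = [(1, "acquire", "A"), (1, "acquire", "B"), (1, "release", "B"),
                 (1, "release", "A"), (2, "acquire", "B"), (2, "acquire", "A")]
        print(has_cycle(build_lock_graph(trace)))   # True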

  10. Formalizing Knowledge in Multi-Scale Agent-Based Simulations

    PubMed Central

    Somogyi, Endre; Sluka, James P.; Glazier, James A.

    2017-01-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused. PMID:29338063

  11. Formalizing Knowledge in Multi-Scale Agent-Based Simulations.

    PubMed

    Somogyi, Endre; Sluka, James P; Glazier, James A

    2016-10-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused.

  12. Searching Process with Raita Algorithm and its Application

    NASA Astrophysics Data System (ADS)

    Rahim, Robbi; Saleh Ahmar, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan; Putera Utama Siahaan, Andysah; Hasan Siregar, Muhammad Noor; Nasution, Nurliana; Sundari, Siti; Sriadhi, S.

    2018-04-01

    Searching is a common process performed by many computer users, and the Raita algorithm is one algorithm that can be used to match and find information in accordance with the patterns entered. The Raita algorithm was applied to a file search application written in the Java programming language; testing showed that the application searched files quickly, returned accurate results, and supported many data types.
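
    For reference, a compact Python rendering of the Raita search, a Boyer-Moore-Horspool variant that checks the pattern's last, first, and middle characters before comparing the rest; this is a generic sketch, not the authors' Java application.

        def raita_search(text, pattern):
            """Return indices where pattern occurs in text (Raita algorithm)."""
            m, n = len(pattern), len(text)
            if m == 0 or m > n:
                return []
            # Bad-character shift table, as in Horspool.
            shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
            first, middle, last = pattern[0], pattern[m // 2], pattern[-1]
            hits, i = [], 0
            while i <= n - m:
                window = text[i:i + m]
                # Raita's ordering: last char, then first, then middle, then the rest.
                if (window[-1] == last and window[0] == first
                        and window[m // 2] == middle and window == pattern):
                    hits.append(i)
                i += shift.get(text[i + m - 1], m)
            return hits

        print(raita_search("abracadabra", "abra"))   # [0, 7]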

  13. Radar Array Processing of Experimental Data Via the Scan-MUSIC Algorithm

    DTIC Science & Technology

    2004-06-01

    Radar Array Processing of Experimental Data Via the Scan-MUSIC Algorithm, by Canh Ly, ARL-TR-3135, June 2004; Sensors and Electron Devices Directorate, ARL. (Only report documentation-page fragments are available for this record; no abstract.)

  14. Adaptive Algorithms for Automated Processing of Document Images

    DTIC Science & Technology

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images, Mudit Agrawal, Doctor of Philosophy, 2011; dissertation submitted to the Faculty of the Graduate School of the University… (Only title-page fragments are available for this record; no abstract.)

  15. Constraints in Genetic Programming

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.

    1996-01-01

    Genetic programming refers to a class of genetic algorithms utilizing generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements: one reduces only the number of program instances, the other reduces both the space of program structures and their instances. With this language, we define the minimal set of complete constraints and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are no less efficient than the standard genetic programming operators if one preprocesses the constraints; the necessary mechanisms are identified.
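
    A toy illustration of how structural constraints shrink the search space (our example, not the paper's specification language): when each function declares the types of its arguments, random tree generation can only produce type-correct program structures.

        import random

        # Each function declares its return type and argument types; terminals are typed.
        FUNCTIONS = {"add": ("num", ["num", "num"]),
                     "if+": ("num", ["bool", "num", "num"]),
                     "lt":  ("bool", ["num", "num"])}
        TERMINALS = {"x": "num", "1": "num", "true": "bool"}

        def grow(required_type, depth, rng):
            """Random tree of the required type; type constraints prune the choices."""
            terms = [t for t, ty in TERMINALS.items() if ty == required_type]
            funcs = [f for f, (ty, _) in FUNCTIONS.items() if ty == required_type]
            if depth == 0 or not funcs or rng.random() < 0.3:
                return rng.choice(terms)
            f = rng.choice(funcs)
            return [f] + [grow(arg, depth - 1, rng) for arg in FUNCTIONS[f][1]]

        print(grow("num", 3, random.Random(2)))   # e.g. ['add', ['if+', 'true', 'x', '1'], 'x']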

  16. A grid matrix-based Raman spectroscopic method to characterize different cell milieu in biopsied axillary sentinel lymph nodes of breast cancer patients.

    PubMed

    Som, Dipasree; Tak, Megha; Setia, Mohit; Patil, Asawari; Sengupta, Amit; Chilakapati, C Murali Krishna; Srivastava, Anurag; Parmar, Vani; Nair, Nita; Sarin, Rajiv; Badwe, R

    2016-01-01

    Raman spectroscopy, which is based upon inelastic scattering of photons, has the potential to emerge as a noninvasive bedside in vivo or ex vivo molecular diagnostic tool. There is a need to improve the sensitivity and predictability of Raman spectroscopy. We developed a grid matrix-based tissue mapping protocol to acquire cellular-specific spectra that also involved digital microscopy for localizing malignant and lymphocytic cells in sentinel lymph node biopsy samples. Biosignals acquired from specific cellular milieus were subjected to advanced supervised analytical methods, i.e., cross-correlation and peak-to-peak ratio, in addition to PCA and PC-LDA. We observed decreased spectral intensity as well as shifts in the spectral peaks of the amide and lipid bands in the completely metastatic (cancer cell) lymph nodes with high cellular density. The spectral library of normal lymphocytes and metastatic cancer cells created using this cellular-specific mapping technique can be utilized to develop an automated smart diagnostic tool for bench-side screening of sampled lymph nodes, supported by ongoing global research in developing better technology and signal and big data processing algorithms.

  17. A general algorithm for the construction of contour plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1981-01-01

    An algorithm is described that performs the task of drawing equal-level contours on a plane, which requires interpolation in two dimensions based on data prescribed at points distributed irregularly over the plane. The approach is described in detail. The computer program that implements the algorithm is documented and listed.

  18. Derivative Free Gradient Projection Algorithms for Rotation

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2004-01-01

    A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…

  19. Bellman’s GAP—a language and compiler for dynamic programming in sequence analysis

    PubMed Central

    Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert

    2013-01-01

    Motivation: Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error-prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. Results: In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive with carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform of 'real-world' bioinformatics tools. Availability: Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics. Contact: robert@techfak.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23355290

  20. Adenovirus type 5 exerts genome-wide control over cellular programs governing proliferation, quiescence, and survival

    PubMed Central

    Miller, Daniel L; Myers, Chad L; Rickards, Brenden; Coller, Hilary A; Flint, S Jane

    2007-01-01

    Background Human adenoviruses, such as serotype 5 (Ad5), encode several proteins that can perturb cellular mechanisms that regulate cell cycle progression and apoptosis, as well as those that mediate mRNA production and translation. However, a global view of the effects of Ad5 infection on such programs in normal human cells is not available, despite widespread efforts to develop adenoviruses for therapeutic applications. Results We used two-color hybridization and oligonucleotide microarrays to monitor changes in cellular RNA concentrations as a function of time after Ad5 infection of quiescent, normal human fibroblasts. We observed that the expression of some 2,000 genes, about 10% of those examined, increased or decreased by a factor of two or greater following Ad5 infection, but were not altered in mock-infected cells. Consensus k-means clustering established that the temporal patterns of these changes were unexpectedly complex. Gene Ontology terms associated with cell proliferation were significantly over-represented in several clusters. The results of comparative analyses demonstrate that Ad5 infection induces reversal of the quiescence program and recapitulation of the core serum response, and that only a small subset of the observed changes in cellular gene expression can be ascribed to well characterized functions of the viral E1A and E1B proteins. Conclusion These findings establish that the impact of adenovirus infection on host cell programs is far greater than appreciated hitherto. Furthermore, they provide a new framework for investigating the molecular functions of viral early proteins and information relevant to the design of conditionally replicating adenoviral vectors. PMID:17430596

  1. Transcriptional analysis of product-concentration driven changes in cellular programs of recombinant Clostridium acetobutylicum strains.

    PubMed

    Tummala, Seshu B; Junne, Stefan G; Paredes, Carlos J; Papoutsakis, Eleftherios T

    2003-12-30

    Antisense RNA (asRNA) downregulation alters protein expression without changing the regulation of gene expression. Downregulation of primary metabolic enzymes, possibly combined with overexpression of other metabolic enzymes, may result in profound changes in product formation, and this may alter the large-scale transcriptional program of the cells. DNA-array-based large-scale transcriptional analysis has the potential to elucidate factors that control cellular fluxes even in the absence of proteome data. These themes are explored in the study of the large-scale transcriptional programs and the in vivo primary-metabolism fluxes of several related recombinant C. acetobutylicum strains: C. acetobutylicum ATCC 824(pSOS95del) (plasmid control; produces high levels of butanol and acetone), 824(pCTFB1AS) (expresses antisense RNA against CoA transferase (ctfb1-asRNA); produces very low levels of butanol and acetone), and 824(pAADB1) (expresses ctfb1-asRNA and the alcohol-aldehyde dehydrogenase gene (aad); produces high alcohol and low acetone levels). DNA-array-based transcriptional analysis revealed that the large changes in product concentrations (and notably butanol concentration) due to ctfb1-asRNA expression, alone and in combination with aad overexpression, resulted in dramatic changes of the cellular transcriptome. Cluster analysis and gene expression patterns of established and putative operons involved in stress response, motility, sporulation, and fatty-acid biosynthesis indicate that these simple genetic changes dramatically alter the cellular programs of C. acetobutylicum. Comparison of gene expression and flux analysis data may point to possible flux-controlling steps and suggest unknown regulatory mechanisms. Copyright 2003 Wiley Periodicals, Inc.

  2. The 1992 annual report on scientific programs: A broad research program on the sciences of complexity

    NASA Astrophysics Data System (ADS)

    In 1992 the Santa Fe Institute hosted more than 100 short- and long-term research visitors who conducted a total of 212 person-months of residential research in complex systems. To date this 1992 work has resulted in more than 50 SFI Working Papers and nearly 150 publications in the scientific literature. The Institute's book series in the sciences of complexity continues to grow, now numbering more than 20 volumes. The fifth annual complex systems summer school brought nearly 60 graduate students and postdoctoral fellows to Santa Fe for an intensive introduction to the field. Research on complex systems, the focus of work at SFI, involves an extraordinary range of topics normally studied in seemingly disparate fields. Natural systems displaying complex adaptive behavior range upwards from DNA through cells and evolutionary systems to human societies. Research models exhibiting complex behavior include spin glasses, cellular automata, and genetic algorithms. Some of the major questions facing complex systems researchers are: (1) explaining how complexity arises from the nonlinear interaction of simple components; (2) describing the mechanisms underlying high-level aggregate behavior of complex systems (such as the overt behavior of an organism, the flow of energy in an ecology, and the Gross National Product (GNP) of an economy); and (3) creating a theoretical framework to enable predictions about the likely behavior of such systems in various conditions.

  3. 1992 annual report on scientific programs: A broad research program on the sciences of complexity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-12-31

    In 1992 the Santa Fe Institute hosted more than 100 short- and long-term research visitors who conducted a total of 212 person-months of residential research in complex systems. To date this 1992 work has resulted in more than 50 SFI Working Papers and nearly 150 publications in the scientific literature. The Institute's book series in the sciences of complexity continues to grow, now numbering more than 20 volumes. The fifth annual complex systems summer school brought nearly 60 graduate students and postdoctoral fellows to Santa Fe for an intensive introduction to the field. Research on complex systems, the focus of work at SFI, involves an extraordinary range of topics normally studied in seemingly disparate fields. Natural systems displaying complex adaptive behavior range upwards from DNA through cells and evolutionary systems to human societies. Research models exhibiting complex behavior include spin glasses, cellular automata, and genetic algorithms. Some of the major questions facing complex systems researchers are: (1) explaining how complexity arises from the nonlinear interaction of simple components; (2) describing the mechanisms underlying high-level aggregate behavior of complex systems (such as the overt behavior of an organism, the flow of energy in an ecology, the GNP of an economy); and (3) creating a theoretical framework to enable predictions about the likely behavior of such systems in various conditions.

  4. User's Guide for Computer Program that Routes Signal Traces

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    2000-01-01

    This disk contains both a FORTRAN computer program and the corresponding user's guide, which facilitates both the program's incorporation into your system and its use. The computer program implements an efficient algorithm that routes signal traces on layers of a printed circuit with both through-pins and surface mounts. The program is an implementation of the ideas presented in the theoretical paper titled "A Formal Algorithm for Routing Signal Traces on a Printed Circuit Board", NASA TP-3639, published in 1996. The computer program in the "connects" file can be compiled with a FORTRAN compiler and readily integrated into software unique to each particular environment where it might be used.

  5. Design and Implementation of a Distributed Version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.

    1994-01-01

    Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.

  6. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  7. LOADING SIMULATION PROGRAM C

    EPA Pesticide Factsheets

    LSPC is the Loading Simulation Program in C++, a watershed modeling system that includes streamlined Hydrologic Simulation Program Fortran (HSPF) algorithms for simulating hydrology, sediment, and general water quality.

  8. Meta-heuristic algorithms as tools for hydrological science

    NASA Astrophysics Data System (ADS)

    Yoo, Do Guen; Kim, Joong Hoon

    2014-12-01

    In this paper, meta-heuristic optimization techniques are introduced and their applications to water resources engineering, particularly hydrological science, are reviewed. In recent years, meta-heuristic optimization techniques have been introduced that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory use without requiring complex derivatives. Simulation-based meta-heuristic methods such as Genetic Algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome the several drawbacks of traditional mathematical methods. For example, HS algorithms are conceptualized from the musical performance process of seeking a better harmony; such optimization algorithms seek a near-global optimum determined by the value of an objective function, much as a musical performance seeks a pleasing state judged by aesthetic estimation. In this paper, meta-heuristic algorithms and their applications (focusing on GAs and HS) in hydrological science are discussed by subject, including a review of existing literature in the field. Then, recent trends in optimization are presented and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Overall, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
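
    A minimal continuous Harmony Search sketch (the generic algorithm, not SSWCHS; parameter values are illustrative): each new harmony is assembled per dimension from the harmony memory with rate HMCR, optionally pitch-adjusted with rate PAR, or drawn at random, and it replaces the worst memory entry when it improves on it.

        import random

        def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                           iters=2000, seed=0):
            rng = random.Random(seed)
            rand_vec = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
            memory = [rand_vec() for _ in range(hms)]
            scores = [f(x) for x in memory]
            for _ in range(iters):
                new = []
                for j, (lo, hi) in enumerate(bounds):
                    if rng.random() < hmcr:                  # memory consideration
                        v = rng.choice(memory)[j]
                        if rng.random() < par:               # pitch adjustment
                            v += rng.uniform(-bw, bw)
                    else:                                    # random consideration
                        v = rng.uniform(lo, hi)
                    new.append(min(max(v, lo), hi))
                worst = max(range(hms), key=scores.__getitem__)
                s = f(new)
                if s < scores[worst]:                        # replace worst harmony
                    memory[worst], scores[worst] = new, s
            best = min(range(hms), key=scores.__getitem__)
            return memory[best], scores[best]

        # Minimize a 2-D sphere function; the optimum is at the origin.
        print(harmony_search(lambda x: x[0]**2 + x[1]**2, [(-5, 5), (-5, 5)]))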

  9. RNA design using simulated SHAPE data.

    PubMed

    Lotfi, Mohadeseh; Zare-Mirakabad, Fatemeh; Montaseri, Soheila

    2018-05-03

    It has long been established that in addition to being involved in protein translation, RNA plays essential roles in numerous other cellular processes, including gene regulation and DNA replication. Such roles are known to be dictated by higher-order structures of RNA molecules. It is therefore of prime importance to find an RNA sequence that can fold to acquire a particular function that is desirable for use in pharmaceuticals and basic research. The challenge of finding an RNA sequence for a given structure is known as the RNA design problem. Although there are several algorithms to solve this problem, they mainly consider hard constraints, such as minimum free energy, to evaluate the predicted sequences. Recently, SHAPE data has emerged as a new soft constraint for RNA secondary structure prediction. To take advantage of this new experimental constraint, we report here a new method for accurate design of RNA sequences based on their secondary structures using SHAPE data as pseudo-free energy. We then compare our algorithm with four others: INFO-RNA, ERD, MODENA and RNAifold 2.0. Our algorithm precisely predicts 26 out of 29 new sequences for the structures extracted from the Rfam dataset, while the other four algorithms predict no more than 22 out of 29. The proposed algorithm is comparable to the above algorithms on RNA-SSD datasets, where they can predict up to 33 appropriate sequences for RNA secondary structures out of 34.

  10. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.

  11. [Computer simulation of thyroid regulatory mechanisms in health and malignancy].

    PubMed

    Abduvaliev, A A; Gil'dieva, M S; Khidirov, B N; Saĭdalieva, M; Saatov, T S

    2010-07-01

    The paper describes a computer model for regulation of the number of thyroid follicular cells in health and malignancy. The authors' computer program for mathematical simulation of the regulatory mechanisms of a thyroid follicular cell community cannot yet be considered a commercial product. For commercialization, it would be necessary to relate the corrected values introduced into the model directly to actually observed normal values, such as peripheral blood concentrations of thyroid hormones or mean values of endocrine tissue mitotic activity. The described computer program has, however, also been used by our scientific group in the study of thyroid cancer. The available biological experimental data and theoretical understanding of thyroid structural and functional organization at the cellular level allow one to construct mathematical models for quantitative analysis of the regulation of the size of the cell community of a thyroid follicle in health and disease, using methods for simulating the regulatory mechanisms of living systems and the equations of cellular community regulation.

  12. Enhancements and Algorithms for Avionic Information Processing System Design Methodology.

    DTIC Science & Technology

    1982-06-16

    A dynamic programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze allocations in the presence of random fluctuations. Graph-theoretic methods are used to analyze hardware designs, and new designs are constructed with… There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP…

  13. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  14. Special-effect edit detection using VideoTrails: a comparison with existing techniques

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1998-12-01

    Video segmentation plays an integral role in many multimedia applications, such as digital libraries, content management systems, and various other video browsing, indexing, and retrieval systems. Many algorithms for segmentation of video have appeared within the past few years. Most of these algorithms perform well on cuts but yield poor performance on gradual transitions or special-effect edits. A complete video segmentation system must also achieve good performance on special-effect edit detection. In this paper, we compare the performance of our VideoTrails-based algorithms with existing special-effect edit-detection algorithms from the literature. We present results from experiments testing the ability to detect edits in TV programs ranging from commercials to news magazine programs, including the diverse special-effect edits we have introduced.

  15. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than the mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.

  16. MyDTW - Dynamic Time Warping program for stratigraphical time series

    NASA Astrophysics Data System (ADS)

    Kotov, Sergey; Paelike, Heiko

    2017-04-01

    One of the general tasks in many geological disciplines is matching one time or space signal to another. It can be classical correlation between two cores or cross-sections in sedimentology or marine geology. For example, tuning a paleoclimatic signal to a target curve driven by variations in the astronomical parameters is a powerful technique to construct accurate time scales. However, these methods can be rather time-consuming and can take hours of routine work even with the help of special semi-automatic software. Therefore, different approaches to automate the process have been developed during the last decades. Some of them are based on classical statistical cross-correlations, such as the 'Correlator' after Olea [1]. Others use modern ideas of dynamic programming; good examples are the algorithm developed by Lisiecki and Lisiecki [2] and the dynamic-time-warping-based algorithm after Pälike [3]. We introduce here an algorithm and computer program that also stem from the Dynamic Time Warping algorithm class. Unlike the algorithm of Lisiecki and Lisiecki, MyDTW does not lean on a set of penalties to follow geological logic, but on a special internal structure and specific constraints. It also differs from [3] in the basic ideas of implementation and constraint design. The algorithm is implemented as a computer program with a graphical user interface using Free Pascal and the Lazarus IDE and is available for Windows, Mac OS, and Linux. Examples with synthetic and real data are demonstrated. The program is available for free download at http://www.marum.de/Sergey_Kotov.html . References: 1. Olea, R.A. Expert systems for automated correlation and interpretation of wireline logs. Math Geol (1994) 26: 879. doi:10.1007/BF02083420. 2. Lisiecki, L. and Lisiecki, P. Application of dynamic programming to the correlation of paleoclimate records. Paleoceanography (2002), 17(4), 1049. doi:10.1029/2001PA000733. 3. Pälike, H. Extending the astronomical calibration of the Geological Time Scale. PhD thesis, University of Cambridge (2002).
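
    The recurrence that all DTW-class programs build on is compact; a generic Python sketch (not MyDTW itself, which adds its own internal structure and constraints) follows: D[i][j] is the distance of the i-th and j-th samples plus the cheapest of the three predecessor cells.

        def dtw(a, b, dist=lambda x, y: abs(x - y)):
            """Classic dynamic time warping distance between sequences a and b."""
            INF = float("inf")
            n, m = len(a), len(b)
            D = [[INF] * (m + 1) for _ in range(n + 1)]
            D[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                        D[i - 1][j],      # stretch a
                        D[i][j - 1],      # stretch b
                        D[i - 1][j - 1])  # step both
            return D[n][m]

        print(dtw([0, 1, 2, 3, 2], [0, 1, 1, 2, 3, 2]))   # 0.0: same shape, stretched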

  17. Versatile and declarative dynamic programming using pair algebras.

    PubMed

    Steffen, Peter; Giegerich, Robert

    2005-09-12

    Dynamic programming is a widely used programming technique in bioinformatics. In sharp contrast to the simplicity of textbook examples, implementing a dynamic programming algorithm for a novel and non-trivial application is a tedious and error-prone task. The algebraic dynamic programming approach seeks to alleviate this situation by clearly separating the dynamic programming recurrences and scoring schemes. Based on this programming style, we introduce a generic product operation on scoring schemes. This leads to a remarkable variety of applications, allowing us to achieve optimization under multiple objective functions, alternative solutions and backtracing, holistic search space analysis, ambiguity checking, and more, without additional programming effort. We demonstrate the method on several applications for RNA secondary structure prediction. The product operation as introduced here adds a significant amount of flexibility to dynamic programming. It provides a versatile testbed for the development of new algorithmic ideas, which can immediately be put to practice.
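
    The product idea can be imitated on a toy dynamic program (a Python sketch of the concept, not the algebraic dynamic programming system itself): pairing a maximizing algebra with a counting algebra computes, in one pass, both the optimal alignment score and the number of co-optimal alignments. Scoring values here are illustrative.

        def best_and_count(a, b, match=1, mismatch=-1, gap=-1):
            """Product of two algebras per cell: (best score, #co-optimal alignments)."""
            n, m = len(a), len(b)
            D = [[(0, 1)] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1): D[i][0] = (i * gap, 1)
            for j in range(1, m + 1): D[0][j] = (j * gap, 1)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    cands = [(D[i-1][j-1][0] + s,   D[i-1][j-1][1]),
                             (D[i-1][j][0] + gap,   D[i-1][j][1]),
                             (D[i][j-1][0] + gap,   D[i][j-1][1])]
                    best = max(c[0] for c in cands)
                    # first algebra maximizes; second sums counts over co-optimal cases
                    D[i][j] = (best, sum(c[1] for c in cands if c[0] == best))
            return D[n][m]

        print(best_and_count("GAT", "GT"))   # (1, 1)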

  18. Gene Expression Network Reconstruction by Convex Feature Selection when Incorporating Genetic Perturbations

    PubMed Central

    Logsdon, Benjamin A.; Mezey, Jason

    2010-01-01

    Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithms designed for network discovery that make use of directed probabilistic graphs require perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1), and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population level genetic and gene expression data. PMID:21152011
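
    The adaptive lasso itself reduces to a weighted ordinary lasso, so it can be emulated with standard tools; a sketch assuming scikit-learn and NumPy (not the authors' implementation): weight each predictor by an initial estimate, fit a plain lasso on the rescaled design, and unscale the coefficients.

        import numpy as np
        from sklearn.linear_model import LinearRegression, Lasso

        def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
            """Adaptive lasso via column rescaling: penalty alpha * sum_j w_j |beta_j|."""
            beta_init = LinearRegression().fit(X, y).coef_
            w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)   # adaptive weights
            lasso = Lasso(alpha=alpha).fit(X / w, y)        # scaled design: X_j / w_j
            return lasso.coef_ / w                          # map back to original scale

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=200)
        print(np.round(adaptive_lasso(X, y), 2))   # strong effects kept, weak ones zeroed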

  19. The Joint Polar Satellite System (JPSS) Program's Algorithm Change Process (ACP): Past, Present and Future

    NASA Technical Reports Server (NTRS)

    Griffin, Ashley

    2017-01-01

    The Joint Polar Satellite System (JPSS) Program Office is the supporting organization for the Suomi National Polar Orbiting Partnership (S-NPP) and JPSS-1 satellites. S-NPP carries the following sensors: VIIRS, CrIS, ATMS, OMPS, and CERES with instruments that ultimately produce over 25 data products that cover the Earths weather, oceans, and atmosphere. A team of scientists and engineers from all over the United States document, monitor and fix errors in operational software code or documentation with the algorithm change process (ACP) to ensure the success of the S-NPP and JPSS 1 missions by maintaining quality and accuracy of the data products the scientific community relies on. This poster will outline the programs algorithm change process (ACP), identify the various users and scientific applications of our operational data products and highlight changes that have been made to the ACP to accommodate operating system upgrades to the JPSS programs Interface Data Processing Segment (IDPS), so that the program is ready for the transition to the 2017 JPSS-1 satellite mission and beyond.

  20. Set covering algorithm, a subprogram of the scheduling algorithm for mission planning and logistic evaluation

    NASA Technical Reports Server (NTRS)

    Chang, H.

    1976-01-01

    A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
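
    For intuition about the underlying problem, here is the textbook greedy set-cover heuristic in Python; note this is an approximation for illustration, not the exact Lemke, Salkin and Spielberg enumeration used in SCA.

        def greedy_set_cover(universe, sets):
            """Greedy approximation for set cover; `sets` maps name -> frozenset."""
            uncovered = set(universe)
            chosen = []
            while uncovered:
                # pick the set covering the most still-uncovered elements
                best = max(sets, key=lambda s: len(sets[s] & uncovered))
                if not sets[best] & uncovered:
                    raise ValueError("universe cannot be covered")
                chosen.append(best)
                uncovered -= sets[best]
            return chosen

        sets = {"S1": frozenset({1, 2, 3}), "S2": frozenset({2, 4}),
                "S3": frozenset({3, 4, 5}), "S4": frozenset({5})}
        print(greedy_set_cover({1, 2, 3, 4, 5}, sets))   # ['S1', 'S3']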

  1. An algorithm for generating all possible 2^(p-q) fractional factorial designs and its use in scientific experimentation

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1973-01-01

    An algorithm and computer program are presented for generating all the distinct 2^(p-q) fractional factorial designs. Some applications of this algorithm to the construction of tables of designs and of designs for nonstandard situations, and its use in Bayesian design, are discussed. An appendix includes a discussion of an actual experiment whose design was facilitated by the algorithm.

  2. Geometrical characterization of fluorescently labelled surfaces from noisy 3D microscopy data.

    PubMed

    Shelton, Elijah; Serwane, Friedhelm; Campàs, Otger

    2018-03-01

    Modern fluorescence microscopy enables fast 3D imaging of biological and inert systems alike. In many studies, it is important to detect the surface of objects and quantitatively characterize its local geometry, including its mean curvature. We present a fully automated algorithm to determine the location and curvatures of an object from 3D fluorescence images, such as those obtained using confocal or light-sheet microscopy. The algorithm aims at reconstructing surface labelled objects with spherical topology and mild deformations from the spherical geometry with high accuracy, rather than reconstructing arbitrarily deformed objects with lower fidelity. Using both synthetic data with known geometrical characteristics and experimental data of spherical objects, we characterize the algorithm's accuracy over the range of conditions and parameters typically encountered in 3D fluorescence imaging. We show that the algorithm can detect the location of the surface and obtain a map of local mean curvatures with relative errors typically below 2% and 20%, respectively, even in the presence of substantial levels of noise. Finally, we apply this algorithm to analyse the shape and curvature map of fluorescently labelled oil droplets embedded within multicellular aggregates and deformed by cellular forces. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  3. Study on bi-directional pedestrian movement using ant algorithms

    NASA Astrophysics Data System (ADS)

    Sibel, Gokce; Ozhan, Kayacan

    2016-01-01

    A cellular automata model is proposed to simulate bi-directional pedestrian flow. Pedestrian movement is investigated using ant algorithms. Ants communicate with each other by dropping a chemical, called a pheromone, on the substrate while crawling forward. Similarly, it is considered that oppositely moving pedestrians drop 'visual pheromones' on their way, and the visual pheromones may cause attractive or repulsive interactions. This phenomenon is incorporated into the modelling of pedestrians' walking preferences. In this way, the decision-making process of pedestrians is based on 'the instinct of following'. The velocity-density and flux-density relationships are analyzed at various densities for different evaporation rates of the visual pheromones. Lane formation and phase transition are observed for certain evaporation rates of the visual pheromones.
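
    The pheromone bookkeeping at the heart of such a model is simple; a schematic NumPy sketch with made-up parameter values (deposit amount and evaporation rate) follows: each pedestrian deposits onto its cell, then the whole field decays by the evaporation rate every time step.

        import numpy as np

        def step_pheromone(field, positions, deposit=1.0, evaporation=0.05):
            """One CA time step: deposit visual pheromone, then evaporate."""
            for r, c in positions:          # each pedestrian marks its cell
                field[r, c] += deposit
            field *= (1.0 - evaporation)    # uniform evaporation over the lattice
            return field

        field = np.zeros((5, 5))
        for _ in range(10):                 # two pedestrians occupying fixed cells
            field = step_pheromone(field, [(2, 1), (2, 3)])
        print(field.round(2))               # pheromone saturates where they walk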

  4. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  5. Empirical comparison of heuristic load distribution in point-to-point multicomputer networks

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.

    1990-01-01

    The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.

  6. Space shuttle propulsion parameter estimation using optional estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Regression analyses were performed on the tabular aerodynamic data provided, yielding a representative aerodynamic model for coefficient estimation. This also reduced the storage requirements for the 'normal' model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed. The routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.

  7. Topology Optimization - Engineering Contribution to Architectural Design

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2017-10-01

    The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain defined by the shape of a structure is divided into sub-regions, to which different materials are assigned. During the design process material is relocated, but only within its selected region. The proposed idea has been inspired by architectural designs like multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are mathematical, discrete idealizations of physical systems. Engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction between cells takes place only within the neighbouring cells. The interaction is governed by simple, local update rules, which are based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated by selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrated a significant influence of the location of the material sub-regions on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of numerical calculations show that this approach produces results different from those of classical one-material problems.

  8. Behavior of optical properties of coagulated blood sample at 633 nm wavelength

    NASA Astrophysics Data System (ADS)

    Morales Cruzado, Beatriz; Vázquez y Montiel, Sergio; Delgado Atencio, José Alberto

    2011-03-01

    Determination of tissue optical parameters is fundamental for the application of light in either diagnostic or therapeutic procedures. However, in samples of biological tissue in vitro, the optical properties are modified by cellular death or cellular agglomeration that cannot be avoided. These phenomena change the propagation of light within the biological sample. The optical properties of human blood were investigated in vitro at 633 nm using an optical setup that includes a double integrating sphere system. We measured the diffuse transmittance and diffuse reflectance of the blood sample and compared these physical properties with those obtained by Monte Carlo Multi-Layered (MCML) simulation. The extraction of the optical parameters, the absorption coefficient μa, the scattering coefficient μs, and the anisotropy factor g, from the measurements was carried out using a Genetic Algorithm, in which the search procedure is based on the evolution of a population through selection of the best individuals, evaluated by a function that compares their diffuse transmittance and diffuse reflectance with the experimental ones. The algorithm converges rapidly to the best individual, extracting the optical parameters of the sample. We compare our results with those obtained using other retrieval procedures. We found that the scattering coefficient and the anisotropy factor change dramatically due to the formation of clusters.

  9. Super-Resolution Imaging Strategies for Cell Biologists Using a Spinning Disk Microscope

    PubMed Central

    Hosny, Neveen A.; Song, Mingying; Connelly, John T.; Ameer-Beg, Simon; Knight, Martin M.; Wheeler, Ann P.

    2013-01-01

    In this study we use a spinning disk confocal microscope (SD) to generate super-resolution images of multiple cellular features from any plane in the cell. We obtain super-resolution images by using stochastic intensity fluctuations of biological probes, combining Photoactivation Light Microscopy (PALM)/Stochastic Optical Reconstruction Microscopy (STORM) methodologies. We compared different image analysis algorithms for processing super-resolution data to identify the most suitable for analysis of particular cell structures. SOFI was chosen for X and Y and was able to achieve a resolution of ca. 80 nm; however, higher resolution (down to ca. 30 nm) was possible, dependent on the super-resolution image analysis algorithm used. Our method uses low laser power and fluorescent probes which are available either commercially or through the scientific community, and therefore it is gentle enough for biological imaging. Through comparative studies with structured illumination microscopy (SIM) and widefield epifluorescence imaging we identified that our methodology was advantageous for imaging cellular structures which are not immediately at the cell-substrate interface, including the nuclear architecture and mitochondria. We have shown that it is possible to obtain two-colour images, which highlights the potential this technique has for high-content screening, imaging of multiple epitopes, and live cell imaging. PMID:24130668

  10. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  11. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns.

    PubMed

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-01-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  12. NeuroCa: integrated framework for systematic analysis of spatiotemporal neuronal activity patterns from large-scale optical recording data

    PubMed Central

    Jang, Min Jee; Nam, Yoonkey

    2015-01-01

    Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973

  13. Self-organizing feature maps for dynamic control of radio resources in CDMA microcellular networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1998-03-01

    The application of artificial neural networks to the channel assignment problem for code-division multiple access (CDMA) cellular networks has previously been investigated. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference-limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA), whose capacities are bandwidth-limited. Any reduction in interference in CDMA translates linearly into increased capacity. To satisfy the high demand for new services and improved connectivity in mobile communications, microcellular and picocellular systems are being introduced. For these systems, there is a need to develop robust and efficient management procedures for the allocation of power and spectrum to maximize radio capacity. Topology-conserving mappings play an important role in the biological processing of sensory inputs. The same principles underlying Kohonen's self-organizing feature maps (SOFMs) are applied here to the adaptive control of radio resources to minimize interference and hence maximize capacity in direct-sequence (DS) CDMA networks. The approach based on SOFMs is applied to some published examples of both theoretical and empirical models of DS/CDMA microcellular networks in metropolitan areas. The results of the approach for these examples are informally compared to the performance of algorithms for the channel assignment problem based on Hopfield-Tank neural networks and on genetic algorithms.
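
    The learning rule at the core of any such SOFM controller is compact. The Python/NumPy sketch below shows one generic Kohonen update step on a 1-D unit lattice; interpreting the inputs as observed interference conditions and the units as candidate resource assignments is this sketch's assumption, not a reproduction of the author's system.

      import numpy as np

      def sofm_step(weights, x, t, n_steps, eta0=0.5, sigma0=3.0):
          """One Kohonen SOFM update.  weights: (n_units, dim) codebook
          on a 1-D lattice; x: one input vector; t, n_steps control the
          annealing of learning rate and neighborhood width."""
          decay = 1.0 - t / float(n_steps)              # linear annealing
          eta, sigma = eta0 * decay, max(sigma0 * decay, 1e-3)
          bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
          dist = np.abs(np.arange(len(weights)) - bmu)  # lattice distance
          h = np.exp(-dist ** 2 / (2.0 * sigma ** 2))   # neighborhood kernel
          return weights + eta * h[:, None] * (x - weights)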

  14. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: Implement the proposed algorithms, test and fine...alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters...systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational

  15. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    ERIC Educational Resources Information Center

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, its programming requires more suitable explanations for students from different major backgrounds. By supposing sample sequences and using a simple storage system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
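
    For reference, the algorithm discussed fits in a few lines of dynamic programming. Below is a minimal Python sketch computing the global alignment score; the match/mismatch/gap values are illustrative choices, not scores taken from the article.

      def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
          """Global alignment score: F[i][j] is the best score for
          aligning a[:i] with b[:j]."""
          n, m = len(a), len(b)
          F = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              F[i][0] = i * gap                    # a[:i] against gaps
          for j in range(1, m + 1):
              F[0][j] = j * gap                    # gaps against b[:j]
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                                F[i - 1][j] + gap,     # gap in b
                                F[i][j - 1] + gap)     # gap in a
          return F[n][m]

      # usage: score = needleman_wunsch("GATTACA", "GCATGCU")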

  16. The ranking algorithm of the Coach browser for the UMLS metathesaurus.

    PubMed Central

    Harbourt, A. M.; Syed, E. J.; Hole, W. T.; Kingsland, L. C.

    1993-01-01

    This paper presents the novel ranking algorithm of the Coach Metathesaurus browser which is a major module of the Coach expert search refinement program. An example shows how the ranking algorithm can assist in creating a list of candidate terms useful in augmenting a suboptimal Grateful Med search of MEDLINE. PMID:8130570

  17. A Functional Programming Approach to AI Search Algorithms

    ERIC Educational Resources Information Center

    Panovics, Janos

    2012-01-01

    The theory and practice of search algorithms related to state-space-represented problems form a major part of the introductory course on Artificial Intelligence at most universities and colleges offering a degree in computer science. Students usually meet these algorithms only in some imperative or object-oriented language…
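
    To make the contrast concrete, a state-space search can be written in a functional style even in Python, with immutable state tuples and the search parameterized by pure functions rather than by mutable objects. The toy problem below is this sketch's assumption, not an example from the article.

      from collections import deque

      def bfs(start, is_goal, successors):
          """Breadth-first search over a state space given as functions.
          States must be hashable; paths are immutable tuples."""
          frontier = deque([(start, (start,))])
          seen = {start}
          while frontier:
              state, path = frontier.popleft()
              if is_goal(state):
                  return path
              for nxt in successors(state):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append((nxt, path + (nxt,)))
          return None

      # Toy problem: from 1, reach 10 using the moves n+1 and 2n.
      print(bfs(1, lambda n: n == 10, lambda n: (n + 1, 2 * n)))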

  18. Injecting Errors for Testing Built-In Test Software

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and a data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails a very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces; it requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, and then determines whether, as desired, the SUT detects the errors and recovers.
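
    The masking step of the first algorithm is easy to picture in code. The Python fragment below is an illustration of the idea, not the instrumentation described in the report: it wraps a device read so that selected bits are forced low, producing values the BIT pass/fail check should reject.

      def make_faulty_reader(read_device, error_mask):
          """Wrap a hardware read so bits cleared in error_mask read
          as 0, simulating stuck-at-0 faults via an AND operation."""
          def read_with_fault():
              return read_device() & error_mask
          return read_with_fault

      # Example: a register that should read 0xFF; the mask clears
      # bit 3, so BIT criteria expecting 0xFF should flag a failure.
      read_register = lambda: 0xFF
      faulty = make_faulty_reader(read_register, 0xF7)
      assert faulty() == 0xF7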

  19. Algorithms for Maneuvering Spacecraft Around Small Bodies

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Bayard, David

    2006-01-01

    A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
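
    The double discretization can be illustrated compactly: each control channel is a finite Chebyshev series whose coefficients are the optimization variables, evaluated on a discrete time grid. The NumPy sketch below shows only that parameterization (grid size and polynomial degree are assumptions; the convex solve itself is omitted).

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def control_basis(t, t0, tf, degree):
          """Chebyshev basis matrix for u(t) = sum_k a_k * T_k(s), with
          time mapped to s in [-1, 1].  Control values on the grid are
          then B @ a -- linear in the coefficient vector a, which is
          what keeps the discretized problem convex-friendly."""
          s = 2.0 * (t - t0) / (tf - t0) - 1.0   # map [t0, tf] -> [-1, 1]
          return C.chebvander(s, degree)         # shape (len(t), degree+1)

      t = np.linspace(0.0, 60.0, 31)             # discretized time axis
      B = control_basis(t, 0.0, 60.0, degree=8)
      a = np.zeros(9); a[0] = 0.5                # example coefficients
      u = B @ a                                  # control on the grid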

  20. A survey on keeler’s theorem and application of symmetric group for swapping game

    NASA Astrophysics Data System (ADS)

    Pratama, Yohanssen; Prakasa, Yohenry

    2017-01-01

    An episode of Futurama features a two-body mind-switching machine that will not work more than once on the same pair of bodies. The problem is: "Can the switching be undone so as to restore all minds to their original bodies?" Ken Keeler found an algorithm that undoes any mind-scrambling permutation, and Lihua Huang later refined it. We examine how the puzzle can be modeled in terms of group theory, how the symmetric group can be used to solve it, and how to find the most efficient solution. We then implement the algorithms as computer programs and study how the choice of transpositions affects algorithmic complexity. The number of steps produced by each algorithm differs, giving one of them an advantage in efficiency. We compare the Keeler and Huang algorithms to see whether any difference appears when they are run as computer programs, although their asymptotic complexity may remain the same.
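
    Keeler's construction is concrete enough to state as code. The Python sketch below is a standard rendering of the theorem (names and representation are our own; Huang's refinement is not shown): it restores a scrambled permutation using two helper bodies x and y, swapping each pair of bodies at most once.

      def keeler_unscramble(mind_in_body):
          """mind_in_body maps body -> mind it currently holds.  Returns
          a list of swaps, each pair used at most once and each involving
          a helper, that restores every mind to its own body."""
          state = dict(mind_in_body)
          x, y = "x", "y"                       # two fresh helper bodies
          state[x], state[y] = x, y
          swaps = []

          def swap(a, b):                       # exchange minds in a, b
              state[a], state[b] = state[b], state[a]
              swaps.append((a, b))

          seen, odd_cycles = set(), False
          for start in mind_in_body:
              if start in seen or mind_in_body[start] == start:
                  continue
              cycle, c = [], start              # walk one cycle: body
              while c not in seen:              # cycle[i] holds mind
                  seen.add(c)                   # cycle[i+1]
                  cycle.append(c)
                  c = mind_in_body[c]
              swap(x, cycle[0])                 # x takes the cycle's next mind
              for b in cycle[1:]:
                  swap(y, b)                    # walk y along the cycle
              swap(y, cycle[0])                 # y returns mind to cycle[0]
              swap(x, cycle[1])                 # x returns mind to cycle[1]
              odd_cycles = not odd_cycles
          if odd_cycles:                        # x, y end up exchanged;
              swap(x, y)                        # (x, y) was never used
          assert all(state[b] == b for b in state)
          return swaps

      # Example: body 1 holds mind 2, body 2 holds mind 3, body 3 holds mind 1.
      print(keeler_unscramble({1: 2, 2: 3, 3: 1}))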
