Gap filling of 3-D microvascular networks by tensor voting.
Risser, L; Plouraboue, F; Descombes, X
2008-05-01
We present a new algorithm that bridges discontinuities in 3-D images of tubular structures containing undesirable gaps. The proposed method is mainly intended for large 3-D images of microvascular networks. In order to recover the true network topology, we need to fill the gaps between the closest discontinuous vessels. The algorithm presented in this paper aims at achieving this goal. It is based on skeletonization of the segmented network followed by a tensor voting method, and it can merge the most common kinds of discontinuities found in microvascular networks. It is robust, easy to use, and relatively fast. The microvascular network images were obtained using synchrotron tomography imaging at the European Synchrotron Radiation Facility and show samples of intracortical networks. Representative results are illustrated.
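As a rough illustration of the voting idea, the snippet below scores a candidate bridge between two skeleton endpoints by a stick-vote-like combination of distance decay and tangent alignment. This is a deliberately simplified stand-in for full tensor voting, not the paper's method; the decay scale sigma and the quadratic alignment term are illustrative assumptions. Python is used here and in the sketches that follow.

```python
import numpy as np

def link_score(p, tp, q, sigma=10.0):
    """Score bridging a gap from skeleton endpoint p (with unit tangent tp)
    to endpoint q: strong votes for close, well-aligned endpoints.
    A simplified stand-in for a stick tensor vote, not the paper's field."""
    v = q - p
    d = np.linalg.norm(v)
    if d == 0.0:
        return 0.0
    alignment = abs(np.dot(tp, v / d))   # 1 when the link continues the vessel
    return np.exp(-d**2 / sigma**2) * alignment**2

p, tp = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(link_score(p, tp, np.array([4.0, 0.5, 0.0])))   # nearly collinear: high score
print(link_score(p, tp, np.array([0.5, 4.0, 0.0])))   # nearly perpendicular: low score
```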
Preliminary user's manuals for DYNA3D and DYNAP. [In FORTRAN IV for CDC 7600 and Cray-1]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallquist, J. O.
1979-10-01
This report provides a user's manual for DYNA3D, an explicit three-dimensional finite-element code for analyzing the large deformation dynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 8-node solid elements, and the equations of motion are integrated by the central difference method. Post-processors for DYNA3D include GRAPE for plotting deformed shapes and stress contours and DYNAP for plotting time histories. A user's manual for DYNAP is also provided. 23 figures.
Navarro, Gonzalo; Raffinot, Mathieu
2003-01-01
The problem of fast exact and approximate searching for a pattern that contains classes of characters and bounded-size gaps (CBG) in a text has a wide range of applications, among which a very important one is protein pattern matching (for instance, one PROSITE protein site is associated with the CBG [RK] - x(2,3) - [DE] - x(2,3) - Y, where the brackets match any of the letters inside and x(2,3) denotes a gap of length between 2 and 3). Currently, the only way to search for a CBG in a text is to convert it into a full regular expression (RE). However, an RE is more sophisticated than a CBG, and searching for it with an RE pattern matching algorithm complicates the search and makes it slow. For this reason, we design in this article two new practical CBG matching algorithms that are much simpler and faster than all the RE search techniques. The first one looks exactly once at each text character. The second one does not need to consider all the text characters, and hence is usually faster than the first, but in bad cases may have to read the same text character more than once. We then propose a criterion based on the form of the CBG to choose a priori the faster of the two. We also show how to search permitting a few mistakes in the occurrences. We performed many practical experiments using the PROSITE database, and all of them show that our algorithms are the fastest in virtually all cases.
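As a concrete baseline, the naive approach the authors improve on can be sketched in a few lines: translate the CBG into an equivalent regular expression and hand it to a standard RE engine. The helper name cbg_to_regex and the toy sequence are illustrative, not from the paper.

```python
import re

def cbg_to_regex(cbg: str) -> str:
    """Translate a PROSITE-style CBG such as '[RK]-x(2,3)-[DE]-x(2,3)-Y'
    into a regular expression -- the slow baseline the paper improves on."""
    parts = []
    for token in cbg.split("-"):
        if token.startswith("x("):            # bounded gap, e.g. x(2,3)
            lo, hi = token[2:-1].split(",")
            parts.append(".{%s,%s}" % (lo, hi))
        elif token == "x":                    # single wildcard position
            parts.append(".")
        else:                                 # literal or class like [RK]
            parts.append(token)
    return "".join(parts)

pattern = re.compile(cbg_to_regex("[RK]-x(2,3)-[DE]-x(2,3)-Y"))
print(pattern.search("AAKLMDEQWYAA"))         # matches 'KLMDEQWY'
```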
Approximate matching of regular expressions.
Myers, E W; Miller, W
1989-01-01
Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for substrings of A that strongly align with a sequence in R, as required for typical database searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N^2 log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
ERIC Educational Resources Information Center
Young, Forrest W.
A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…
High throughput light absorber discovery, Part 1: An algorithm for automated tauc analysis
Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.
2016-09-23
High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. Here, the applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
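A minimal sketch of the Tauc construction such an algorithm automates: transform the spectrum to (αhν)^(1/r), fit a line to the steepest region, and take its intercept with the energy axis as the band gap estimate. The sliding-window heuristic and synthetic spectrum below are illustrative assumptions, not the paper's expert-mimicking rules.

```python
import numpy as np

def tauc_band_gap(hv, alpha, r=0.5, window=10):
    """Estimate a direct-allowed band gap (r=1/2) by the Tauc method: fit a
    line to the steepest region of (alpha*hv)^(1/r) vs hv and extrapolate to
    the photon-energy axis. A sketch, not the paper's full algorithm."""
    y = (alpha * hv) ** (1.0 / r)
    slopes = np.gradient(y, hv)
    # pick the sliding window with the largest mean slope, fit a line there
    i = np.argmax([slopes[j:j + window].mean() for j in range(len(hv) - window)])
    sel = slice(i, i + window)
    m, b = np.polyfit(hv[sel], y[sel], 1)
    return -b / m                            # x-intercept = band gap estimate

hv = np.linspace(1.5, 3.0, 200)              # synthetic spectrum with Eg = 2.1 eV
alpha = np.where(hv > 2.1, hv - 2.1, 0.0) ** 0.5 / hv
print(round(tauc_band_gap(hv, alpha), 2))    # recovers ~2.1
```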
Thakur, Shalabh; Guttman, David S
2016-06-30
Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package is accompanied by a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems after the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/.
Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon
2018-01-01
We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency preserving region of the filter mask at every gap-filling iteration, and the region converges toward the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and yields results comparable to those of the manually optimized DCT2 algorithm without requiring perfect or full information about the imaging object.
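The underlying fixed-mask DCT gap-filling loop (the classic variant the paper makes adaptive) can be sketched as follows; the mask fraction, initialisation, and iteration count below are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_gap_fill(sino, gap_mask, keep=0.1, n_iter=50):
    """Fill detector-gap bins of a sinogram by iterative low-pass filtering
    in the DCT domain. gap_mask is True where data are missing. A sketch of
    the fixed-mask method; the paper's contribution is making the mask
    object-adaptive across iterations."""
    est = sino.copy()
    est[gap_mask] = sino[~gap_mask].mean()       # crude initialisation
    h, w = sino.shape
    lowpass = np.zeros((h, w), dtype=bool)
    lowpass[: int(h * keep), : int(w * keep)] = True  # fixed low-frequency region
    for _ in range(n_iter):
        coeffs = dctn(est, norm="ortho")
        coeffs[~lowpass] = 0.0                   # keep only low frequencies
        smooth = idctn(coeffs, norm="ortho")
        est[gap_mask] = smooth[gap_mask]         # update only the unknown bins
    return est
```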
Balancing Contention and Synchronization on the Intel Paragon
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.; Nicol, David M.
1996-01-01
The Intel Paragon is a mesh-connected distributed memory parallel computer. It uses an oblivious and deterministic message routing algorithm: this permits us to develop highly optimized schedules for frequently needed communication patterns. The complete exchange is one such pattern. Several approaches are available for carrying it out on the mesh. We study an algorithm developed by Scott. This algorithm assumes that a communication link can carry one message at a time and that a node can only transmit one message at a time. It requires global synchronization to enforce a schedule of transmissions. Unfortunately, global synchronization has substantial overhead on the Paragon. At the same time, the powerful interconnection mechanism of this machine permits 2 or 3 messages to share a communication link with minor overhead. It can also overlap multiple message transmissions from the same node to some extent. We develop a generalization of Scott's algorithm that executes complete exchange with a prescribed contention. Schedules that incur greater contention require fewer synchronization steps. This permits us to trade off contention against synchronization overhead. We describe the performance of this algorithm and compare it with Scott's original algorithm as well as with a naive algorithm that does not take interconnection structure into account. The bounded-contention algorithm is always better than Scott's algorithm and outperforms the naive algorithm for all but the smallest message sizes. The naive algorithm fails to work on meshes larger than 12 x 12. These results show that due consideration of processor interconnect and machine performance parameters is necessary to obtain peak performance from the Paragon and its successor mesh machines.
Bellman's GAP--a language and compiler for dynamic programming in sequence analysis.
Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert
2013-03-01
Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive to carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform of 'real-world' bioinformatics tools. Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics.
Switching algorithm for maglev train double-modular redundant positioning sensors.
He, Ning; Long, Zhiqiang; Xue, Song
2012-01-01
High-resolution positioning for maglev trains is implemented by detecting the tooth-slot structure of the long stator installed along the rail, but there are large joint gaps between long stator sections. When a positioning sensor is below a large joint gap, its positioning signal is invalidated, so double-modular redundant positioning sensors are introduced into the system. This paper studies switching algorithms for these redundant positioning sensors. First, adaptive prediction is applied to the sensor signals, and the prediction errors are used to trigger sensor switching. In order to enhance the reliability of the switching algorithm, wavelet analysis is introduced to suppress measurement disturbances without weakening the signal characteristics reflecting the stator joint gap, based on the correlation between the wavelet coefficients of adjacent scales. The time-delay characteristics of the method are analyzed to guide the algorithm's simplification. Finally, the effectiveness of the simplified switching algorithm is verified through experiments.
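The prediction-error trigger can be sketched with a plain LMS one-step predictor per sensor: at each sample, the output comes from the sensor whose predictor currently fits better. This is a simplified stand-in for the paper's scheme; the wavelet de-noising stage is omitted, and the filter order and step size are illustrative.

```python
import numpy as np

def switch_sensors(s1, s2, order=4, mu=0.01):
    """Run an LMS adaptive one-step predictor on each redundant sensor
    (NumPy arrays of equal length) and, at every sample, output the sensor
    with the smaller prediction error. The paper additionally de-noises the
    signals with wavelets before the errors trigger switching."""
    w = [np.zeros(order), np.zeros(order)]
    out = np.empty(len(s1))
    for n in range(len(s1)):
        errs = []
        for k, s in enumerate((s1, s2)):
            x = s[max(0, n - order):n]
            x = np.pad(x, (order - len(x), 0))    # zero-padded history vector
            e = s[n] - w[k] @ x                   # one-step prediction error
            w[k] = w[k] + mu * e * x              # LMS weight update
            errs.append(abs(e))
        out[n] = s2[n] if errs[1] < errs[0] else s1[n]
    return out
```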
[Algorithm for the automated processing of rheosignals].
Odinets, G S
1988-01-01
An algorithm for rheosignal recognition is examined for a microprocessor device with a display unit and both automated and manual cursor control. The algorithm permits automated registration and processing of rheosignals while taking their variability into account.
A generalized global alignment algorithm.
Huang, Xiaoqiu; Chao, Kun-Mao
2003-01-22
Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
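For contrast with the generalized model, the standard global alignment score that GAP3 extends (with explicitly modelled difference regions) fits in a few lines; this sketch keeps only O(len(b)) memory, matching the space bound quoted above, and its scoring parameters are illustrative.

```python
def global_align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Standard Needleman-Wunsch global alignment score in
    O(len(a)*len(b)) time and O(len(b)) space -- the baseline model that
    GAP3 generalizes for intermittently similar sequences."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        curr = [i * gap]
        for j in range(1, len(b) + 1):
            sub = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr.append(max(sub, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(global_align_score("GATTACA", "GCATGCT"))
```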
Nonequal iteration directional filters permit selective clearance of ripples in passband circuits
NASA Technical Reports Server (NTRS)
Kurpis, G. P.
1970-01-01
Modified directional filter is comprised of alternate pairs of dielectric and air gap filter sections with unequal electrical lengths. Filter provides more flexibility in choosing dielectric material thickness and permits switching from specially ground to standard thicknesses.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
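To fix ideas about the E-step/M-step structure being partitioned, here is a minimal EM for a two-component, unit-variance Gaussian mixture (means only). The initialisation and iteration count are illustrative, and this is not the authors' missing-data/measurement-error model.

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """Minimal EM: estimate the two means of a unit-variance Gaussian
    mixture, making the E-step / M-step alternation concrete."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
    for _ in range(n_iter):
        # E-step: responsibilities under unit-variance components
        d = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted mean update
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
print(em_two_gaussians(x).round(2))   # approximately [-2, 3]
```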
Low-energy electron inelastic mean free paths for liquid water
NASA Astrophysics Data System (ADS)
Nguyen-Truong, Hieu T.
2018-04-01
We improve the Mermin–Penn algorithm (MPA) for determining the energy loss function (ELF) within the dielectric formalism. The present algorithm is applicable not only to real metals, but also to materials that have an energy gap in the excitation spectrum. Applying the improved MPA to liquid water, we show that the present algorithm is able to address the ELF overestimation at the energy gap, and the calculated results are in good agreement with experimental data.
LAI inversion algorithm based on directional reflectance kernels.
Tang, S; Chen, J M; Zhu, Q; Li, X; Chen, M; Sun, R; Zhou, Y; Deng, F; Xie, D
2007-11-01
Leaf area index (LAI) is an important ecological and environmental parameter. A new LAI algorithm is developed using the principles of ground LAI measurements based on canopy gap fraction. First, the relationship between LAI and gap fraction at various zenith angles is derived from the definition of LAI. Then, the directional gap fraction is acquired from a remote sensing bidirectional reflectance distribution function (BRDF) product, using a kernel-driven model and a large-scale directional gap fraction algorithm. The algorithm has been applied to estimate an LAI distribution in China in mid-July 2002. The ground data acquired from two field experiments in Changbai Mountain and Qilian Mountain were used to validate the algorithm. To resolve the scale discrepancy between high resolution ground observations and low resolution remote sensing data, two TM images with a resolution approaching the size of ground plots were used to relate the coarse resolution LAI map to ground measurements. First, an empirical relationship between the measured LAI and a vegetation index was established. Next, a high resolution LAI map was generated using the relationship. The LAI value of a low resolution pixel was calculated from the area-weighted sum of high resolution LAIs composing the low resolution pixel. The results of this comparison showed that the inversion algorithm has an accuracy of 82%. Factors that may influence the accuracy are also discussed in this paper.
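The ground-measurement principle the inversion starts from is the Beer-law gap-fraction model P(θ) = exp(−G(θ)·LAI/cos θ). Solving for LAI at the so-called hinge angle (~57.5°, where the projection coefficient G ≈ 0.5 almost independently of leaf angle distribution) gives a one-line inversion; the values below are illustrative.

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, zenith_deg=57.5, G=0.5):
    """Invert P(theta) = exp(-G * LAI / cos(theta)) for LAI, the standard
    ground gap-fraction relation the remote-sensing algorithm builds on."""
    theta = np.radians(zenith_deg)
    return -np.log(gap_fraction) * np.cos(theta) / G

print(round(lai_from_gap_fraction(0.2), 2))  # gap fraction 0.2 -> LAI ~ 1.73
```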
Automatic Road Gap Detection Using Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.
2011-09-01
Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research focuses on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms aimed at refining road detection results are not as well developed. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of five main steps, as follows. 1) Short gap coverage: a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme (see the sketch after this abstract). 2) Long gap detection: the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: detected long gaps are compensated by two strategies, linear and polynomial; shorter gaps are filled by line fitting, while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed, obtained by comparing the results with correctly compensated ones produced by a human expert. The complete evaluation of the obtained results with their technical discussion is the material of the full paper.
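Step 1 (short gap coverage) can be sketched as a hierarchy of morphological closings with growing structuring elements; the element shape and maximum radius are illustrative assumptions, and the fuzzy long-gap stages are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def cover_short_gaps(road_mask, max_radius=5):
    """Close short gaps in a boolean road mask with morphological closings
    of increasing structuring-element size -- a plain multi-scale closing
    sketching step 1; long gaps are left for the fuzzy inference stages."""
    filled = road_mask.astype(bool).copy()
    base = ndimage.generate_binary_structure(2, 1)
    for r in range(1, max_radius + 1):
        se = ndimage.iterate_structure(base, r)   # diamond of radius r
        filled |= ndimage.binary_closing(filled, structure=se)
    return filled
```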
Multiple sequence alignment using multi-objective based bacterial foraging optimization algorithm.
Rani, R Ranjani; Ramyachitra, D
2016-12-01
Multiple sequence alignment (MSA) is a widespread approach in computational biology and bioinformatics. MSA deals with how sequences of nucleotides and amino acids are aligned with a minimum number of gaps between them, which points to the functional, evolutionary and structural relationships among the sequences. Still, the computation of MSA is a challenging task when it comes to providing efficient accuracy and statistically significant alignment results. In this work, the Bacterial Foraging Optimization Algorithm was employed to align the biological sequences, which resulted in a non-dominated optimal solution. It employs multiple objectives: maximization of similarity, non-gap percentage and conserved blocks, and minimization of gap penalty. The BAliBASE 3.0 benchmark database was utilized to examine the proposed algorithm against other methods. In this paper, two algorithms have been proposed: a hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC) and a Bacterial Foraging Optimization Algorithm. It was found that the hybrid GA-ABC performed better than the existing optimization algorithms, but conserved blocks were still not obtained using GA-ABC. BFO was therefore used for the alignment, and the conserved blocks were obtained. The proposed Multi-Objective Bacterial Foraging Optimization Algorithm (MO-BFO) was compared with the widely used MSA methods Clustal Omega, Kalign, MUSCLE, MAFFT, Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and the hybrid GA-ABC. The final results show that the proposed MO-BFO algorithm yields better alignment than most widely used methods. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Problems in Analyzing Time Series with Gaps and Their Solution with the WinABD Software Package
NASA Astrophysics Data System (ADS)
Desherevskii, A. V.; Zhuravlev, V. I.; Nikolsky, A. N.; Sidorin, A. Ya.
2017-12-01
Technologies for the analysis of time series with gaps are considered. Some algorithms of signal extraction (purification) and evaluation of its characteristics, such as rhythmic components, are discussed for series with gaps. Examples are given for the analysis of data obtained during long-term observations at the Garm geophysical test site and in other regions. The technical solutions used in the WinABD software are considered to most efficiently arrange the operation of relevant algorithms in the presence of observational defects.
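One standard way to extract rhythmic components from a series with gaps, without interpolating over the defects, is the Lomb-Scargle periodogram for unevenly sampled data. The sketch below recovers a 50-unit period from a gappy synthetic series; all values are illustrative, and WinABD's own algorithms are not reproduced here.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
# irregular, gappy sampling: 400 of 1000 possible observation times
t = np.sort(rng.choice(np.arange(0.0, 1000.0), size=400, replace=False))
y = np.sin(2 * np.pi * t / 50.0) + 0.3 * rng.standard_normal(t.size)

# angular frequencies corresponding to periods between 200 and 10 units
freqs = 2 * np.pi / np.linspace(200.0, 10.0, 500)
power = lombscargle(t, y - y.mean(), freqs)
print(round(2 * np.pi / freqs[np.argmax(power)]))   # dominant period, ~50
```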
A survey of provably correct fault-tolerant clock synchronization techniques
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1988-01-01
Six provably correct fault-tolerant clock synchronization algorithms are examined. These algorithms are all presented in the same notation to permit easier comprehension and comparison. The advantages and disadvantages of the different techniques are examined and issues related to the implementation of these algorithms are discussed. The paper argues for the use of such algorithms in life-critical applications.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
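For small instances the minimal gap can be computed exactly by diagonalizing H(s) = (1−s)·H_B + s·H_P on a grid of s, with H_B = −Σ_k σ_x^(k) a transverse-field driver and H_P = (Σ_k a_k σ_z^(k))² the squared partition residue. The sketch below works in the sector symmetric under the global spin flip (which commutes with H(s), so the trivial flip degeneracy is factored out); the instance [8, 4, 2, 1] and the grid are illustrative.

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def site_op(single, k, n):
    """Embed a single-spin operator at site k of an n-spin system."""
    return reduce(np.kron, [single if i == k else I2 for i in range(n)])

def min_gap(a, steps=101):
    """Exact minimal gap along the adiabatic path for a small
    number-partitioning instance a (direct diagonalization, so only
    feasible for a handful of spins)."""
    n = len(a)
    HB = -sum(site_op(sx, k, n) for k in range(n))
    M = sum(a[k] * site_op(sz, k, n) for k in range(n))
    HP = M @ M                     # ground states of HP are best partitions
    # basis of the flip-symmetric sector: (|b> + |complement(b)>)/sqrt(2)
    dim = 2 ** n
    cols = []
    for b in range(dim):
        c = dim - 1 - b
        if b < c:
            v = np.zeros(dim)
            v[b] = v[c] = 1 / np.sqrt(2)
            cols.append(v)
    B = np.array(cols).T
    gap = np.inf
    for s in np.linspace(0.0, 1.0, steps):
        e = np.linalg.eigvalsh(B.T @ ((1 - s) * HB + s * HP) @ B)
        gap = min(gap, e[1] - e[0])
    return gap

print(round(min_gap([8, 4, 2, 1]), 4))
```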
Magnetic reconnection launcher
Cowan, M.
1987-04-06
An electromagnetic launcher includes a plurality of electrical stages which are energized sequentially with the passage of a projectile. Each stage of the launcher includes two or more coils which are arranged coaxially on either closed-loop or straight lines to form gaps between their ends. The projectile has an electrically conductive gap-portion that passes through all the gaps of all the stages in a direction transverse to the axes of the coils. The coils receive an electric current, store magnetic energy, and convert a significant portion of the stored magnetic energy into kinetic energy as the projectile moves through the gap. The magnetic polarity of the opposing coils is in the same direction, e.g. N-S-N-S. The gap portion of the projectile may be made from aluminum and is propelled by the reconnection of magnetic flux stored in the coils, which causes accelerating forces to act upon the projectile at the horizontal surfaces near its rear. The gap portion of the projectile may be flat, rectangular and longer than the length of the opposing coils. The gap portion of the projectile permits substantially unrestricted distribution of the induced currents so that current densities are only high where the useful magnetic force is high. This allows designs which permit ohmic ablation from the rear surfaces of the gap portion of the projectile, allowing much higher velocities to be achieved. An electric power apparatus controls the electric power supplied to the opposing coils until the gap portion of the projectile substantially occupies the gap between the coils, at which time the coils are quickly supplied with peak current. 8 figs.
Shirey, Robert J; Wu, Hsinshun Terry
2018-01-01
This study quantifies the dosimetric accuracy of a commercial treatment planning system as functions of treatment depth, air gap, and range shifter thickness for superficial pencil beam scanning proton therapy treatments. The RayStation 6 pencil beam and Monte Carlo dose engines were each used to calculate the dose distributions for a single treatment plan with varying range shifter air gaps. Central axis dose values extracted from each of the calculated plans were compared to dose values measured with a calibrated PTW Markus chamber at various depths in RW3 solid water. Dose was measured at 12 depths, ranging from the surface to 5 cm, for each of the 18 different air gaps, which ranged from 0.5 to 28 cm. TPS dosimetric accuracy, defined as the ratio of calculated dose relative to the measured dose, was plotted as functions of depth and air gap for the pencil beam and Monte Carlo dose algorithms. The accuracy of the TPS pencil beam dose algorithm was found to be clinically unacceptable at depths shallower than 3 cm with air gaps wider than 10 cm, and increased range shifter thickness only added to the dosimetric inaccuracy of the pencil beam algorithm. Each configuration calculated with Monte Carlo was determined to be clinically acceptable. Further comparisons of the Monte Carlo dose algorithm to the measured spread-out Bragg Peaks of multiple fields used during machine commissioning verified the dosimetric accuracy of Monte Carlo in a variety of beam energies and field sizes. Discrepancies between measured and TPS calculated dose values can mainly be attributed to the ability (or lack thereof) of the TPS pencil beam dose algorithm to properly model secondary proton scatter generated in the range shifter. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Adame, J.; Warzel, S.
2015-11-01
In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.
Undecidability of the spectral gap.
Cubitt, Toby S; Perez-Garcia, David; Wolf, Michael M
2015-12-10
The spectral gap--the energy difference between the ground state and first excited state of a system--is central to quantum many-body physics. Many challenging open problems, such as the Haldane conjecture, the question of the existence of gapped topological spin liquid phases, and the Yang-Mills gap conjecture, concern spectral gaps. These and other problems are particular cases of the general spectral gap problem: given the Hamiltonian of a quantum many-body system, is it gapped or gapless? Here we prove that this is an undecidable problem. Specifically, we construct families of quantum spin systems on a two-dimensional lattice with translationally invariant, nearest-neighbour interactions, for which the spectral gap problem is undecidable. This result extends to undecidability of other low-energy properties, such as the existence of algebraically decaying ground-state correlations. The proof combines Hamiltonian complexity techniques with aperiodic tilings, to construct a Hamiltonian whose ground state encodes the evolution of a quantum phase-estimation algorithm followed by a universal Turing machine. The spectral gap depends on the outcome of the corresponding 'halting problem'. Our result implies that there exists no algorithm to determine whether an arbitrary model is gapped or gapless, and that there exist models for which the presence or absence of a spectral gap is independent of the axioms of mathematics.
A Novel Center Star Multiple Sequence Alignment Algorithm Based on Affine Gap Penalty and K-Band
NASA Astrophysics Data System (ADS)
Zou, Quan; Shan, Xiao; Jiang, Yi
Multiple sequence alignment is one of the most important topics in computational biology, but existing methods cannot yet deal with very large data sets. With the development of copy-number variant (CNV) and single nucleotide polymorphism (SNP) research, many researchers want to align large numbers of similar sequences for detecting CNVs and SNPs. In this paper, we propose a novel multiple sequence alignment algorithm based on an affine gap penalty and k-band. It aligns more quickly and accurately, which will be helpful for mining CNVs and SNPs. Experiments demonstrate the performance of our algorithm.
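The k-band idea restricts the dynamic programming matrix to a diagonal band of half-width k, cutting the work from O(nm) to O(k·n) cell updates when the sequences are highly similar. A minimal sketch with unit edit costs (the paper combines the band with an affine gap penalty inside a center-star MSA):

```python
def banded_edit_distance(a, b, k):
    """Pairwise alignment cost restricted to a band of half-width k around
    the main diagonal. Returns None if no alignment can stay in the band.
    Unit costs for brevity; affine gap penalties would replace the +1 terms."""
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None                 # optimal path cannot stay inside the band
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0
    for j in range(1, min(m, k) + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        if i <= k:
            D[i][0] = i
        for j in range(max(1, i - k), min(m, i + k) + 1):
            D[i][j] = min(D[i - 1][j - 1] + (a[i - 1] != b[j - 1]),  # (mis)match
                          D[i - 1][j] + 1,                            # deletion
                          D[i][j - 1] + 1)                            # insertion
    return D[n][m]

print(banded_edit_distance("ACGTACGT", "ACGAACGT", 2))  # 1 substitution
```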
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
New correction procedures for the fast field program which extend its range
NASA Technical Reports Server (NTRS)
West, M.; Sack, R. A.
1990-01-01
A fast field program (FFP) algorithm was developed based on the method of Lee et al. for the prediction of sound pressure level from low frequency, high intensity sources. In order to permit accurate predictions at distances greater than 2 km, new correction procedures had to be included in the algorithm. Certain functions, whose Hankel transforms can be determined analytically, are subtracted from the depth-dependent Green's function. The distance response is then obtained as the sum of these transforms and the Fast Fourier Transform (FFT) of the residual k-dependent function. One procedure, which permits the elimination of most complex exponentials, has allowed significant changes in the structure of the FFP algorithm, resulting in a substantial reduction in computation time.
Mission-oriented requirements for updating MIL-H-8501: Calspan proposed structure and rationale
NASA Technical Reports Server (NTRS)
Chalk, C. R.; Radford, R. C.
1985-01-01
This report documents the effort by Arvin/Calspan Corporation to formulate a revision of MIL-H-8501A in terms of Mission-Oriented Flying Qualities Requirements for Military Rotorcraft. Emphasis is placed on development of a specification structure which will permit addressing Operational Missions and Flight Phases, Flight Regions, Classification of Required Operational Capability, Categorization of Flight Phases, and Levels of Flying Qualities. A number of definitions are established to permit addressing the rotorcraft state, flight envelopes, environments, and the conditions under which degraded flying qualities are permitted. Tentative requirements are drafted for Required Operational Capability Class 1. Also included is a Background Information and Users Guide for the draft specification structure proposed for the MIL-H-8501A revision. The report also contains a discussion of critical data gaps, attempts to prioritize these gaps, and suggests experiments that should be performed to generate data needed to support formulation of quantitative design criteria for the additional Operational Capability Classes 2, 3, and 4.
Complexity of the Quantum Adiabatic Algorithm
NASA Astrophysics Data System (ADS)
Hen, Itay
2013-03-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms. Here, we discuss several aspects of the quantum adiabatic algorithm. We analyze the efficiency of the algorithm on several "hard" (NP) computational problems. Studying the size dependence of the typical minimum energy gap of the Hamiltonians of these problems using quantum Monte Carlo methods, we find that while for most problems the minimum gap decreases exponentially with the size of the problem, indicating that the QAA is not more efficient than existing classical search algorithms, for other problems there is evidence to suggest that the gap may be polynomial near the phase transition. We also discuss applications of the QAA to "real life" problems and how they can be implemented on currently available (albeit prototypical) quantum hardware such as "D-Wave One", which imposes serious restrictions as to which type of problems may be tested. Finally, we discuss different approaches to find improved implementations of the algorithm, such as local adiabatic evolution, adaptive methods, local search in Hamiltonian space and others.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
... priority allocation algorithm for the SPXPM option class,\\5\\ subject to certain conditions. \\5\\ SPXPM is... algorithm in effect for the class, subject to various conditions set forth in subparagraphs (b)(3)(A... permit the allocation algorithm in effect for AIM in the SPXPM option class to be the price-time priority...
A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements
NASA Astrophysics Data System (ADS)
Vlasenko, Andrey; Steele, Edward C. C.; Nimmo-Smith, W. Alex M.
2015-06-01
The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements becomes too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a 'black box' algorithm requiring no user background in fluid dynamics, it restores the physical structure of the flow in gappy or noisy data in accordance with its hydrodynamical basis. Its use does not depend on the type of flow or on the types of gaps or noise in the measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, would contaminate the measurements.
Reforms that Could Help Narrow the Achievement Gap. Policy Perspectives
ERIC Educational Resources Information Center
Rothstein, Richard
2006-01-01
Americans have concluded that the achievement gap is the fault of "failing schools" because it makes no common sense that it could be otherwise. After all, how much money a family has, or a child's skin color, should not influence how well that child learns to read. If teachers know how to teach and if schools permit no distractions, children…
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
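The simplest physics option described above (mono-energetic particles, isotropic scattering) can be illustrated with a toy slab-transmission Monte Carlo; the mean free path, absorption probability, and geometry below are illustrative values, not ecode parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def slab_transmission(thickness, mfp=1.0, absorb=0.1, n=20_000):
    """Toy Monte Carlo in the spirit of the simplest transport model:
    exponential free flights, isotropic scattering, possible absorption at
    each collision, 1-D planar slab. Returns the transmitted fraction."""
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                      # depth and direction cosine
        while True:
            x += mu * rng.exponential(mfp)    # free flight to next collision
            if x >= thickness:
                transmitted += 1
                break
            if x < 0.0:                       # escaped back out the front face
                break
            if rng.random() < absorb:         # absorbed at the collision site
                break
            mu = rng.uniform(-1.0, 1.0)       # isotropic re-direction
    return transmitted / n

print(slab_transmission(3.0))
```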
NASA Astrophysics Data System (ADS)
Pelicano, Christian Mark; Rapadas, Nick; Cagatan, Gerard; Magdaluyo, Eduardo
2017-12-01
Herein, the crystallite size and band gap energy of zinc oxide (ZnO) quantum dots were predicted using an artificial neural network (ANN). Three input factors, including reagent ratio, growth time, and growth temperature, were examined with respect to crystallite size and band gap energy as response factors. The results generated by the neural network model were then compared with the experimental results. Experimental crystallite size and band gap energy of ZnO quantum dots were measured from TEM images and absorbance spectra, respectively. The Levenberg-Marquardt (LM) algorithm was used as the learning algorithm for the ANN model. The performance of the ANN model was then assessed through mean square error (MSE) and regression values. Based on the results, the ANN modelling results are in good agreement with the experimental data.
Validation of Splicing Events in Transcriptome Sequencing Data
Kaisers, Wolfgang; Ptok, Johannes; Schwender, Holger; Schaal, Heiner
2017-01-01
Genomic alignments of sequenced cellular messenger RNA contain gapped alignments which are interpreted as a consequence of intron removal. The resulting gap-sites, genomic locations of alignment gaps, are landmarks representing potential splice-sites. As alignment algorithms report gap-sites with a considerable false discovery rate, validations are required. We describe two quality scores, gap quality score (gqs) and weighted gap information score (wgis), developed for validation of putative splicing events: while gqs solely relies on alignment data, wgis additionally considers information from the genomic sequence. FASTQ files obtained from 54 human dermal fibroblast samples were aligned against the human genome (GRCh38) using TopHat and STAR aligner. Statistical properties of gap-sites validated by gqs and wgis were evaluated by their sequence similarity to known exon-intron borders. Within the 54 samples, TopHat identifies 1,000,380 and STAR reports 6,487,577 gap-sites. Due to the lack of strand information, however, the percentage of identified GT-AG gap-sites is rather low. While gap-sites from TopHat contain ≈89% GT-AG, gap-sites from STAR only contain ≈42% GT-AG dinucleotide pairs in merged data from 54 fibroblast samples. Validation with gqs yields 156,251 gap-sites from TopHat alignments and 166,294 from STAR alignments. Validation with wgis yields 770,327 gap-sites from TopHat alignments and 1,065,596 from STAR alignments. Both alignment algorithms, TopHat and STAR, report gap-sites with a considerable false discovery rate, which can be drastically reduced by validation with gqs and wgis. PMID:28545234
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
A possible pole problem in the formula for klystron gap fields
NASA Technical Reports Server (NTRS)
Kosmahl, H. G.
1977-01-01
In isolated cases a pole may be encountered in a previously published solution for the fields in a klystron gap. Formulas, permitting the critical combinations of parameters to be defined, are presented. It is noted that the region of inaccuracy surrounding the pole is sufficiently small and that a 0.1% change in the field changing parameter is enough to avoid it.
GASP: Gapped Ancestral Sequence Prediction for proteins
Edwards, Richard J; Shields, Denis C
2004-01-01
Background The prediction of ancestral protein sequences from multiple sequence alignments is useful for many bioinformatics analyses. Predicting ancestral sequences is not a simple procedure and relies on accurate alignments and phylogenies. Several algorithms exist based on Maximum Parsimony or Maximum Likelihood methods but many current implementations are unable to process residues with gaps, which may represent insertion/deletion (indel) events or sequence fragments. Results Here we present a new algorithm, GASP (Gapped Ancestral Sequence Prediction), for predicting ancestral sequences from phylogenetic trees and the corresponding multiple sequence alignments. Alignments may be of any size and contain gaps. GASP first assigns the positions of gaps in the phylogeny before using a likelihood-based approach centred on amino acid substitution matrices to assign ancestral amino acids. Important outgroup information is used by first working down from the tips of the tree to the root, using descendant data only to assign probabilities, and then working back up from the root to the tips using descendant and outgroup data to make predictions. GASP was tested on a number of simulated datasets based on real phylogenies. Prediction accuracy for ungapped data was similar to three alternative algorithms tested, with GASP performing better in some cases and worse in others. Adding simple insertions and deletions to the simulated data did not have a detrimental effect on GASP accuracy. Conclusions GASP (Gapped Ancestral Sequence Prediction) will predict ancestral sequences from multiple protein alignments of any size. Although not as accurate in all cases as some of the more sophisticated maximum likelihood approaches, it can process a wide range of input phylogenies and will predict ancestral sequences for gapped and ungapped residues alike. PMID:15350199
Automatic Classification Using Supervised Learning in a Medical Document Filtering Application.
ERIC Educational Resources Information Center
Mostafa, J.; Lam, W.
2000-01-01
Presents a multilevel model of the information filtering process that permits document classification. Evaluates a document classification approach based on a supervised learning algorithm, measures the accuracy of the algorithm in a neural network that was trained to classify medical documents on cell biology, and discusses filtering…
Edge-following algorithm for tracking geological features
NASA Technical Reports Server (NTRS)
Tietz, J. C.
1977-01-01
Sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from earth resources satellites. Technique eliminates expensive high-resolution cameras. System might also be adaptable for application in monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or X-ray images.
Quantum adiabatic computation with a constant gap is not useful in one dimension.
Hastings, M B
2009-07-31
We show that it is possible to use a classical computer to efficiently simulate the adiabatic evolution of a quantum system in one dimension with a constant spectral gap, starting the adiabatic evolution from a known initial product state. The proof relies on a recently proven area law for such systems, implying the existence of a good matrix product representation of the ground state, combined with an appropriate algorithm to update the matrix product state as the Hamiltonian is changed. This implies that adiabatic evolution with such Hamiltonians is not useful for universal quantum computation. Therefore, adiabatic algorithms which are useful for universal quantum computation either require a spectral gap tending to zero or need to be implemented in more than one dimension (we leave open the question of the computational power of adiabatic simulation with a constant gap in more than one dimension).
NASA Astrophysics Data System (ADS)
de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.
In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to learning series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as training objectives both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for a period of eight years of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules with returns in the testing period far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work proves that the proposed methodology is valid for different assets in a different market than in previous work.
Performance Analysis of Different Backoff Algorithms for WBAN-Based Emerging Sensor Networks
Khan, Pervez; Ullah, Niamat; Ali, Farman; Ullah, Sana; Hong, Youn-Sik; Lee, Ki-Young; Kim, Hoon
2017-01-01
The Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) procedure of the IEEE 802.15.6 Medium Access Control (MAC) protocol for the Wireless Body Area Network (WBAN) uses an Alternative Binary Exponential Backoff (ABEB) procedure. The backoff algorithm plays an important role in avoiding collisions in wireless networks. The Binary Exponential Backoff (BEB) algorithm used in different standards does not obtain optimum performance due to the enormous Contention Window (CW) gaps induced by packet collisions. The IEEE 802.15.6 CSMA/CA therefore adopts the ABEB procedure to avoid large CW gaps upon each collision. However, the ABEB algorithm may lead to a high collision rate (as the CW size is incremented on every alternate collision) and poor utilization of the channel due to the gap between subsequent CWs. To minimize the gap between subsequent CW sizes, we adopted the Prioritized Fibonacci Backoff (PFB) procedure. This procedure leads to a smooth and gradual increase in CW size after each collision, which eventually decreases the waiting time, and the contending node can access the channel promptly with little delay, while ABEB leads to irregular and fluctuating CW values, which eventually increase collisions and the waiting time before a re-transmission attempt. We analytically approach this problem by employing a Markov chain to design the PFB scheme for the CSMA/CA procedure of the IEEE 802.15.6 standard. The performance of the PFB algorithm is compared against the ABEB function of WBAN CSMA/CA. The results show that the PFB procedure adopted for IEEE 802.15.6 CSMA/CA outperforms the ABEB procedure. PMID:28257112
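The difference between the two growth laws is easy to see numerically: BEB doubles the contention window after every collision, while a Fibonacci-style window (the PFB idea) grows by the golden-ratio factor ≈1.618, so consecutive CW sizes stay closer together. The CW bounds below are illustrative, not the IEEE 802.15.6 user-priority tables.

```python
def beb_windows(n=8, cw_min=8, cw_max=256):
    """Binary exponential backoff: the CW doubles after every collision."""
    cw, seq = cw_min, []
    for _ in range(n):
        seq.append(cw)
        cw = min(cw * 2, cw_max)
    return seq

def fib_windows(n=8, cw_min=8, cw_max=256):
    """Fibonacci-style backoff: successive CW sizes follow a Fibonacci
    sequence, so the jump between consecutive windows shrinks relative to
    the window size (growth ratio -> 1.618 instead of 2)."""
    a, b, seq = cw_min, 13, []       # 8 and 13 are consecutive Fibonacci numbers
    for _ in range(n):
        seq.append(min(a, cw_max))
        a, b = b, a + b
    return seq

print(beb_windows())  # [8, 16, 32, 64, 128, 256, 256, 256]
print(fib_windows())  # [8, 13, 21, 34, 55, 89, 144, 233]
```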
Dynamic programming algorithms for biological sequence comparison.
Pearson, W R; Miller, W
1992-01-01
Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N^2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N^2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q + rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
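For the more general penalty g = q + rk mentioned above, Gotoh's three-state recurrence keeps the quadratic time bound while using only linear space for the score; a minimal sketch with illustrative scoring parameters:

```python
def affine_gap_score(a, b, match=5, mismatch=-4, q=-10, r=-1):
    """Global alignment score with affine gap penalty g(k) = q + r*k
    (Gotoh's recurrence), in O(len(a)*len(b)) time and O(len(b)) space."""
    NEG = float("-inf")
    m = len(b)
    S_prev = [0] + [q + r * j for j in range(1, m + 1)]   # previous row of best scores
    F = [NEG] * (m + 1)            # best score ending in a vertical gap
    for i in range(1, len(a) + 1):
        S_curr = [q + r * i] + [NEG] * m
        E = NEG                    # best score ending in a horizontal gap
        for j in range(1, m + 1):
            E = max(E + r, S_curr[j - 1] + q + r)         # extend or open gap in a
            F[j] = max(F[j] + r, S_prev[j] + q + r)       # extend or open gap in b
            sub = S_prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            S_curr[j] = max(sub, E, F[j])
        S_prev = S_curr
    return S_prev[m]

print(affine_gap_score("ACGTTTACGT", "ACGTACGT"))  # 8 matches, one 2-long gap: 28
```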
Evaluating ACLS Algorithms for the International Space Station (ISS) - A Paradigm Revisited
NASA Technical Reports Server (NTRS)
Alexander, Dave; Brandt, Keith; Locke, James; Hurst, Victor, IV; Mack, Michael D.; Pettys, Marianne; Smart, Kieran
2007-01-01
The ISS may have communication gaps of up to 45 minutes during each orbit; it is therefore imperative to have medical protocols, including an effective ACLS algorithm, that can be executed reliably and autonomously during flight. The aim of this project was to compare the effectiveness of the current ACLS algorithm with that of an improved algorithm having a new navigation format.
Scanning wind-vector scatterometers with two pencil beams
NASA Technical Reports Server (NTRS)
Kirimoto, T.; Moore, R. K.
1984-01-01
A scanning pencil-beam scatterometer for ocean wind-vector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power, and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation and by other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results for a spaceborne dual-beam scanning scatterometer with 1 watt of radiated power at an orbital altitude of 900 km are described. Two novel algorithms for removing the aliases in the wind vector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.
ARYANA: Aligning Reads by Yet Another Approach.
Gholami, Milad; Arbabi, Aryan; Sharifi-Zarchi, Ali; Chitsaz, Hamidreza; Sadeghi, Mehdi
2014-01-01
Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms, which depend on fast and accurate alignment. We introduce ARYANA, a fast gapped read aligner, developed on the basis of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of time-consuming backtracking procedures for handling mismatches, ARYANA comes with a seed-and-extend algorithmic framework and significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases, ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read-length trend as sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. ARYANA with complete source code can be obtained from http://github.com/aryana-aligner. PMID:25252881
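The seed-and-extend idea at ARYANA's core can be caricatured in a few lines. The sketch below (ours; ARYANA's engine, index, and extension logic are far more elaborate) looks up exact k-mer seeds in a hash table over the reference and extends each hit without gaps while mismatches stay under a budget.

from collections import defaultdict

def build_index(ref, k=11):
    index = defaultdict(list)
    for i in range(len(ref) - k + 1):
        index[ref[i:i + k]].append(i)
    return index

def align(read, ref, index, k=11, max_mismatch=2):
    # return (reference position, mismatches) of the best seed extension, or None
    best = None
    for s in range(0, len(read) - k + 1, k):              # non-overlapping seeds
        for pos in index.get(read[s:s + k], []):
            start = pos - s                               # implied read origin
            if start < 0 or start + len(read) > len(ref):
                continue
            mm = sum(rc != gc for rc, gc in zip(read, ref[start:start + len(read)]))
            if mm <= max_mismatch and (best is None or mm < best[1]):
                best = (start, mm)
    return best

ref = "ACGTACGTTAGCCGATTACAGGATCCACGTTAGC"
print(align("GATTACAGGAT", ref, build_index(ref)))        # (13, 0)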
Simulation of Automated Vehicles' Drive Cycles
DOT National Transportation Integrated Search
2018-02-28
This research has two objectives: 1) To develop algorithms for plausible and legally-justifiable freeway car-following and arterial-street gap acceptance driving behavior for AVs 2) To implement these algorithms on a representative road network, in o...
Experimental realization of a one-way quantum computer algorithm solving Simon's problem.
Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G
2014-11-14
We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
An Algorithm for the Calculation of Exact Term Discrimination Values.
ERIC Educational Resources Information Center
Willett, Peter
1985-01-01
Reports algorithm for calculation of term discrimination values that is sufficiently fast in operation to permit use of exact values. Evidence is presented to show that relationship between term discrimination and term frequency is crucially dependent upon type of inter-document similarity measure used for calculation of discrimination values. (13…
Expedient Gap Definition Using 3D LADAR
2006-09-01
The Battlespace Gap Definition and Defeat (GAP) Program (ATO IV.GC.2004.02) is conducted by the U.S. Army Engineer Research and Development Center (ERDC) in conjunction with the U.S. Army. Under this program, ASI has developed an algorithm to reduce the 3D point cloud acquired with the LADAR system into sets of 2D...
Adaptive kernel regression for freehand 3D ultrasound reconstruction
NASA Astrophysics Data System (ADS)
Alshalalfah, Abdel-Latif; Daoud, Mohammad I.; Al-Najar, Mahasen
2017-03-01
Freehand three-dimensional (3D) ultrasound imaging enables low-cost and flexible 3D scanning of arbitrary-shaped organs, where the operator can freely move a two-dimensional (2D) ultrasound probe to acquire a sequence of tracked cross-sectional images of the anatomy. Often, the acquired 2D ultrasound images are irregularly and sparsely distributed in the 3D space. Several 3D reconstruction algorithms have been proposed to synthesize 3D ultrasound volumes based on the acquired 2D images. A challenging task during the reconstruction process is to preserve the texture patterns in the synthesized volume and ensure that all gaps in the volume are correctly filled. This paper presents an adaptive kernel regression algorithm that can effectively reconstruct high-quality freehand 3D ultrasound volumes. The algorithm employs a kernel regression model that enables nonparametric interpolation of the voxel gray-level values. The kernel size of the regression model is adaptively adjusted based on the characteristics of the voxel that is being interpolated. In particular, when the algorithm is employed to interpolate a voxel located in a region with dense ultrasound data samples, the size of the kernel is reduced to preserve the texture patterns. On the other hand, the size of the kernel is increased in areas that include large gaps to enable effective gap filling. The performance of the proposed algorithm was compared with seven previous interpolation approaches by synthesizing freehand 3D ultrasound volumes of a benign breast tumor. The experimental results show that the proposed algorithm outperforms the other interpolation approaches.
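The bandwidth-adaptation idea translates directly into code. Below is a one-dimensional analogue (ours, with illustrative parameters, not the authors' implementation): a Gaussian kernel regression whose bandwidth shrinks where samples are dense, preserving detail, and grows near gaps so they are still filled.

import numpy as np

def adaptive_kernel_regression(xs, ys, grid, base_h=0.05, k=5):
    # xs: sample positions, ys: sample values, grid: query positions
    out = np.empty_like(grid)
    for i, g in enumerate(grid):
        d = np.abs(xs - g)
        h = max(base_h, np.sort(d)[min(k, len(d) - 1)])  # k-NN distance sets the bandwidth
        w = np.exp(-0.5 * (d / h) ** 2)
        out[i] = np.dot(w, ys) / w.sum()
    return out

rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0, 1, 40))
ys = np.sin(2 * np.pi * xs)
print(adaptive_kernel_regression(xs, ys, np.linspace(0, 1, 11)).round(2))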
Giancarlo, Raffaele; Scaturro, Davide; Utro, Filippo
2008-10-29
Inferring cluster structure in microarray datasets is a fundamental task for the so-called -omic sciences. It is also a fundamental question in Statistics, Data Analysis and Classification, in particular with regard to the prediction of the number of clusters in a dataset, usually established via internal validation measures. Despite the wealth of internal measures available in the literature, new ones have been proposed recently, some of them specifically for microarray data. We consider five such measures: Clest, Consensus (Consensus Clustering), FOM (Figure of Merit), Gap (Gap Statistics) and ME (Model Explorer), in addition to the classic WCSS (Within Cluster Sum-of-Squares) and KL (Krzanowski and Lai index). We perform extensive experiments on six benchmark microarray datasets, using both Hierarchical and K-means clustering algorithms, and we provide an analysis assessing both the intrinsic ability of a measure to predict the correct number of clusters in a dataset and its merit relative to the other measures. We pay particular attention both to precision and speed. Moreover, we also provide various fast approximation algorithms for the computation of Gap, FOM and WCSS. The main result is a hierarchy of those measures in terms of precision and speed, highlighting some of their merits and limitations not reported before in the literature. Based on our analysis, we draw several conclusions for the use of those internal measures on microarray data; we report the main ones. Consensus is by far the best performer in terms of predictive power and is remarkably algorithm-independent. Unfortunately, on large datasets, it may be of no use because of its non-trivial computer time demand (weeks on a state-of-the-art PC). FOM is the second-best performer although, quite surprisingly, it may not be competitive in this scenario: it has essentially the same predictive power as WCSS but is 6 to 100 times slower, depending on the dataset. The approximation algorithms for the computation of FOM, Gap and WCSS perform very well, i.e., they are faster while still granting a very close approximation of FOM and WCSS. The approximation algorithm for the computation of Gap deserves to be singled out, since it has a predictive power far better than Gap, is competitive with the other measures, and is at least two orders of magnitude faster than Gap. Another important novel conclusion that can be drawn from our analysis is that all the measures we have considered show severe limitations on large datasets, either due to computational demand (Consensus, as already mentioned, Clest and Gap) or to lack of precision (all of the other measures, including their approximations). The software and datasets are available under the GNU GPL on the supplementary material web page.
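For readers unfamiliar with the two simplest measures above, the following toy Python sketch (ours; the paper's approximation algorithms and full protocols involve considerably more machinery) computes WCSS and a bare-bones Gap statistic against uniform reference datasets.

import numpy as np

def kmeans(data, k, iters=25, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = ((data[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = data[labels == c].mean(axis=0)
    return centers, labels

def wcss(data, labels, centers):
    # within-cluster sum of squares
    return sum(((data[labels == c] - centers[c]) ** 2).sum() for c in range(len(centers)))

def gap_statistic(data, k, n_ref=10, seed=1):
    rng = np.random.default_rng(seed)
    centers, labels = kmeans(data, k)
    lo, hi = data.min(axis=0), data.max(axis=0)
    ref_w = []
    for _ in range(n_ref):
        ref = rng.uniform(lo, hi, data.shape)   # null reference with no cluster structure
        rc, rl = kmeans(ref, k)
        ref_w.append(wcss(ref, rl, rc))
    return float(np.mean(np.log(ref_w)) - np.log(wcss(data, labels, centers)))

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 0.3, (60, 2)), rng.normal(3, 0.3, (60, 2))])
print([round(gap_statistic(data, k), 2) for k in (1, 2, 3)])  # expected to peak at k = 2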
Mind the Gaps: Controversies about Algorithms, Learning and Trendy Knowledge
ERIC Educational Resources Information Center
Argenton, Gerald
2017-01-01
This article critically explores the ways by which the Web could become a more learning-oriented medium in the age of, but also in spite of, the newly bred algorithmic cultures. The social dimension of algorithms is reported in literature as being a socio-technological entanglement that has a powerful influence on users' practices and their lived…
A stochastic conflict resolution model for trading pollutant discharge permits in river systems.
Niksokhan, Mohammad Hossein; Kerachian, Reza; Amin, Pedram
2009-07-01
This paper presents an efficient methodology for developing pollutant discharge permit trading in river systems, considering the conflicting interests of the decision-makers and stakeholders involved. In this methodology, a trade-off curve between objectives is developed using a powerful and recently developed multi-objective genetic algorithm known as the Nondominated Sorting Genetic Algorithm-II (NSGA-II). The best non-dominated solution on the trade-off curve is selected using the Young conflict resolution theory, which considers the utility functions of decision-makers and stakeholders of the system. These utility functions are related to the total treatment cost and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using Monte Carlo analysis. Finally, an optimization model provides the trading discharge permit policies. The practical utility of the proposed methodology in decision-making is illustrated through a realistic example of the Zarjub River in the northern part of Iran.
MIMO: an efficient tool for molecular interaction maps overlap
2013-01-01
Background Molecular pathways represent an ensemble of interactions occurring among molecules within the cell and between cells. The identification of similarities between molecular pathways across organisms and functions has a critical role in understanding complex biological processes. For the inference of such novel information, the comparison of molecular pathways requires accounting for imperfect matches (flexibility) and efficiently handling complex network topologies. To date, these characteristics are only partially available in tools designed to compare molecular interaction maps. Results Our approach MIMO (Molecular Interaction Maps Overlap) addresses the first problem by allowing the introduction of gaps and mismatches between query and template pathways and permits, when necessary, supervised queries incorporating a priori biological information. It then addresses the second issue by relying directly on the rich graph topology described in the Systems Biology Markup Language (SBML) standard, and uses multidigraphs to efficiently handle multiple queries on biological graph databases. The algorithm has been successfully used here to highlight the contact points between various human pathways in the Reactome database. Conclusions MIMO offers a flexible and efficient graph-matching tool for comparing complex biological pathways. PMID:23672344
Analysis of sequencing and scheduling methods for arrival traffic
NASA Technical Reports Server (NTRS)
Neuman, Frank; Erzberger, Heinz
1990-01-01
The air traffic control subsystem that performs scheduling is discussed. The function of the scheduling algorithms is to plan automatically the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several important scheduling algorithms are described and the statistical performance of the scheduling algorithms is examined. Scheduling brings order to an arrival sequence for aircraft. First-come-first-served scheduling (FCFS) establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the traffic, gaps will remain in the scheduled sequence of aircraft. These gaps are filled, or partially filled, by time-advancing the leading aircraft after a gap while still preserving the FCFS order. Tightly scheduled groups of aircraft remain with a mix of heavy and large aircraft. Separation requirements differ for different types of aircraft trailing each other. Advantage is taken of this fact through mild reordering of the traffic, thus shortening the groups and reducing average delays. Actual delays for different samples with the same statistical parameters vary widely, especially for heavy traffic.
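A toy version of first-come-first-served scheduling with time advance (our illustration; the separation values are made up, not the paper's) can be written in a few lines: aircraft keep their FCFS order, but an aircraft may land up to max_advance seconds earlier than its estimated time to shrink a gap.

SEP = {("heavy", "heavy"): 90, ("heavy", "large"): 120,
       ("large", "heavy"): 60, ("large", "large"): 60}   # illustrative seconds

def schedule(arrivals, max_advance=30):
    # arrivals: list of (eta_seconds, aircraft_type); FCFS by estimated arrival
    arrivals = sorted(arrivals)
    times, last = [], None
    for eta, actype in arrivals:
        if last is None:
            t = eta
        else:
            earliest = times[-1] + SEP[(last, actype)]
            t = max(earliest, eta - max_advance)   # time-advance into the gap
        times.append(t)
        last = actype
    return list(zip(times, (a[1] for a in arrivals)))

print(schedule([(0, "heavy"), (200, "large"), (210, "heavy")]))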
An analytical algorithm for 3D magnetic field mapping of a watt balance magnet
NASA Astrophysics Data System (ADS)
Fu, Zhuang; Zhang, Zhonghua; Li, Zhengkun; Zhao, Wei; Han, Bing; Lu, Yunfeng; Li, Shisong
2016-04-01
A yoke-based permanent magnet, which has been employed in many watt balances at national metrology institutes, is supposed to generate a strong and uniform magnetic field in an air gap in the radial direction. However, in reality the fringe effect due to the finite height of the air gap introduces an undesired vertical magnetic component into the air gap, which should either be measured or modeled for optimization of the watt balance. A recent publication, i.e. Li et al (2015 Metrologia 52 445), presented a full field-mapping method, which in theory supplies useful information for profile characterization and misalignment analysis. This article is supplementary material to Li et al (2015 Metrologia 52 445); it develops a different analytical algorithm to represent the 3D magnetic field of a watt balance magnet based on only one measurement of the radial magnetic flux density along the vertical direction, B_r(z). The new algorithm is based on the electromagnetic nature of the magnet and has a much better accuracy.
The combination of direct and paired link graphs can boost repetitive genome assembly
Shi, Wenyu; Ji, Peifeng
2017-01-01
Currently, most paired-link-based scaffolding algorithms intrinsically mask the sequences between two linked contigs and bypass their direct link information embedded in the original de Bruijn assembly graph. Such a disadvantage substantially complicates the scaffolding process and leads to an inability to resolve repetitive contig assembly. Here we present a novel algorithm, inGAP-sf, for effectively generating high-quality and continuous scaffolds. inGAP-sf achieves this by using a new strategy based on the combination of direct link and paired link graphs, in which the direct link is used to increase graph connectivity and to decrease graph complexity, and the paired link is employed to supervise the traversing process on the direct link graph. Such an advantage greatly facilitates the assembly of short-repeat enriched regions. Moreover, a new comprehensive decision model is developed to eliminate the noise routes accompanying the introduced direct link. Through extensive evaluations on both simulated and real datasets, we demonstrated that inGAP-sf outperforms most genome scaffolding algorithms by generating more accurate and continuous assembly, especially for short repetitive regions. PMID:27924003
Physical Models for Particle Tracking Simulations in the RF Gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shishlo, Andrei P.; Holmes, Jeffrey A.
2015-06-01
This document describes the algorithms that are used in the PyORBIT code to track the particles accelerated in the Radio-Frequency cavities. It gives the mathematical description of the algorithms and the assumptions made in each case. The derived formulas have been implemented in the PyORBIT code. The necessary data for each algorithm are described in detail.
Geometry modeling and grid generation using 3D NURBS control volume
NASA Technical Reports Server (NTRS)
Yu, Tzu-Yi; Soni, Bharat K.; Shih, Ming-Hsin
1995-01-01
The algorithms for volume grid generation using NURBS geometric representation are presented. The parameterization algorithm is enhanced to yield a desired physical distribution on the curve, surface and volume. This approach bridges the gap between CAD surface/volume definition and surface/volume grid generation. Computational examples associated with practical configurations have shown the utilization of these algorithms.
Schofield, A.E.
1958-07-22
A multiple spark gap switch of unique construction is described which will permit controlled, simultaneous discharge of several capacitors into a load. The switch construction includes a disc electrode with a plurality of protuberances of generally convex shape on one surface. A firing electrode is insulatingly supported in each of the electrode protuberances and extends substantially to the apex thereof. Individual electrodes are disposed on an insulating plate parallel with the disc electrode to form a number of spark gaps with the protuberances. These electrodes are each connected to a separate charged capacitor, and when a voltage is applied simultaneously between the trigger electrodes and the disc electrode, each spark gap fires to connect its capacitor to the disc electrode and a subsequent load.
A New Adaptive Self-Tuning Fourier Coefficients Algorithm for Periodic Torque Ripple Minimization in Permanent Magnet Synchronous Motors (PMSM)
Gómez-Espinosa, Alfonso; Hernández-Guzmán, Víctor M; Bandala-Sánchez, Manuel; Jiménez-Hernández, Hugo; Rivas-Araiza, Edgar A; Rodríguez-Reséndiz, Juvenal; Herrera-Ruíz, Gilberto
2013-03-19
Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air gap and the variable magnetic reluctance of the air gap caused by the stator slot distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for smoother operation. This algorithm adjusts the controller parameters based on the harmonic distortion, in the time domain, of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple. PMID:23519345
Quantifying uncertainty in read-across assessment – an algorithmic approach - (SOT)
Read-across is a popular data gap filling technique within category and analogue approaches for regulatory purposes. Acceptance of read-across remains an ongoing challenge with several efforts underway for identifying and addressing uncertainties. Here we demonstrate an algorithm...
Elmetwaly, Shereef; Schlick, Tamar
2014-01-01
Graph representations have been widely used to analyze and design various economic, social, military, political, and biological networks. In systems biology, networks of cells and organs are useful for understanding disease and medical treatments and, in structural biology, structures of molecules can be described, including RNA structures. In our RNA-As-Graphs (RAG) framework, we represent RNA structures as tree graphs by translating unpaired regions into vertices and helices into edges. Here we explore the modularity of RNA structures by applying graph partitioning known in graph theory to divide an RNA graph into subgraphs. To our knowledge, this is the first application of graph partitioning to biology, and the results suggest a systematic approach for modular design in general. The graph partitioning algorithms utilize mathematical properties of the Laplacian eigenvector (µ2) corresponding to the second eigenvalue (λ2) of the topology matrix defining the graph: λ2 describes the overall topology, and the sum of µ2's components is zero. The three types of algorithms, termed median, sign, and gap cuts, divide a graph by determining the nodes of the cut by the median, zero, and largest gap of µ2's components, respectively. We apply these algorithms to 45 graphs corresponding to all solved RNA structures up through 11 vertices (∼220 nucleotides). While we observe that the median cut divides a graph into two similar-sized subgraphs, the sign and gap cuts partition a graph into two topologically distinct subgraphs. We find that the gap cut produces the best biologically-relevant partitioning for RNA because it divides RNAs at less stable connections while maintaining junctions intact. The iterative gap cuts suggest basic modules and assembly protocols to design large RNA structures. Our graph substructuring thus suggests a systematic approach to explore the modularity of biological networks. In our applications to RNA structures, subgraphs also suggest design strategies for novel RNA motifs. PMID:25188578
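The three cuts are easy to state in code. The sketch below (ours, illustrative only, not the RAG pipeline) computes the Laplacian eigenvector for λ2 and partitions a small graph by the median, sign, or largest gap of its components.

import numpy as np

def fiedler_vector(adj):
    # eigenvector of the Laplacian L = D - A for the second-smallest eigenvalue
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigh(lap)[1][:, 1]

def cut(mu2, method="gap"):
    order = np.argsort(mu2)
    if method == "median":
        split = len(mu2) // 2                          # two similar-sized halves
    elif method == "sign":
        split = np.searchsorted(mu2[order], 0.0)       # components sum to zero
    else:                                              # "gap": largest jump in mu2
        split = int(np.argmax(np.diff(mu2[order]))) + 1
    return sorted(order[:split].tolist()), sorted(order[split:].tolist())

adj = np.zeros((6, 6))                                 # two triangles joined by one edge
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
print(cut(fiedler_vector(adj), "gap"))                 # splits the triangles: {0,1,2} vs {3,4,5}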
Tunable graded rod laser assembly
NASA Technical Reports Server (NTRS)
AuYeung, John C. (Inventor)
1985-01-01
A tunable laser assembly including a pair of radially graded indexed optical segments aligned to focus the laser to form an external resonant cavity with an optical axis. The respective optical segments are relatively movable along the optical axis and provide a variable etalon gap sufficient to permit variable tuning of the laser wavelength without altering the effective length of the resonant cavity. The gap also includes a saturable absorbing material providing passive mode-locking of the laser.
Inverted File Compression through Document Identifier Reassignment.
ERIC Educational Resources Information Center
Shieh, Wann-Yun; Chen, Tien-Fu; Shann, Jean Jyh-Jiun; Chung, Chung-Ping
2003-01-01
Discusses the use of inverted files in information retrieval systems and proposes a document identifier reassignment method to reduce the average gap values in an inverted file. Highlights include the d-gap technique; document similarity; heuristic algorithms; file compression; and performance evaluation from a simulation environment. (LRW)
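The d-gap technique itself fits in a few lines: store differences between successive document identifiers in a posting list, so that after a good identifier reassignment the gaps, and hence their variable-length codes, are small. A minimal sketch (ours):

def to_dgaps(postings):
    # [3, 7, 11, 23] -> [3, 4, 4, 12]
    return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

def from_dgaps(dgaps):
    out, total = [], 0
    for g in dgaps:
        total += g
        out.append(total)
    return out

p = [3, 7, 11, 23]
assert from_dgaps(to_dgaps(p)) == p
print(to_dgaps(p))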
MHD Turbulence, div B = 0 and Lattice Boltzmann Simulations
NASA Astrophysics Data System (ADS)
Phillips, Nate; Keating, Brian; Vahala, George; Vahala, Linda
2006-10-01
The question of div B = 0 in MHD simulations is a crucial issue. Here we consider lattice Boltzmann simulations for MHD (LB-MHD). One introduces a scalar distribution function for the velocity field and a vector distribution function for the magnetic field. This asymmetry is due to the different symmetries in the tensors arising in the time evolution of these fields. The simple algorithm of streaming and local collisional relaxation is ideally parallelized and vectorized -- leading to the best sustained performance/PE of any code run on the Earth Simulator. By reformulating the BGK collision term, a simple implicit algorithm can be immediately transformed into an explicit algorithm that permits simulations at quite low viscosity and resistivity. However, div B = 0 is not an imposed constraint. Currently we are examining new formulations of LB-MHD that impose the div B constraint -- either through an entropic-like formulation or by introducing forcing terms into the momentum equations and permitting simpler forms of relaxation distributions.
NASA Technical Reports Server (NTRS)
Savage, M.; Mackulin, M. J.; Coe, H. H.; Coy, J. J.
1991-01-01
Optimization procedures allow one to design a spur gear reduction for maximum life and other end use criteria. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial guess values. The optimization algorithm is described, and the models for gear life and performance are presented. The algorithm is compact and has been programmed for execution on a desk top computer. Two examples are presented to illustrate the method and its application.
NASA Technical Reports Server (NTRS)
Kato, S.; Smith, G. L.; Barker, H. W.
2001-01-01
An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
A traveling-salesman-based approach to aircraft scheduling in the terminal area
NASA Technical Reports Server (NTRS)
Luenberger, Robert A.
1988-01-01
An efficient algorithm is presented, based on the well-known algorithm for the traveling salesman problem, for scheduling aircraft arrivals into major terminal areas. The algorithm permits, but strictly limits, reassigning an aircraft from its initial position in the landing order. This limitation is needed so that no aircraft or aircraft category is unduly penalized. Results indicate, for the mix of arrivals investigated, a potential increase in capacity in the 3 to 5 percent range. Furthermore, it is shown that the computation time for the algorithm grows only linearly with problem size.
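The position-shift limit is easy to demonstrate with a brute-force toy (ours; the paper uses a far more efficient traveling-salesman-style recursion, and the separation values here are made up): only landing orders in which no aircraft moves more than max_shift positions from its FCFS slot are considered.

from itertools import permutations

SEP = {("heavy", "heavy"): 90, ("heavy", "large"): 120,
       ("large", "heavy"): 60, ("large", "large"): 60}   # illustrative seconds

def makespan(order, types):
    return sum(SEP[(types[a], types[b])] for a, b in zip(order, order[1:]))

def best_order(types, max_shift=1):
    best = None
    for perm in permutations(range(len(types))):
        # aircraft ac started in FCFS slot ac; limit its displacement
        if all(abs(pos - ac) <= max_shift for pos, ac in enumerate(perm)):
            cost = makespan(perm, types)
            if best is None or cost < best[0]:
                best = (cost, perm)
    return best

print(best_order(["heavy", "large", "heavy", "large"]))   # mild reordering shortens groups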
Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation
NASA Astrophysics Data System (ADS)
Quiroz, Gregory
Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.
Bhaskar, Anand; Javanmard, Adel; Courtade, Thomas A; Tse, David
2017-03-15
Genetic variation in human populations is influenced by geographic ancestry due to spatial locality in historical mating and migration patterns. Spatial population structure in genetic datasets has been traditionally analyzed using either model-free algorithms, such as principal components analysis (PCA) and multidimensional scaling, or using explicit spatial probabilistic models of allele frequency evolution. We develop a general probabilistic model and an associated inference algorithm that unify the model-based and data-driven approaches to visualizing and inferring population structure. Our spatial inference algorithm can also be effectively applied to the problem of population stratification in genome-wide association studies (GWAS), where hidden population structure can create fictitious associations when population ancestry is correlated with both the genotype and the trait. Our algorithm Geographic Ancestry Positioning (GAP) relates local genetic distances between samples to their spatial distances, and can be used for visually discerning population structure as well as accurately inferring the spatial origin of individuals on a two-dimensional continuum. On both simulated and several real datasets from diverse human populations, GAP exhibits substantially lower error in reconstructing spatial ancestry coordinates compared to PCA. We also develop an association test that uses the ancestry coordinates inferred by GAP to accurately account for ancestry-induced correlations in GWAS. Based on simulations and analysis of a dataset of 10 metabolic traits measured in a Northern Finland cohort, which is known to exhibit significant population structure, we find that our method has superior power to current approaches. Our software is available at https://github.com/anand-bhaskar/gap . abhaskar@stanford.edu or ajavanma@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Large Advanced Space Systems (LASS) computer-aided design program additions
NASA Technical Reports Server (NTRS)
Farrell, C. E.
1982-01-01
The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that will permit integrating and interfacing of required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, antenna primary beam, and attitude control requirements.
Analysis of Rhythms in Experimental Signals
NASA Astrophysics Data System (ADS)
Desherevskii, A. V.; Zhuravlev, V. I.; Nikolsky, A. N.; Sidorin, A. Ya.
2017-12-01
We compare algorithms designed to extract quasiperiodic components of a signal and estimate the amplitude, phase, stability, and other characteristics of a rhythm in a sliding window in the presence of data gaps. Each algorithm relies on its own rhythm model; therefore, it is necessary to use different algorithms depending on the research objectives. The described set of algorithms and methods is implemented in the WinABD software package, which includes a time-series database management system, a powerful research complex, and an interactive data-visualization environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinclair, Karin; DeGeorge, Elise
The Bald and Golden Eagle Protection Act (BGEPA) prohibits the 'take' of these birds. The act defines take as to 'pursue, shoot, shoot at, poison, wound, kill, capture, trap, collect, destroy, molest or disturb.' The 2009 Eagle Permit Rule (74 FR 46836) authorizes the U.S. Fish and Wildlife Service (USFWS) to issue nonpurposeful (i.e., incidental) take permits, and the USFWS 2013 Eagle Conservation Plan Guidance provides a voluntary framework for issuing programmatic take permits to wind facilities that incorporate scientifically supportable advanced conservation practices (ACPs). Under these rules, the Service can issue permits that authorize individual instances of take of bald and golden eagles when the take is associated with, but not the purpose of, an otherwise lawful activity, and cannot practicably be avoided. To date, the USFWS has not approved any ACPs, citing the lack of evidence for 'scientifically supportable measures.' The Eagle Detection and Deterrents Research Gaps and Solutions Workshop was convened at the National Renewable Energy Laboratory in December 2015 with a goal to comprehensively assess the current state of technologies to detect and deter eagles from wind energy sites and the key gaps concerning reducing eagle fatalities and facilitating permitting under the BGEPA. During the workshop, presentations and discussions focused primarily on existing knowledge (and limitations) about the biology of eagles as well as technologies and emerging or novel ideas, including innovative applications of tools developed for use in other sectors, such as the U.S. Department of Defense and aviation. The main activity of the workshop was the breakout sessions, which focused on the current state of detection and deterrent technologies and novel concepts/applications for detecting and minimizing eagle collisions with wind turbines. Following the breakout sessions, participants were asked about their individual impressions of the relative priority of each of the existing and novel ideas.
Widesott, Lamberto; Lorentini, Stefano; Fracchiolla, Francesco; Farace, Paolo; Schwarz, Marco
2018-05-04
Purpose: Validation of a commercial Monte Carlo (MC) algorithm (RayStation ver6.0.024) for the treatment of brain tumours with pencil beam scanning (PBS) proton therapy, comparing it via measurements and analytical calculations in clinically realistic scenarios. Methods: For the measurements, a 2D ion chamber array detector (MatriXX PT) was placed underneath the following targets: 1) an anthropomorphic head phantom (with two different thicknesses) and 2) a biological sample (i.e. half a lamb's head). In addition, we compared the MC dose engine against the RayStation pencil beam (PB) algorithm clinically implemented so far, in critical conditions such as superficial targets (i.e. in need of a range shifter), different air gaps and gantry angles to simulate both orthogonal and tangential beam arrangements. For every plan, the PB and MC dose calculations were compared to measurements using a gamma analysis metric (3%, 3 mm). Results: Regarding the head phantom, the gamma passing rate (GPR) was always >96% and on average >99% for the MC algorithm; the PB algorithm had a GPR ≤90% for all the delivery configurations with a single slab (apart from a 95% GPR at gantry 0° with a small air gap), and in the case of two slabs of the head phantom the GPR was >95% only for small air gaps for all three simulated beam gantry angles (0°, 45° and 70°). Overall, the PB algorithm tends to overestimate the dose to the target (up to 25%) and underestimate the dose to the organs at risk (up to 30%). We found similar results (but somewhat worse for the PB algorithm) for the two targets of the lamb's head, where only two beam gantry angles were simulated. Conclusions: Our results suggest that in PBS proton therapy the range shifter (RS) needs to be used with extreme caution when planning the treatment with an analytical algorithm, due to potentially large discrepancies between the planned dose and the dose delivered to the patients, also in the case of brain tumours, where this issue could be underestimated. Our results also suggest that an MC evaluation of the dose has to be performed every time the RS is used and, especially, when it is used with large air gaps and beam directions tangential to the patient surface. © 2018 Institute of Physics and Engineering in Medicine.
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
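The deterministic idea can be miniaturized as follows (our sketch with an illustrative sinusoidal weighting, not the paper's exact gap equation): walk across the grid taking the expected gap at each step, with the gap growing across the schedule instead of being drawn as a Poisson random deviate.

import math

def deterministic_gap_schedule(grid_size, n_samples):
    scale = 2.0 * (grid_size / n_samples - 1.0)   # rough fit of n_samples into grid_size
    points, pos = [], 0.0
    while pos < grid_size and len(points) < n_samples:
        points.append(int(pos))
        frac = len(points) / n_samples            # position within the schedule
        pos += 1.0 + scale * math.sin(0.5 * math.pi * frac)
    return points

print(deterministic_gap_schedule(64, 16))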
Hull, J R
1989-01-01
Coupling a dielectric compound parabolic concentrator (DCPC) to an absorber across a vacuum gap by means of frustrated total internal reflection (FTIR) can theoretically approach the maximum concentration permitted by physical laws, thus allowing higher radiative fluxes in thermal applications. The calculated optical performance of 2-D DCPCs with FTIR absorbers indicates that the ratio of gap thickness to optical wavelength must be <0.22 before the optical performance of the DCPC is superior to that of the nondielectric CPC.
Costs and revenues associated with overweight trucks in Indiana.
DOT National Transportation Integrated Search
2012-11-01
This study estimated highway pavement and bridge damage costs, and analyzed the adequacy of permit revenues to cover these : costs. The study began with an extensive review of the literature on the subject, thus facilitating identification of the gap...
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
The objective of this project was to develop a general CFD framework for multifidelity simulations to target multiscale problems but also resilience. Keywords: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm. Publications: 1. Karniadakis, "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD", Fluid Dynamic Research, vol. 47, 051402, 2015. 2. Y. Yu, H...
A polynomial primal-dual Dikin-type algorithm for linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jansen, B.; Roos, R.; Terlaky, T.
1994-12-31
We present a new primal-dual affine scaling method for linear programming. The search direction is obtained by using Dikin's original idea: minimize the objective function (which is the duality gap in a primal-dual algorithm) over a suitable ellipsoid. The search direction has no obvious relationship with the directions proposed in the literature so far. It guarantees a significant decrease in the duality gap in each iteration, and at the same time drives the iterates to the central path. The method admits a polynomial complexity bound that is better than the one for Monteiro et al.'s original primal-dual affine scaling method.
Pattern identification in time-course gene expression data with the CoGAPS matrix factorization.
Fertig, Elana J; Stein-O'Brien, Genevieve; Jaffe, Andrew; Colantuoni, Carlo
2014-01-01
Patterns in time-course gene expression data can represent the biological processes that are active over the measured time period. However, the orthogonality constraint in standard pattern-finding algorithms, including notably principal components analysis (PCA), confounds expression changes resulting from simultaneous, non-orthogonal biological processes. Previously, we have shown that Markov chain Monte Carlo nonnegative matrix factorization algorithms are particularly adept at distinguishing such concurrent patterns. One such matrix factorization is implemented in the software package CoGAPS. We describe the application of this software and several technical considerations for identification of age-related patterns in a public, prefrontal cortex gene expression dataset.
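For orientation, a bare-bones nonnegative matrix factorization by Lee-Seung multiplicative updates is sketched below (ours); CoGAPS itself infers the amplitude and pattern matrices with a Markov chain Monte Carlo sampler rather than this update rule.

import numpy as np

def nmf(X, k, iters=500, seed=0, eps=1e-9):
    rng = np.random.default_rng(seed)
    A = rng.random((X.shape[0], k))     # amplitudes (genes x patterns)
    P = rng.random((k, X.shape[1]))     # patterns (patterns x time points)
    for _ in range(iters):
        A *= (X @ P.T) / (A @ P @ P.T + eps)
        P *= (A.T @ X) / (A.T @ A @ P + eps)
    return A, P

t = np.linspace(0, 1, 30)
X = np.outer([1.0, 2.0, 0.5], np.sin(np.pi * t)) + np.outer([0.2, 1.0, 2.0], t)
A, P = nmf(X, k=2)
print(round(float(np.linalg.norm(X - A @ P)), 4))   # small residual for this rank-2 input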
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
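The quantity being maximized is just the gap in the eigenvalue spectrum of a transition matrix built along a candidate CV. A tiny illustration (ours; the matrix here is hand-made, whereas SGOOP estimates it from maximum-caliber arguments and stationary densities):

import numpy as np

def spectral_gap(T):
    # eigenvalues of a row-stochastic matrix, sorted by magnitude; the leading one is 1
    vals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return float(vals[1] - vals[2])

T = np.array([[0.98, 0.02, 0.00],      # two metastable states joined by a weak link
              [0.02, 0.96, 0.02],
              [0.00, 0.02, 0.98]])
print(round(spectral_gap(T), 4))       # gap between the slow mode and the faster ones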
Wang, LiQiang; Li, CuiFeng
2014-10-01
A genetic algorithm (GA) coupled with multiple linear regression (MLR) was used to extract useful features from amino acids and g-gap dipeptides for distinguishing between thermophilic and non-thermophilic proteins. The method was trained on a benchmark dataset of 915 thermophilic and 793 non-thermophilic proteins. The method reached an overall accuracy of 95.4% in a jackknife test using nine amino acids, 38 0-gap dipeptides and 29 1-gap dipeptides. The accuracy as a function of protein size ranged between 85.8 and 96.9%. The overall accuracies of three independent tests were 93, 93.4 and 91.8%. The observed results of detecting thermophilic proteins suggest that the GA-MLR approach described herein should be a powerful method for selecting features that describe thermostable machines and should be an aid in the design of more stable proteins.
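A g-gap dipeptide is simply an ordered residue pair separated by exactly g positions (g = 0 gives ordinary dipeptides), so the features reduce to normalized pair counts. A minimal sketch (ours):

from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"

def g_gap_dipeptide_freqs(seq, g):
    # frequency of each ordered residue pair separated by exactly g positions
    pairs = [seq[i] + seq[i + g + 1] for i in range(len(seq) - g - 1)]
    return {a + b: pairs.count(a + b) / len(pairs) for a, b in product(AA, repeat=2)}

f = g_gap_dipeptide_freqs("MKVLAAGLLALA", g=1)
print({k: round(v, 3) for k, v in f.items() if v > 0})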
High speed corner and gap-seal computations using an LU-SGS scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.
1989-01-01
The hybrid Lower-Upper Symmetric Gauss-Seidel (LU-SGS) algorithm was added to a widely used series of 2D/3D Euler/Navier-Stokes solvers and was demonstrated for a particular class of high-speed flows. A limited study was conducted to compare the hybrid LU-SGS for approximate Newton iteration and diagonalized Beam-Warming (DBW) schemes on a work and convergence history basis. The hybrid LU-SGS algorithm is more efficient and easier to implement than the DBW scheme originally present in the code for the cases considered. The code was validated for the hypersonic flow through two mutually perpendicular flat plates and then used to investigate the flow field in and around a simplified scramjet module gap seal configuration. Due to the similarities, the gap seal flow was compared to hypersonic corner flow at the same freestream conditions and Reynolds number.
Steganography on quantum pixel images using Shannon entropy
NASA Astrophysics Data System (ADS)
Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.
2016-07-01
This paper presents a steganographic algorithm based on the least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit pixel image to a quantum pixel image, which permits information to be communicated secretly via quantum pixel images for secure transmission through insecure channels. This algorithm offers higher security since it exploits the Shannon entropy of an image.
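The classical half of the scheme is ordinary LSB embedding, sketched below on 8-bit pixel values (ours; the paper's contribution lies in mapping this onto quantum pixel images and in the entropy-based treatment, both omitted here).

def embed(pixels, bits):
    # overwrite the least significant bit of each leading pixel with a message bit
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n):
    return [p & 1 for p in pixels[:n]]

cover = [200, 13, 255, 96, 42, 7]
msg = [1, 0, 1, 1]
stego = embed(cover, msg)
assert extract(stego, len(msg)) == msg
print(stego)   # [201, 12, 255, 97, 42, 7]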
Spatial Aspects of Multi-Sensor Data Fusion: Aerosol Optical Thickness
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory; Zubko, V.; Gopalan, A.
2007-01-01
The Goddard Earth Sciences Data and Information Services Center (GES DISC) investigated the applicability and limitations of combining multi-sensor data through data fusion, to increase the usefulness of the multitude of NASA remote sensing data sets, and as part of a larger effort to integrate this capability in the GES-DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni). This initial study focused on merging daily mean Aerosol Optical Thickness (AOT), as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, to increase spatial coverage and produce complete fields to facilitate comparison with models and station data. The fusion algorithm used the maximum likelihood technique to merge the pixel values where available. The algorithm was applied to two regional AOT subsets (with mostly regular and irregular gaps, respectively) and a set of AOT fields that differed only in the size and location of artificially created gaps. The Cumulative Semivariogram (CSV) was found to be sensitive to the spatial distribution of gap areas and, thus, useful for assessing the sensitivity of the fused data to spatial gaps.
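For two collocated retrievals with independent Gaussian errors, the maximum-likelihood merge reduces to an inverse-variance weighted mean, with fall-through where one instrument has no retrieval. A small sketch (ours, with made-up variances):

import numpy as np

def fuse(terra, aqua, var_terra, var_aqua):
    w1, w2 = 1.0 / var_terra, 1.0 / var_aqua
    fused = (w1 * terra + w2 * aqua) / (w1 + w2)      # ML estimate where both exist
    fused = np.where(np.isnan(terra), aqua, fused)    # Terra missing: take Aqua
    fused = np.where(np.isnan(aqua) & ~np.isnan(terra), terra, fused)
    return fused

terra = np.array([0.21, np.nan, 0.35])
aqua = np.array([0.25, 0.30, np.nan])
print(fuse(terra, aqua, 0.02, 0.03))   # [0.226 0.3 0.35]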
Eberle, Annika; Bhatt, Arpit; Zhang, Yimin; Heath, Garvin
2017-06-06
Advanced biofuel production facilities (biorefineries), such as those envisioned by the United States (U.S.) Renewable Fuel Standard and U.S. Department of Energy's research and development programs, often lack historical air pollutant emissions data, which can pose challenges for obtaining air emission permits that are required for construction and operation. To help fill this knowledge gap, we perform a thorough regulatory analysis and use engineering process designs to assess the applicability of federal air regulations and quantify air pollutant emissions for two feasibility-level biorefinery designs. We find that without additional emission-control technologies both biorefineries would likely be required to obtain major source permits under the Clean Air Act's New Source Review program. The permitting classification (so-called "major" or "minor") has implications for the time and effort required for permitting and therefore affects the cost of capital and the fuel selling price. Consequently, we explore additional technically feasible emission-control technologies and process modifications that have the potential to reduce emissions to achieve a minor source permitting classification. Our analysis of air pollutant emissions and controls can assist biorefinery developers with the air permitting process and inform regulatory agencies about potential permitting pathways for novel biorefinery designs.
Coding and decoding for code division multiple user communication systems
NASA Technical Reports Server (NTRS)
Healy, T. J.
1985-01-01
A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
Supervised learning of probability distributions by neural networks
NASA Technical Reports Server (NTRS)
Baum, Eric B.; Wilczek, Frank
1988-01-01
Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
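The modification amounts to swapping the squared-error delta for the log-likelihood (cross-entropy) delta at a sigmoid output, which cancels the y(1 - y) factor. A minimal single-layer sketch (ours):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, t, lr=0.5, epochs=2000, seed=0):
    w = np.random.default_rng(seed).normal(0.0, 0.1, X.shape[1])
    for _ in range(epochs):
        y = sigmoid(X @ w)
        w -= lr * X.T @ (y - t) / len(t)   # gradient of -log likelihood: (y - t) x
    return w

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
t = np.array([0.0, 1.0, 1.0, 1.0])         # learn OR; last column is a bias
print(np.round(sigmoid(X @ train(X, t))))  # [0. 1. 1. 1.]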
Gap formation by inclined massive planets in locally isothermal three-dimensional discs
NASA Astrophysics Data System (ADS)
Chametla, Raúl O.; Sánchez-Salcedo, F. J.; Masset, F. S.; Hidalgo-Gámez, A. M.
2017-07-01
We study gap formation in gaseous protoplanetary discs by a Jupiter-mass planet. The planet's orbit is circular and inclined relative to the mid-plane of the disc. We use the impulse approximation to estimate the gravitational tidal torque between the planet and the disc, and infer the gap profile. For low-mass discs, we provide a criterion for gap opening when the orbital inclination is ≤30°. Using the fargo3d code, we simulate the disc response to an inclined massive planet. The dependence of the depth and width of the gap obtained in the simulations on the inclination of the planet is broadly consistent with the scaling laws derived in the impulse approximation. Although we mainly focus on planets kept on fixed orbits, the formalism permits inferring the temporal evolution of the gap profile in cases where the inclination of the planet changes with time. This study may be useful for understanding the migration of massive planets on inclined orbits, because the strength of the interaction with the disc depends on whether a gap is opened or not.
Informationally Efficient Multi-User Communication
2010-01-01
DSM algorithms, the Optimal Spectrum Balancing (OSB) algorithm and the Iterative Spectrum Balancing (ISB) algorithm, were proposed to solve the...problem of maximization of a weighted rate-sum across all users [CYM06, YL06]. OSB has an exponential complexity in the number of users. ISB only has a...the duality gap min_(λ1,λ2) D(λ1, λ2) − max_(P1,P2) f(P1, P2) is not zero. Fig. 3.3 summarizes the three key steps of a dual method, the OSB algorithm
Bridging the semantic gap in sports
NASA Astrophysics Data System (ADS)
Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim
2003-01-01
One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.
Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data
NASA Technical Reports Server (NTRS)
Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan
1997-01-01
A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyewale, S; Pokharel, S; Rana, S
Purpose: To compare the percentage depth dose (PDD) computational accuracy of the Adaptive Convolution (AC) and Collapsed Cone Convolution (CCC) algorithms in the presence of air gaps. Methods: A 30×30×30 cm³ solid water phantom with two 5 cm air gaps was scanned with a CT simulator unit and exported into the Philips Pinnacle™ treatment planning system. PDDs were computed using the AC and CCC algorithms. A photon energy of 6 MV was used with field sizes of 3×3 cm², 5×5 cm², 10×10 cm², 15×15 cm², and 20×20 cm². Ionization chamber readings were taken at different depths in water for all the field sizes. The percentage differences in the PDDs were computed with normalization to the depth of maximum dose (dmax). The calculated PDDs were then compared with measured PDDs. Results: In the first buildup region, both algorithms overpredicted the dose for all field sizes and underpredicted for all other subsequent buildup regions. After dmax in the three water media, AC underpredicted the dose for field sizes 3×3 and 5×5 cm² and overpredicted for larger field sizes, whereas CCC underpredicted for all field sizes. Upon traversing the first air gap, AC showed maximum differences of −3.9%, −1.4%, 2.0%, 2.5%, 2.9% and CCC had maximum differences of −3.9%, −3.0%, −3.1%, −2.7%, −1.8% for field sizes 3×3, 5×5, 10×10, 15×15, and 20×20 cm², respectively. Conclusion: The effect of air gaps causes a significant difference in the PDDs computed by both the AC and CCC algorithms in secondary build-up regions. AC computed larger values for the PDDs except at smaller field sizes. For CCC, the size of the errors in prediction of the PDDs has an inverse relationship with respect to field size. These effects should be considered in treatment planning where significant air gaps are encountered.
On the reliable and flexible solution of practical subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.
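For a sense of how column pivoting drives subset selection, here is a hedged sketch using SciPy's pivoted QR. The paper describes a new pivoting strategy, so SciPy's default pivot rule stands in for it here, and `select_subset` is a hypothetical helper:

```python
import numpy as np
from scipy.linalg import qr, lstsq

def select_subset(A, y, k):
    """Pick k regressors via QR with column pivoting. The pivot order
    ranks columns by how much new (orthogonal) energy each contributes,
    so the first k pivots give a well-conditioned subset of the original,
    physically meaningful parameters."""
    Q, R, piv = qr(A, pivoting=True)
    subset = piv[:k]                    # indices of chosen regressors
    coef, *_ = lstsq(A[:, subset], y)   # refit on the chosen columns
    return subset, coef

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 8))
y = A[:, [2, 5]] @ np.array([1.5, -2.0]) + 0.01 * rng.normal(size=50)
print(select_subset(A, y, 2))           # recovers columns 2 and 5
```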
Infrared fiber coupled acousto-optic tunable filter spectrometer
NASA Technical Reports Server (NTRS)
Levin, K. H.; Kindler, E.; Ko, T.; Lee, F.; Tran, D. C.; Tapphorn, R. M.
1990-01-01
A spectrometer design is introduced which combines an acousto-optic tunable filter (AOTF) and IR-transmitting fluoride-glass fibers. The AOTF crystal is fabricated from TeO2 and permits random access to any wavelength in less than 50 microseconds, and the resulting spectrometer is tested for the remote analysis of gases and hydrocarbons. The AOTF spectrometer, when operated with a high-speed frequency synthesizer and optimized algorithms, permits accurate high-speed spectroscopy in the mid-IR spectral region.
Desbiens, Raphaël; Tremblay, Pierre; Genest, Jérôme; Bouchard, Jean-Pierre
2006-01-20
The instrument line shape (ILS) of a Fourier-transform spectrometer is expressed in a matrix form. For all line shape effects that scale with wavenumber, the ILS matrix is shown to be transposed in the spectral and interferogram domains. The novel representation of the ILS matrix in the interferogram domain yields an insightful physical interpretation of the underlying process producing self-apodization. Working in the interferogram domain circumvents the problem of taking into account the effects of finite optical path difference and permits a proper discretization of the equations. A fast algorithm in O(N log2 N), based on the fractional Fourier transform, is introduced that permits the application of a constant resolving power line shape to theoretical spectra or forward models. The ILS integration formalism is validated with experimental data.
EULER-PCR: finishing experiments for repeat resolution.
Mulyukov, Zufar; Pevzner, Pavel A
2002-01-01
Genomic sequencing typically generates a large collection of unordered contigs or scaffolds. Contig ordering (also known as gap closure) is a non-trivial algorithmic and experimental problem, since even relatively simple-to-assemble bacterial genomes typically result in a large set of contigs. Neighboring contigs may be separated either by gaps in read coverage or by repeats. In the latter case we say that the contigs are separated by pseudogaps, and we emphasize the important difference between gap closure and pseudogap closure. The existing gap closure approaches do not distinguish between gaps and pseudogaps and treat them in the same way. We describe a new fast strategy for closing pseudogaps (repeat resolution). Since in highly repetitive genomes the number of pseudogaps may exceed the number of gaps by an order of magnitude, this approach provides a significant advantage over existing gap closure methods.
NASA Astrophysics Data System (ADS)
Ning, Po; Feng, Zhi-Qiang; Quintero, Juan Antonio Rojas; Zhou, Yang-Jing; Peng, Lei
2018-03-01
This paper deals with elastic and elastic-plastic fretting problems. The wear gap is taken into account along with the initial contact distance to obtain the Signorini conditions. Both the Signorini conditions and the Coulomb friction laws are written in a compact form. Within the bipotential framework, an augmented Lagrangian method is applied to calculate the contact forces. The Archard wear law is then used to calculate the wear gap at the contact surface. The local fretting problems are solved via the Uzawa algorithm. Numerical examples are performed to show the efficiency and accuracy of the proposed approach. The influence of plasticity has been discussed.
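The Uzawa iteration named in the abstract alternates a primal solve with a gradient step on the multipliers. A minimal sketch for a generic equality-constrained quadratic program, not the fretting-contact problem itself; `uzawa` and the toy data are illustrative:

```python
import numpy as np

def uzawa(A, b, B, c, rho=1.0, iters=200):
    """Generic Uzawa iteration for the saddle-point problem
        min_x 1/2 x^T A x - b^T x   subject to  B x = c,
    alternating a primal solve with a gradient-ascent step on the
    multipliers (the analogue of the contact forces)."""
    lam = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, b - B.T @ lam)  # primal solve
        lam = lam + rho * (B @ x - c)          # dual (multiplier) update
    return x, lam

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
B = np.array([[1.0, 1.0]])   # constraint x1 + x2 = 1
c = np.array([1.0])
x, lam = uzawa(A, b, B, c)
print(x, lam)
```

The step size rho must stay below 2 divided by the largest eigenvalue of B A⁻¹ Bᵀ for convergence; the toy values above satisfy this.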
gkmSVM: an R package for gapped-kmer SVM
Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A.
2016-01-01
Summary: We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that speeds run time by about 2 to 5-fold over our original gkmSVM algorithm. This package supports several sequence kernels, including: gkmSVM, kmer-SVM, mismatch kernel and wildcard kernel. Availability and Implementation: The gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN), for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm Contact: mghandi@gmail.com or mbeer@jhu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153639
Choi, Hee Joo; Ribelayga, Christophe P; Mangel, Stuart C
2012-01-12
In addition to chemical synaptic transmission, neurons that are connected by gap junctions can also communicate rapidly via electrical synaptic transmission. Increasing evidence indicates that gap junctions not only permit electrical current flow and synchronous activity between interconnected or coupled cells, but that the strength or effectiveness of electrical communication between coupled cells can be modulated to a great extent(1,2). In addition, the large internal diameter (~1.2 nm) of many gap junction channels permits not only electric current flow, but also the diffusion of intracellular signaling molecules and small metabolites between interconnected cells, so that gap junctions may also mediate metabolic and chemical communication. The strength of gap junctional communication between neurons and its modulation by neurotransmitters and other factors can be studied by simultaneously electrically recording from coupled cells and by determining the extent of diffusion of tracer molecules, which are gap junction permeable, but not membrane permeable, following iontophoretic injection into single cells. However, these procedures can be extremely difficult to perform on neurons with small somata in intact neural tissue. Numerous studies on electrical synapses and the modulation of electrical communication have been conducted in the vertebrate retina, since each of the five retinal neuron types is electrically connected by gap junctions(3,4). Increasing evidence has shown that the circadian (24-hour) clock in the retina and changes in light stimulation regulate gap junction coupling(3-8). For example, recent work has demonstrated that the retinal circadian clock decreases gap junction coupling between rod and cone photoreceptor cells during the day by increasing dopamine D2 receptor activation, and dramatically increases rod-cone coupling at night by reducing D2 receptor activation(7,8). However, not only are these studies extremely difficult to perform on neurons with small somata in intact neural retinal tissue, but it can be difficult to adequately control the illumination conditions during the electrophysiological study of single retinal neurons to avoid light-induced changes in gap junction conductance. Here, we present a straightforward method of determining the extent of gap junction tracer coupling between retinal neurons under different illumination conditions and at different times of the day and night. This cut-loading technique is a modification of scrape loading(9-12), which is based on dye loading and diffusion through open gap junction channels. Scrape loading works well in cultured cells, but not in thick slices such as intact retinas. The cut-loading technique has been used to study photoreceptor coupling in intact fish and mammalian retinas(7, 8,13), and can be used to study coupling between other retinal neurons, as described here.
Orżanowski, Tomasz
2016-01-01
This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm which is easy to implement in hardware. The proposed NUC algorithm is based on the linear correction scheme, with a useful method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction consists in using the change in pixel response, determined at the actual operating conditions relative to the reference conditions by means of a shutter, to compensate the temporal drift of the pixel offsets. Moreover, it also permits the removal of any optics-shading effect from the output image. To show the efficiency of the proposed NUC algorithm, test results for a microbolometer IRFPA are presented.
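The offset-update idea can be illustrated in a few lines: apply a linear (gain/offset) correction, and fold the shutter-measured drift back into the offset table. A hedged toy sketch; the paper's exact update rule is not reproduced and all names are illustrative:

```python
import numpy as np

def nuc_correct(raw, gain, offset):
    """Linear (gain/offset) nonuniformity correction."""
    return gain * raw + offset

def update_offset(offset, shutter_now, shutter_ref):
    """Fold the drift of each pixel's response to the uniform shutter
    scene (relative to the reference calibration) into the offset table."""
    return offset - (shutter_now - shutter_ref)

rng = np.random.default_rng(0)
gain = np.ones((4, 4))                                   # assumed unity gains
offset = rng.normal(0.0, 0.1, (4, 4))                    # reference offset table
shutter_ref = np.full((4, 4), 100.0)                     # shutter frame at calibration
shutter_now = shutter_ref + rng.normal(0, 0.5, (4, 4))   # drifted response
offset = update_offset(offset, shutter_now, shutter_ref)
frame = nuc_correct(rng.normal(100.0, 5.0, (4, 4)), gain, offset)
```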
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.
The NOAA-NASA CZCS Reanalysis Effort
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Conkright, Margarita E.; O'Reilly, John E.; Patt, Frederick S.; Wang, Meng-Hua; Yoder, James; Casey-McCabe, Nancy; Koblinsky, Chester J. (Technical Monitor)
2001-01-01
Satellite observations of global ocean chlorophyll span over two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the NOAA-NASA CZCS Reanalysis (NCR) Effort. NCR consisted of 1) algorithm improvement (AI), where CZCS processing algorithms were improved using modernized atmospheric correction and bio-optical algorithms, and 2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. The results indicated major improvement over the previously available CZCS archive. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.
NASA Technical Reports Server (NTRS)
Schultz, Howard
1990-01-01
The retrieval algorithm for spaceborne scatterometry proposed by Schultz (1985) is extended. A circular median filter (CMF) method is presented, which operates on wind directions independently of wind speed, removing any implicit wind speed dependence. A cell weighting scheme is included in the algorithm, permitting greater weights to be assigned to more reliable data. The mathematical properties of the ambiguous solutions to the wind retrieval problem are reviewed. The CMF algorithm is tested on twelve simulated data sets. The effects of spatially correlated likelihood assignment errors on the performance of the CMF algorithm are examined. Also, consideration is given to a wind field smoothing technique that uses a CMF.
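A circular median must respect the wrap-around of angles. One common definition, used in this hedged sketch (the operational CMF may differ in detail), picks the sample direction minimizing the summed angular distance to its neighbors:

```python
import numpy as np

def circ_diff(a, b):
    """Smallest absolute angular difference, in radians."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

def circular_median(angles):
    """Median on the circle: the sample minimizing the summed angular
    distance to all other samples."""
    costs = [np.sum(circ_diff(a, angles)) for a in angles]
    return angles[int(np.argmin(costs))]

def cmf_pass(field):
    """One pass of a 3x3 circular median filter over a wind-direction
    field (radians); speed is deliberately ignored, as in the abstract."""
    out = field.copy()
    for i in range(1, field.shape[0] - 1):
        for j in range(1, field.shape[1] - 1):
            out[i, j] = circular_median(field[i-1:i+2, j-1:j+2].ravel())
    return out
```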
Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization
Chiu, Chung-Cheng; Ting, Chih-Chung
2016-01-01
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of detail in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
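As context, plain histogram equalization maps gray levels through the normalized cumulative histogram; CegaHE's contribution is to adjust the gaps between adjacent output levels afterwards. A hedged sketch of the baseline step only:

```python
import numpy as np

def equalize(img):
    """Plain histogram equalization for an 8-bit grayscale image (a
    uint8 array). CegaHE additionally adjusts the gaps between adjacent
    output gray levels; that paper-specific step is only indicated."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    lut = np.round(255 * cdf).astype(np.uint8)
    # CegaHE would now shrink/stretch the gaps lut[g+1] - lut[g]
    # according to its perception-based adjustment equation.
    return lut[img]
```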
Ahir, Bhavesh K; Pratten, Margaret K
2014-01-01
Intercellular (cell-to-cell) communication is a crucial and complex mechanism during embryonic heart development. In the cardiovascular system, the beating of the heart is a dynamic and key regulatory process, which is functionally regulated by the coordinated spread of electrical activity through heart muscle cells. Heart tissues are composed of individual cells, each bearing specialized cell surface membrane structures called gap junctions that permit the intercellular exchange of ions and low molecular weight molecules. Gap junction channels are essential to normal heart function, and they assist in the mediated spread of electrical impulses that stimulate synchronized contraction (via an electrical syncytium) of cardiac tissues. This review describes the current knowledge of gap junction biology. In the first part, we summarise some relevant biochemical and physiological properties of gap junction proteins, including their structure and function. In the second part, we review the current evidence demonstrating the role of gap junction proteins in embryonic development, with particular reference to those involved in embryonic heart development. Genetic and transgenic animal studies of gap junction protein function in embryonic heart development are considered, and the alteration/disruption of gap junction intercellular communication that may lead to abnormal heart development is also discussed.
Bruehl, Stephen; Apkarian, A. Vania; Ballantyne, Jane C.; Berger, Ann; Borsook, David; Chen, Wen G.; Farrar, John T.; Haythornthwaite, Jennifer A.; Horn, Susan D.; Iadarola, Michael J.; Inturrisi, Charles E.; Lao, Lixing; Mackey, Sean; Mao, Jianren; Sawczuk, Andrea; Uhl, George R.; Witter, James; Woolf, Clifford J.; Zubieta, Jon-Kar; Lin, Yu
2013-01-01
Use of opioid analgesics for pain management has increased dramatically over the past decade, with corresponding increases in negative sequelae including overdose and death. There is currently no well-validated objective means of accurately identifying patients likely to experience good analgesia with low side effects and abuse risk prior to initiating opioid therapy. This paper discusses the concept of data-based personalized prescribing of opioid analgesics as a means to achieve this goal. Strengths, weaknesses, and potential synergism of traditional randomized placebo-controlled trial (RCT) and practice-based evidence (PBE) methodologies, as means to acquire the clinical data necessary to develop validated personalized analgesic prescribing algorithms, are reviewed. Several predictive factors that might be incorporated into such algorithms are briefly discussed, including genetic factors, differences in brain structure and function, differences in neurotransmitter pathways, and patient phenotypic variables such as negative affect, sex, and pain sensitivity. Currently available research is insufficient to inform development of quantitative analgesic prescribing algorithms. However, responder subtype analyses made practical by the large numbers of chronic pain patients in proposed collaborative PBE pain registries, in conjunction with follow-up validation RCTs, may eventually permit development of clinically useful analgesic prescribing algorithms. Perspective: Current research is insufficient to base opioid analgesic prescribing on patient characteristics. Collaborative PBE studies in large, diverse pain patient samples, in conjunction with follow-up RCTs, may permit development of quantitative analgesic prescribing algorithms that could optimize opioid analgesic effectiveness and mitigate risks of opioid-related abuse and mortality. PMID:23374939
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2016-10-01
We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations, since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (down to a single pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits the background estimation to be robust to changes in scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken to avoid computationally costly procedures, in order to ensure real-time performance even on low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of the background and object characteristics. In addition, the detection map was produced frame by frame in real time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video surveillance and computer vision. Its reliability and speed permit its use even in critical situations, such as search and rescue, defence and disaster monitoring.
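The principle, estimating the background per pixel, rejecting it, and avoiding learning from pixels currently flagged as objects, can be caricatured with an exponentially weighted mean/variance model. This is a hedged toy sketch, not the published algorithm:

```python
import numpy as np

def detect(frames, alpha=0.05, k=5.0):
    """Toy per-pixel background model for dim-target detection in a
    steady-camera IR sequence: exponentially weighted mean and variance,
    detection by thresholding the absolute residual. Pixels flagged as
    detections are excluded from the update so moving objects do not
    bias the background, echoing the idea in the abstract."""
    bg = frames[0].astype(float)
    var = np.full(bg.shape, 1.0)
    for f in frames[1:]:
        resid = f.astype(float) - bg
        mask = np.abs(resid) > k * np.sqrt(var)    # detection map
        upd = ~mask                                # learn from background only
        bg[upd] += alpha * resid[upd]
        var[upd] += alpha * (resid[upd] ** 2 - var[upd])
        yield mask
```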
Simulation of an enhanced TCAS 2 system in operation
NASA Technical Reports Server (NTRS)
Rojas, R. G.; Law, P.; Burnside, W. D.
1987-01-01
Described is a computer simulation of a Boeing 737 aircraft equipped with an enhanced Traffic and Collision Avoidance System (TCAS II). In particular, an algorithm is developed which permits the computer simulation of the tracking of a target airplane by a Boeing 737 which has a TCAS II array mounted on top of its fuselage. This algorithm has four main components: namely, the target path, the noise source, the alpha-beta filter, and threat detection. The implementation of each of these four components is described. Furthermore, the areas where the present algorithm needs to be improved are also mentioned.
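The alpha-beta filter component is a fixed-gain tracker: predict the target state forward, then blend in the new measurement. A minimal sketch, where the gains and the 1-D setup are illustrative assumptions:

```python
def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.005):
    """Classic alpha-beta filter of the kind named in the abstract:
    predict position and velocity, then correct both with fixed gains
    applied to the innovation (measurement minus prediction)."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict
        r = z - x_pred               # innovation
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        estimates.append((x, v))
    return estimates
```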
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
NASA Technical Reports Server (NTRS)
Shao, Howard M.; Reed, Irving S.
1988-01-01
A new very large scale integration (VLSI) design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous article is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area.
NASA Technical Reports Server (NTRS)
Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.
1977-01-01
A procedure is presented for designing air-gapped energy-storage reactors for nine different dc-to-dc converters resulting from combinations of three single-winding power stages for voltage stepup, current stepup and voltage stepup/current stepup and three controllers with control laws that impose constant-frequency, constant transistor on-time and constant transistor off-time operation. The analysis, based on the energy-transfer requirement of the reactor, leads to a simple relationship for the required minimum volume of the air gap. Determination of this minimum air gap volume then permits the selection of either an air gap or a cross-sectional core area. Having picked one parameter, the minimum value of the other immediately leads to selection of the physical magnetic structure. Other analytically derived equations are used to obtain values for the required turns, the inductance, and the maximum rms winding current. The design procedure is applicable to a wide range of magnetic material characteristics and physical configurations for the air-gapped magnetic structure.
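The chain of relationships, in which the energy requirement fixes the minimum gap volume and choosing either the gap length or the core area fixes the rest, can be followed numerically with the textbook gapped-core formulas; these stand in here for the paper's analytically derived equations, and all numerical values are assumed:

```python
# Hedged worked example of the air-gap sizing idea. If the gap stores
# nearly all the energy, W = B^2 * V_gap / (2*mu0), so the minimum gap
# volume is V_gap >= 2*mu0*W / Bmax^2.
MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, H/m

W = 1e-3        # required stored energy, J (assumed)
Bmax = 0.3      # allowed peak flux density, T (assumed)
V_gap = 2 * MU0 * W / Bmax**2
print(f"minimum gap volume: {V_gap * 1e9:.1f} mm^3")

# Picking a core area A then fixes the gap length, turns and inductance.
A = 1e-4                          # core cross-section, m^2 (assumed)
l_g = V_gap / A                   # gap length, m
I_pk = 2.0                        # peak reactor current, A (assumed)
N = Bmax * l_g / (MU0 * I_pk)     # turns, from B = mu0*N*I/l_g
L = MU0 * N**2 * A / l_g          # inductance, gap-dominated reluctance
print(f"N = {N:.1f} turns, L = {L * 1e3:.2f} mH")  # check: W = L*I_pk^2/2
```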
Three-dimensional unstructured grid generation via incremental insertion and local optimization
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Wiltberger, N. Lyn; Gandhi, Amar S.
1992-01-01
Algorithms for the generation of 3D unstructured surface and volume grids are discussed. These algorithms are based on incremental insertion and local optimization. The present algorithms are very general and permit local grid optimization based on various measures of grid quality. This is very important; unlike the 2D Delaunay triangulation, the 3D Delaunay triangulation appears not to have a lexicographic characterization of angularity. (The Delaunay triangulation is known to minimize the maximum containment sphere, but unfortunately this is not true lexicographically.) Consequently, Delaunay triangulations in three-space can result in poorly shaped tetrahedral elements. Using the present algorithms, 3D meshes can be constructed which optimize a certain angle measure, albeit locally. We also discuss the combinatorial aspects of the algorithm as well as implementational details.
Delaunay based algorithm for finding polygonal voids in planar point sets
NASA Astrophysics Data System (ADS)
Alonso, R.; Ojeda, J.; Hitschfeld, N.; Hervías, C.; Campusano, L. E.
2018-01-01
This paper presents a new algorithm to find under-dense regions, called voids, inside a 2D point set. The algorithm starts from terminal-edges (locally longest edges) in a Delaunay triangulation and builds the largest possible low-density terminal-edge regions around them. A terminal-edge region can represent either an entire void or part of a void (subvoid). Using artificial data sets, the case of voids that are detected as several adjacent subvoids is analyzed, and four subvoid-joining criteria are proposed and evaluated. Since this work is inspired by the search for a more robust, effective and efficient algorithm to find 3D cosmological voids, the evaluation of the joining criteria considers this context. However, the design of the algorithm permits its adaptation to the requirements of any similar application.
Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen
2003-09-01
We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock.
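For a single-pole Debye medium the auxiliary equation is a first-order ODE for the polarization, so each time step is a one-line recursion. The sketch below integrates it exactly for piecewise-constant fields, a recursive-convolution variant; the paper's auxiliary-differential-equation discretization differs in detail:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_polarization(E, dt, tau, deps):
    """March the single-pole Debye auxiliary ODE
        dP/dt = (EPS0*deps*E - P) / tau
    along the field samples E, using the exact update for E held
    constant over each step."""
    a = np.exp(-dt / tau)
    out = np.empty_like(E, dtype=float)
    p = 0.0
    for n, e in enumerate(E):
        p = a * p + EPS0 * deps * (1.0 - a) * e   # one-step recursion
        out[n] = p
    return out

t = np.arange(0.0, 5e-3, 1e-5)
E = np.where(t > 1e-3, 1.0, 0.0)                  # step excitation, V/m
P = debye_polarization(E, dt=1e-5, tau=5e-4, deps=50.0)
```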
Abort Gap Cleaning for LHC Run 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uythoven, Jan; Boccardi, Andrea; Bravin, Enrico
2014-07-01
To minimize the beam losses at the moment of an LHC beam dump, the 3 μs long abort gap should contain as few particles as possible. Its population can be minimised by abort gap cleaning using the LHC transverse damper system. The LHC Run 1 experience is briefly recalled; changes foreseen for LHC Run 2 are presented. They include improvements in the observation of the abort gap population and in the mechanism for deciding whether cleaning is required, changes to the hardware of the transverse dampers to reduce their detrimental effect on the luminosity lifetime, and proposed changes to the applied cleaning algorithms.
Comparison of Gap Elements and Contact Algorithm for 3D Contact Analysis of Spiral Bevel Gears
NASA Technical Reports Server (NTRS)
Bibel, G. D.; Tiku, K.; Kumar, A.; Handschuh, R.
1994-01-01
Three dimensional stress analysis of spiral bevel gears in mesh using the finite element method is presented. A finite element model is generated by solving equations that identify tooth surface coordinates. Contact is simulated by the automatic generation of nonpenetration constraints. This method is compared to a finite element contact analysis conducted with gap elements.
Photonic band gap in isotropic hyperuniform disordered solids with low dielectric contrast.
Man, Weining; Florescu, Marian; Matsuyama, Kazue; Yadak, Polin; Nahal, Geev; Hashemizad, Seyed; Williamson, Eric; Steinhardt, Paul; Torquato, Salvatore; Chaikin, Paul
2013-08-26
We report the first experimental demonstration of a TE-polarization photonic band gap (PBG) in a 2D isotropic hyperuniform disordered solid (HUDS) made of dielectric media with a dielectric index contrast of 1.6:1, very low for PBG formation. The solid is composed of a connected network of dielectric walls enclosing air-filled cells. Direct comparison with photonic crystals and quasicrystals permitted us to investigate band-gap properties as a function of increasing rotational isotropy. We present results from numerical simulations proving that the PBG observed experimentally for HUDS at low index contrast has zero density of states. The PBG is associated with the energy difference between complementary resonant modes above and below the gap, with the field predominantly concentrated in the air or in the dielectric. The intrinsic isotropy of HUDS may offer unprecedented flexibility and freedom in applications (i.e., defect architecture design) not limited by crystalline symmetries.
Numerical simulation of supersonic gap flow.
Jing, Xu; Haiming, Huang; Guo, Huang; Song, Mo
2015-01-01
Various gaps in the surfaces of supersonic aircraft have a significant effect on the airflow. In order to predict the effects of attack angle, Mach number and gap width-to-depth ratio on the local aerodynamic heating environment of supersonic flow, the two-dimensional compressible Navier-Stokes equations are solved by the finite volume method, where the convective fluxes are evaluated with the Roe scheme and the time discretization uses a five-step Runge-Kutta algorithm. The numerical results reveal that the heat flux ratio has a U-shaped distribution on the gap wall and is maximal at the windward corner of the gap. The heat flux ratio decreases as the gap depth and Mach number increase; however, it increases as the attack angle increases. In addition, a chamfer at the windward corner can effectively reduce the gap effect coefficient. The study will be helpful for the design of the thermal protection system of reentry vehicles.
Adiabatic Quantum Search in Open Systems.
Wild, Dominik S; Gopalakrishnan, Sarang; Knap, Michael; Yao, Norman Y; Lukin, Mikhail D
2016-10-07
Adiabatic quantum algorithms represent a promising approach to universal quantum computation. In isolated systems, a key limitation to such algorithms is the presence of avoided level crossings, where gaps become extremely small. In open quantum systems, the fundamental robustness of adiabatic algorithms remains unresolved. Here, we study the dynamics near an avoided level crossing associated with the adiabatic quantum search algorithm, when the system is coupled to a generic environment. At zero temperature, we find that the algorithm remains scalable provided the noise spectral density of the environment decays sufficiently fast at low frequencies. By contrast, higher order scattering processes render the algorithm inefficient at any finite temperature regardless of the spectral density, implying that no quantum speedup can be achieved. Extensions and implications for other adiabatic quantum algorithms will be discussed.
Spatio-temporal regulation of connexin43 phosphorylation and gap junction dynamics.
Solan, Joell L; Lampe, Paul D
2018-01-01
Gap junctions are specialized membrane domains containing tens to thousands of intercellular channels. These channels permit the exchange of small molecules (<1000 Da) including ions, amino acids, nucleotides, metabolites and secondary messengers (e.g., calcium, glucose, cAMP, cGMP, IP3) between cells. The common reductionist view of these structures is that they are composed entirely of integral membrane proteins encoded by the 21-member connexin human gene family. However, it is clear that the normal physiological function of this structure requires interaction with, and regulation by, a variety of proteins, especially kinases. Phosphorylation is capable of directly modulating connexin channel function, but the most dramatic effects on gap junction activity occur via the organization of the gap junction structures themselves. This is a direct result of the short half-life of the primary gap junction protein, connexin, which requires them to be constantly assembled, remodeled and turned over. The biological consequences of this remodeling are well illustrated during cardiac ischemia, a process wherein gap junctions are disassembled and remodeled, resulting in arrhythmia and ultimately heart failure. This article is part of a Special Issue entitled: Gap Junction Proteins, edited by Jean Claude Herve.
NASA Astrophysics Data System (ADS)
Bouttier, Pierre-Antoine; Brankart, Jean-Michel; Candille, Guillem; Vidard, Arthur; Blayo, Eric; Verron, Jacques; Brasseur, Pierre
2015-04-01
In this project, the response of a variational data assimilation system based on NEMO and its tangent linear and adjoint models is investigated using a 4DVAR algorithm in a North-Atlantic model at eddy-permitting resolution. The assimilated data consist of the Jason-2 and SARAL/AltiKa datasets collected during the 2013-2014 period. The main objective is to explore the robustness of the 4DVAR algorithm in the context of a realistic turbulent oceanic circulation at mid-latitude, constrained by multi-satellite altimetry missions. This work relies on two previous studies. First, a study with similar objectives was performed based on an academic double-gyre turbulent model and synthetic SARAL/AltiKa data, using the same DA experimental framework. Its main goal was to investigate the impact of turbulence on the performance of variational DA methods. The comparison with this previous work will bring to light the methodological and physical issues encountered by variational DA algorithms in a realistic context at a similar, eddy-permitting spatial resolution. We have also demonstrated how a dataset mimicking future SWOT observations improves incremental 4DVAR performance at eddy-permitting resolution. Second, in the context of the OSTST and FP7 SANGOMA projects, an ensemble DA experiment based on the same model and observational datasets has been realized (see the poster by Brasseur et al.). This work offers the opportunity to compare the efficiency, pros and cons of both DA methods in the context of Ka-band altimetric data, at a spatial resolution commonly used today for research and operational applications. In this poster we present the validation plan proposed to evaluate the skill of the variational experiment versus ensemble assimilation experiments covering the same period, using independent observations (e.g., from the Cryosat-2 mission).
The contour-buildup algorithm to calculate the analytical molecular surface.
Totrov, M; Abagyan, R
1996-01-01
A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields a substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches, but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with the protein size. The contour-buildup algorithm is an order of magnitude faster than the original Connolly algorithm.
A Genetic Algorithm Approach to Door Assignment in Breakbulk Terminals
DOT National Transportation Integrated Search
2001-08-23
Commercial vehicle regulation and enforcement is a necessary and important function of state governments. Through regulation, states promote highway safety, ensure that motor carriers have the proper licenses and operating permits, and collect taxes ...
ATR applications of minimax entropy models of texture and shape
NASA Astrophysics Data System (ADS)
Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.
2001-10-01
Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.
Byron, O
1997-01-01
Computer software such as HYDRO, based upon a comprehensive body of theoretical work, permits the hydrodynamic modeling of macromolecules in solution, which are represented to the computer interface as an assembly of spheres. The uniqueness of any satisfactory resultant model is optimized by incorporating into the modeling procedure the maximal possible number of criteria to which the bead model must conform. An algorithm (AtoB, for atoms to beads) that permits the direct construction of bead models from high resolution x-ray crystallographic or nuclear magnetic resonance data has now been formulated and tested. Models so generated then act as informed starting estimates for the subsequent iterative modeling procedure, thereby hastening the convergence to reasonable representations of solution conformation. Successful application of this algorithm to several proteins shows that predictions of hydrodynamic parameters, including those concerning solvation, can be confirmed. PMID:8994627
Robust evaluation of time series classification algorithms for structural health monitoring
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.
2014-03-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.
Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus
2015-01-01
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.
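The waveform-relaxation idea can be shown on two leaky units coupled by a gap-junction current: each sweep integrates every unit across the full communication interval against the partner's waveform from the previous sweep. A hedged toy sketch, not NEST's implementation:

```python
import numpy as np

def waveform_relaxation(T=1.0, dt=1e-3, g=0.5, sweeps=10):
    """Jacobi waveform relaxation for two leaky units coupled by a
    gap-junction current g*(V_other - V_self):
        dV1/dt = -V1 + g*(V2 - V1),  dV2/dt = -V2 + g*(V1 - V2).
    Each sweep re-integrates both units over the whole interval using
    the partner's waveform from the previous sweep, which is what makes
    the scheme compatible with communication at long intervals."""
    n = int(T / dt)
    t = np.arange(n + 1) * dt
    V1 = np.full(n + 1, 1.0)   # initial guesses for the waveforms
    V2 = np.zeros(n + 1)
    for _ in range(sweeps):
        old1, old2 = V1.copy(), V2.copy()
        for k in range(n):     # forward Euler against the frozen partner
            V1[k+1] = V1[k] + dt * (-V1[k] + g * (old2[k] - V1[k]))
            V2[k+1] = V2[k] + dt * (-V2[k] + g * (old1[k] - V2[k]))
    return t, V1, V2
```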
Designing broad phononic band gaps for in-plane modes
NASA Astrophysics Data System (ADS)
Li, Yang Fan; Meng, Fei; Li, Shuo; Jia, Baohua; Zhou, Shiwei; Huang, Xiaodong
2018-03-01
Phononic crystals are artificial materials that can manipulate the propagation of elastic waves, and one essential feature of phononic crystals is the existence of forbidden frequency ranges for traveling waves, called band gaps. In this paper, we propose an easy way to design phononic crystals with large in-plane band gaps. We demonstrate that the gap between two arbitrarily appointed bands of the in-plane mode can be formed by employing a certain number of solid or hollow circular rods embedded in a matrix material. Topology optimization has been applied to find the material distribution within the primitive unit cell that maximizes the band gap width. Our results reveal that the centroids of the optimized rods coincide with the point positions generated by Lloyd's algorithm, which deepens our understanding of the formation mechanism of phononic in-plane band gaps.
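Lloyd's algorithm, against which the optimized rod centroids are compared, alternates Voronoi partitioning with centroid updates. A discretized sketch on the unit square; the paper's setting (e.g. periodicity) may differ:

```python
import numpy as np

def lloyd(seeds, iters=50, res=128):
    """Discretized Lloyd's algorithm on the unit square: assign a dense
    grid of sample points to their nearest seed (a Voronoi partition),
    then move each seed to the centroid of its cell. Fixed points are
    centroidal Voronoi configurations."""
    gx, gy = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    for _ in range(iters):
        d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)          # nearest-seed assignment
        for i in range(len(seeds)):
            cell = pts[owner == i]
            if len(cell):
                seeds[i] = cell.mean(axis=0)   # centroid update
    return seeds

rng = np.random.default_rng(3)
print(lloyd(rng.random((8, 2))))
```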
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
gkmSVM: an R package for gapped-kmer SVM.
Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A
2016-07-15
We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that speeds run time by about 2 to 5-fold over our original gkmSVM algorithm. This package supports several sequence kernels, including: gkmSVM, kmer-SVM, mismatch kernel and wildcard kernel. The gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN), for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm. Contact: mghandi@gmail.com or mbeer@jhu.edu. Supplementary data are available at Bioinformatics online.
Programming languages and compiler design for realistic quantum hardware
NASA Astrophysics Data System (ADS)
Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret
2017-09-01
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
A New Graduation Algorithm for Color Balance of Remote Sensing Image
NASA Astrophysics Data System (ADS)
Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.
2018-05-01
In order to expand the field of view and obtain more data and information in remote sensing research, images often need to be mosaicked together. However, the mosaicked image often shows large color differences and a visible gap line. Building on graduation algorithms based on trigonometric functions, this paper proposes a new Two Quarter-round Curves (TQC) algorithm, and uses a Gaussian filter to address image color noise and the gap line. The experiments use Greenland data compiled from imagery acquired in 1963 by the ARGON KH-5 satellite under the Declassified Intelligence Photography Project (DISP), and imagery of the North Gulf, China, acquired by a Landsat satellite. The experimental results show that the proposed method improves the results in two respects: images with large color differences become more balanced, and the mosaicked image achieves a smoother transition.
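The flavor of trigonometric graduation can be conveyed by a cosine ramp that blends two overlapping images across the seam. This is a generic illustration under assumed geometry (two same-size, pre-registered images), not the TQC construction itself:

```python
import numpy as np

def cosine_blend(left, right, overlap):
    """Seam blending with a trigonometric graduation: inside the last
    `overlap` columns the weight of the left image falls from 1 to 0
    along a cosine ramp, so color moves smoothly into the right image."""
    w = left.shape[1]
    x = np.arange(w, dtype=float)
    start = w - overlap
    wgt = np.ones(w)
    ramp = (x[start:] - start) / overlap            # 0 .. 1 across overlap
    wgt[start:] = 0.5 * (1.0 + np.cos(np.pi * ramp))
    wgt = wgt.reshape(1, w, *([1] * (left.ndim - 2)))  # broadcast over rows/bands
    return wgt * left + (1.0 - wgt) * right
```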
A basic analysis toolkit for biological sequences
Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo
2007-01-01
This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks: local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and to establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they had not previously been implemented in a single and consistent software package, as we do here. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy-to-use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is available under the GNU GPL. PMID:17877802
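Of the alignment primitives listed, the longest common subsequence has the most compact dynamic program, shown here as a hedged illustration; BATS's own implementation and interfaces are not reproduced:

```python
def lcs_length(a, b):
    """Textbook dynamic program for the longest common subsequence:
    L[i][j] = L[i-1][j-1] + 1           if a[i-1] == b[j-1]
            = max(L[i-1][j], L[i][j-1]) otherwise."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

assert lcs_length("ACCGGTA", "ACGGAT") == 5   # common subsequence ACGGT
```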
VHP - An environment for the remote visualization of heuristic processes
NASA Technical Reports Server (NTRS)
Crawford, Stuart L.; Leiner, Barry M.
1991-01-01
A software system called VHP is introduced which permits the visualization of heuristic algorithms on both resident and remote hardware platforms. VHP is based on the DCF tool for interprocess communication and is applicable to remote algorithms that may run on different types of hardware and be written in languages other than that of VHP itself. The VHP system is of particular interest for systems in which the visualization of remote processes is required, such as robotics for telescience applications.
A partial entropic lattice Boltzmann MHD simulation of the Orszag-Tang vortex
NASA Astrophysics Data System (ADS)
Flint, Christopher; Vahala, George
2018-02-01
Karlin has introduced an analytically determined entropic lattice Boltzmann (LB) algorithm for Navier-Stokes turbulence. Here, this is partially extended to an LB model of magnetohydrodynamics, using the vector distribution function approach of Dellar for the magnetic field (which is permitted to have field reversal). The partial entropic algorithm is benchmarked successfully against standard simulations of the Orszag-Tang vortex [Orszag, S.A.; Tang, C.M. J. Fluid Mech. 1979, 90 (1), 129-143].
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)
1992-01-01
High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)
1995-01-01
High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific-coprocessors for solving real-world problems at extremely high data rates.
Ortuño, Francisco M; Valenzuela, Olga; Rojas, Fernando; Pomares, Hector; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio
2013-09-01
Multiple sequence alignments (MSAs) are widely used in bioinformatics to support tasks such as structure prediction, biological function analysis or phylogenetic modeling. However, current tools usually provide partially optimal alignments, as each one is focused on specific biological features. Thus, the same set of sequences can produce different alignments, especially when the sequences are less similar. Consequently, researchers and biologists do not agree about the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores including further biological features. Among them, 3D structures are increasingly being used to evaluate alignments. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluate more distant relationships between sequences. The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, aims to jointly optimize three objectives: STRIKE score, non-gaps percentage and totally conserved columns. It was assessed on the BAliBASE benchmark, showing significant improvement according to the Kruskal-Wallis test (P < 0.01). The algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), whereas its results are not significantly different from 3D-COFFEE (P > 0.05), with the advantage of being able to use fewer structures. Structural information is included within the objective function to evaluate the obtained alignments more accurately. The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip.
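Two of the three objectives are simple column statistics over the alignment; a hedged sketch of how they might be computed for an MSA given as equal-length strings (the STRIKE score needs 3D structural data and is omitted):

    def msa_objectives(alignment):
        """Return (non-gaps percentage, totally conserved columns) for an MSA.

        `alignment` is a list of equal-length aligned sequences with '-' gaps.
        Sketch of two of the three MO-SAStrE objectives.
        """
        rows, cols = len(alignment), len(alignment[0])
        non_gaps = sum(ch != '-' for seq in alignment for ch in seq)
        conserved = sum(
            1 for j in range(cols)
            if len({seq[j] for seq in alignment}) == 1 and alignment[0][j] != '-'
        )
        return 100.0 * non_gaps / (rows * cols), conserved

    print(msa_objectives(["AC-GT", "ACGGT", "AC-GT"]))  # -> (86.66..., 4)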
NOAA-NASA Coastal Zone Color Scanner reanalysis effort.
Gregg, Watson W; Conkright, Margarita E; O'Reilly, John E; Patt, Frederick S; Wang, Menghua H; Yoder, James A; Casey, Nancy W
2002-03-20
Satellite observations of global ocean chlorophyll span more than two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration (NOAA-NASA) CZCS reanalysis (NCR) effort. NCR consisted of (1) algorithm improvement (AI), where CZCS processing algorithms were improved with modernized atmospheric correction and bio-optical algorithms, and (2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.
NASA Technical Reports Server (NTRS)
Fromm, Michael; Pitts, Michael; Alfred, Jerome
2000-01-01
This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSC), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.
Automated method for measuring the extent of selective logging damage with airborne LiDAR data
NASA Astrophysics Data System (ADS)
Melendy, L.; Hagen, S. C.; Sullivan, F. B.; Pearson, T. R. H.; Walker, S. M.; Ellis, P.; Kustiyo; Sambodo, Ari Katmoko; Roswintiarti, O.; Hanson, M. A.; Klassen, A. W.; Palace, M. W.; Braswell, B. H.; Delgado, G. M.
2018-05-01
Selective logging has an impact on the global carbon cycle, as well as on the forest micro-climate and on longer-term changes in erosion, soil and nutrient cycling, and fire susceptibility. Our ability to quantify these impacts depends on methods and tools that accurately identify the extent and features of logging activity. LiDAR-based measurements of these features offer significant promise. Here, we present a set of algorithms for automated detection and mapping of critical features associated with logging - roads/decks, skid trails, and gaps - using commercial airborne LiDAR data as input. The automated algorithm was applied to commercial LiDAR data collected over two logging concessions in Kalimantan, Indonesia in 2014, and its results were compared to measurements of the logging features collected in the field soon after logging was complete. The automated algorithm-mapped road/deck and skid trail features match closely with features measured in the field, with agreement levels ranging from 69% to 99% when adjusting for GPS location error. The algorithm performed most poorly on gaps, which, by their nature, are variable due to the unpredictable impact of tree fall, in contrast to the linear and regular features created directly by mechanical means. Overall, the automated algorithm performs well and offers significant promise as a generalizable tool to efficiently and accurately capture the effects of selective logging, including the potential to distinguish reduced-impact logging from conventional logging.
NASA Astrophysics Data System (ADS)
Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2008-02-01
A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10⁶-atom (1.04×10¹² electronic degrees of freedom) calculation on 131,072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which the large system sizes yield excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches the asymptotic value.
Collins, Karen; Reed, Malcolm; Lifford, Kate; Burton, Maria; Edwards, Adrian; Ring, Alistair; Brain, Katherine; Harder, Helena; Robinson, Thompson; Cheung, Kwok Leung; Morgan, Jenna; Audisio, Riccardo; Ward, Susan; Richards, Paul; Martin, Charlene; Chater, Tim; Pemberton, Kirsty; Nettleship, Anthony; Murray, Christopher; Walters, Stephen; Bortolami, Oscar; Armitage, Fiona; Leonard, Robert; Gath, Jacqui; Revell, Deirdre; Green, Tracy; Wyld, Lynda
2017-07-31
While breast cancer outcomes are improving steadily in younger women due to advances in screening and improved therapies, there has been little change in outcomes among the older age group. Although comorbidity and frailty rates are inevitably higher in older women, which may increase the risks of some breast cancer treatments such as surgery and chemotherapy, many older women are healthy and may benefit from their use. Adjusting treatment regimens appropriately for age/comorbidity/frailty is variable and largely non-evidence based, specifically with regard to rates of surgery for operable oestrogen receptor-positive disease and rates of chemotherapy for high-risk disease. The multicentre, parallel group, pragmatic cluster randomised controlled trial (RCT) (2015-18) reported here, nested within a larger ongoing 'Age Gap Cohort Study' (2012-18, RP-PG-1209-10071), aims to evaluate the effectiveness of a complex intervention of decision support interventions to assist in the treatment decision making for early breast cancer in older women. The interventions include two patient decision aids (primary endocrine therapy vs surgery/antioestrogen therapy and chemotherapy vs no chemotherapy) and a clinical treatment outcomes algorithm for clinicians. National and local ethics committee approval was obtained for all UK participating sites. Results from the trial will be submitted for publication in international peer-reviewed scientific journals. 115550. European Union Drug Regulating Authorities Clinical Trials (EudraCT) number 2015-004220-61; Pre-results. Sponsor's Protocol Code Number Sheffield Teaching Hospitals STH17086. ISRCTN 32447*. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Bera, Debajyoti
2015-06-01
One of the early achievements of quantum computing was demonstrated by Deutsch and Jozsa (Proc R Soc Lond A Math Phys Sci 439(1907):553, 1992) regarding classification of a particular type of Boolean functions. Their solution demonstrated an exponential speedup compared to classical approaches to the same problem; however, until now theirs was the only known quantum algorithm for that specific problem. This paper demonstrates another quantum algorithm for the same problem, with the same exponential advantage compared to classical algorithms. The novelty of this algorithm is the use of quantum amplitude amplification, the technique at the core of another celebrated quantum algorithm developed by Grover (Proceedings of the twenty-eighth annual ACM symposium on theory of computing, ACM Press, New York, 1996). A lower bound for randomized (classical) algorithms is also presented, which establishes a sound gap between the effectiveness of our quantum algorithm and that of any randomized algorithm with similar efficiency.
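To make the setting concrete, a toy state-vector simulation of the original Deutsch-Jozsa test, which decides with one oracle call whether a promised-constant-or-balanced Boolean function is constant; this illustrates the textbook algorithm, not the paper's amplitude-amplification variant.

    import numpy as np

    def deutsch_jozsa(f, n):
        """Return True iff f (promised constant or balanced on n bits) is constant.

        Classically up to 2**(n-1) + 1 queries are needed; the quantum circuit
        uses a single oracle call. The circuit is simulated with a state vector.
        """
        N = 2 ** n
        state = np.full(N, 1.0 / np.sqrt(N))                 # H^n on |0...0>
        state *= np.array([(-1.0) ** f(x) for x in range(N)])  # phase oracle
        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        Hn = np.array([[1.0]])
        for _ in range(n):                                   # build H tensor n
            Hn = np.kron(Hn, H)
        state = Hn @ state
        return bool(np.isclose(abs(state[0]), 1.0))          # P(|0..0>) is 1 or 0

    assert deutsch_jozsa(lambda x: 0, 3)                          # constant
    assert not deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3)  # balanced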
Deriving health utilities from the MacNew Heart Disease Quality of Life Questionnaire.
Chen, Gang; McKie, John; Khan, Munir A; Richardson, Jeff R
2015-10-01
Quality of life is included in the economic evaluation of health services by measuring the preference for health states, i.e. health state utilities. However, most intervention studies include a disease-specific, not a utility, instrument. Consequently, there has been increasing use of statistical mapping algorithms which permit utilities to be estimated from a disease-specific instrument. The present paper provides such algorithms between the MacNew Heart Disease Quality of Life Questionnaire (MacNew) instrument and six multi-attribute utility (MAU) instruments: the Euroqol (EQ-5D), the Short Form 6D (SF-6D), the Health Utilities Index (HUI) 3, the Quality of Wellbeing (QWB), the 15D (15 Dimension) and the Assessment of Quality of Life (AQoL-8D). Heart disease patients and members of the healthy public were recruited from six countries. Non-parametric rank tests were used to compare subgroup utilities and MacNew scores. Mapping algorithms were estimated using three separate statistical techniques and achieved a high degree of precision. Based on the mean absolute error and the intraclass correlation, the preferred mapping is MacNew into SF-6D or 15D. Using the R squared statistic, the preferred mapping is MacNew into AQoL-8D. The algorithms reported in this paper enable MacNew data to be mapped into utilities predicted from any of six instruments. This permits studies which have included the MacNew to be used in cost utility analyses which, in turn, allows the comparison of services with interventions across the health system. © The European Society of Cardiology 2014.
A single chip VLSI Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Hsu, I. S.; Deutsch, L. J.; Reed, I. S.
1986-01-01
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous design is replaced by a time domain algorithm. A new architecture that implements such an algorithm permits efficient pipeline processing with minimum circuitry. A systolic array is also developed to perform erasure corrections in the new design. A modified form of Euclid's algorithm is implemented by a new architecture that maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and a significant reduction in silicon area, therefore making it possible to build a pipeline (31,15) RS decoder on a single VLSI chip.
Toward a computational psycholinguistics of reference production.
van Deemter, Kees; Gatt, Albert; van Gompel, Roger P G; Krahmer, Emiel
2012-04-01
This article introduces the topic "Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference" of the journal Topics in Cognitive Science. We argue that computational and psycholinguistic approaches to reference production can benefit from closer interaction, and that this is likely to result in the construction of algorithms that differ markedly from the ones currently known in the computational literature. We focus particularly on determinism, the feature of existing algorithms that is perhaps most clearly at odds with psycholinguistic results, discussing how future algorithms might include non-determinism, and how new psycholinguistic experiments could inform the development of such algorithms. Copyright © 2012 Cognitive Science Society, Inc.
Matthews, R; Turner, P J; McDonald, N J; Ermolaev, K; Manus, T; Shelby, R A; Steindorf, M
2008-01-01
This paper describes a compact, lightweight and ultra-low power ambulatory wireless EEG system based upon QUASAR's innovative noninvasive bioelectric sensor technologies. The sensors operate through hair without skin preparation or conductive gels. Mechanical isolation built into the harness permits the recording of high quality EEG data during ambulation. Advanced algorithms developed for this system permit real time classification of workload during subject motion. Measurements made using the EEG system during ambulation are presented, including results for real time classification of subject workload.
Knowledge requirements for automated inference of medical textbook markup.
Berrios, D. C.; Kehler, A.; Fagan, L. M.
1999-01-01
Indexing medical text in journals or textbooks requires a tremendous amount of resources. We tested two algorithms for automatically indexing nouns, noun-modifiers, and noun phrases, and inferring selected binary relations between UMLS concepts in a textbook of infectious disease. Sixty-six percent of nouns and noun-modifiers and 81% of noun phrases were correctly matched to UMLS concepts. Semantic relations were identified with 100% specificity and 94% sensitivity. For some medical sub-domains, these algorithms could permit expeditious generation of more complex indexing. PMID:10566445
1978-12-01
Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a₀ + a₁t + a₂t²). The thinning programs are competitive in both execution
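A minimal sketch of the basic thinning method for a nonhomogeneous Poisson process, assuming the intensity on [0, T] is bounded by lam_max; the names and example coefficients are illustrative.

    import math
    import random

    def thinning(intensity, lam_max, T, seed=42):
        """Simulate event times of a Poisson process with rate intensity(t) on [0, T].

        Candidate points are drawn from a homogeneous process of rate lam_max
        and accepted with probability intensity(t) / lam_max ("thinning").
        """
        rng = random.Random(seed)
        t, events = 0.0, []
        while True:
            t += rng.expovariate(lam_max)      # next candidate arrival
            if t > T:
                return events
            if rng.random() < intensity(t) / lam_max:
                events.append(t)               # candidate survives thinning

    # Example with a quadratic log-intensity, as in the report's test form.
    a0, a1, a2 = 0.0, 0.5, -0.05
    rate = lambda t: math.exp(a0 + a1 * t + a2 * t * t)
    print(len(thinning(rate, lam_max=math.exp(1.25), T=10.0)))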
A High-Level Language for Modeling Algorithms and Their Properties
NASA Astrophysics Data System (ADS)
Akhtar, Sabina; Merz, Stephan; Quinson, Martin
Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically-oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code, but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the model checker TLC.
Strong motion seismology in Mexico
NASA Astrophysics Data System (ADS)
Singh, S. K.; Ordaz, M.
1993-02-01
Since 1985, digital accelerographs have been installed along a 500 km segment above the Mexican subduction zone, at some inland sites which form an attenuation line between the Guerrero seismic gap and Mexico City, and in the Valley of Mexico. These networks have recorded a few large earthquakes and many moderate and small earthquakes. Analysis of the data has permitted a significant advance in the understanding of source characteristics, wave propagation and attenuation, and site effects. This, in turn, has permitted reliable estimations of ground motions from future earthquakes. This paper presents a brief summary of some important results which are having a direct bearing on current earthquake engineering practice in Mexico.
Gap junctions in cells of the immune system: structure, regulation and possible functional roles.
Sáez, J C; Brañes, M C; Corvalán, L A; Eugenín, E A; González, H; Martínez, A D; Palisson, F
2000-04-01
Gap junction channels are sites of cytoplasmic communication between contacting cells. In vertebrates, they consist of protein subunits denoted connexins (Cxs) which are encoded by a gene family. According to their Cx composition, gap junction channels show different gating and permeability properties that define which ions and small molecules permeate them. Differences in Cx primary sequences suggest that channels composed of different Cxs are regulated differentially by intracellular pathways under specific physiological conditions. Functional roles of gap junction channels could be defined by the relative importance of permeant substances, resulting in coordination of electrical and/or metabolic cellular responses. Cells of the native and specific immune systems establish transient homo- and heterocellular contacts at various steps of the immune response. Morphological and functional studies reported during the last three decades have revealed that many intercellular contacts between cells in the immune response present gap junctions or "gap junction-like" structures. Partial characterization of the molecular composition of some of these plasma membrane structures and regulatory mechanisms that control them have been published recently. Studies designed to elucidate their physiological roles suggest that they might permit coordination of cellular events which favor the effective and timely response of the immune system.
Decision tree and ensemble learning algorithms with their applications in bioinformatics.
Che, Dongsheng; Liu, Qi; Rasheed, Khaled; Tao, Xiuping
2011-01-01
Machine learning approaches have wide applications in bioinformatics, and the decision tree is one of the successful approaches applied in this field. In this chapter, we briefly review decision trees and related ensemble algorithms and show the successful applications of such approaches in solving biological problems. We hope that by learning the algorithms of decision trees and ensemble classifiers, biologists can get the basic ideas of how machine learning algorithms work, and that by being exposed to the applications of decision trees and ensemble algorithms in bioinformatics, computer scientists can get better ideas of which bioinformatics topics they may work on in their future research. We aim to provide a platform to bridge the gap between biologists and computer scientists.
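As a concrete taste of the chapter's subject, a brief example contrasting a single decision tree with a tree ensemble on a bundled biomedical dataset; scikit-learn and the dataset choice are assumptions of this sketch, not tools used in the chapter.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

    # An ensemble of trees typically generalizes better than any single tree.
    print("single tree:", tree.score(X_te, y_te))
    print("random forest:", forest.score(X_te, y_te))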
Numerical Simulation of Supersonic Gap Flow
Jing, Xu; Haiming, Huang; Guo, Huang; Song, Mo
2015-01-01
Various gaps in the surface of a supersonic aircraft have a significant effect on the airflow. In order to predict the effects of attack angle, Mach number and width-to-depth ratio of a gap on the local aerodynamic heating environment of supersonic flow, the two-dimensional compressible Navier-Stokes equations are solved by the finite volume method, with the convective flux evaluated by the Roe scheme and time discretization achieved by a 5-step Runge-Kutta algorithm. The numerical results reveal that the heat flux ratio has a U-shaped distribution on the gap wall and is maximal at the windward corner of the gap. The heat flux ratio decreases as the gap depth and Mach number increase, but increases as the attack angle increases. In addition, it is important to find that a chamfer at the windward corner can effectively reduce the gap effect coefficient. The study will be helpful for the design of the thermal protection system of reentry vehicles. PMID:25635395
The properties of optimal two-dimensional phononic crystals with different material contrasts
NASA Astrophysics Data System (ADS)
Liu, Zong-Fa; Wu, Bin; He, Cun-Fu
2016-09-01
By modifying the spatial distribution of constituent material phases, phononic crystals (PnCs) can be designed to exhibit band gaps within which sound and vibration cannot propagate. In this paper, a topology optimization method (TOM) based on genetic algorithms (GAs) and the finite element method (FEM) is developed to design two-dimensional (2D) solid PnC structures composed of two contrasting elastic materials. The PnCs have the lowest-order band gap, that is, the third band gap for the coupled mode, the first band gap for the shear mode or the XY 34 Z band gap for the mixed mode. Moreover, the effects of the ratios of contrasting material properties on the optimal layout of unit cells and the corresponding phononic band gaps (PBGs) are investigated. The results indicate that the topology of the optimal PnCs and the corresponding band gaps vary with the material contrast. This law can be used for the rapid design of desired PnC structures.
Walther, Dirk; Bartha, Gábor; Morris, Macdonald
2001-01-01
A pivotal step in electrophoresis sequencing is the conversion of the raw, continuous chromatogram data into the actual sequence of discrete nucleotides, a process referred to as basecalling. We describe a novel algorithm for basecalling implemented in the program LifeTrace. Like Phred, currently the most widely used basecalling software program, LifeTrace takes processed trace data as input. It was designed to be tolerant to variable peak spacing by means of an improved peak-detection algorithm that emphasizes local chromatogram information over global properties. LifeTrace is shown to generate high-quality basecalls and reliable quality scores. It proved particularly effective when applied to MegaBACE capillary sequencing machines. In a benchmark test of 8372 dye-primer MegaBACE chromatograms, LifeTrace generated 17% fewer substitution errors, 16% fewer insertion/deletion errors, and 2.4% more aligned bases to the finished sequence than did Phred. For two sets totaling 6624 dye-terminator chromatograms, the performance improvement was 15% fewer substitution errors, 10% fewer insertion/deletion errors, and 2.1% more aligned bases. The processing time required by LifeTrace is comparable to that of Phred. The predicted quality scores were in line with observed quality scores, permitting direct use for quality clipping and in silico single nucleotide polymorphism (SNP) detection. Furthermore, we introduce a new type of quality score associated with every basecall: the gap-quality. It estimates the probability of a deletion error between the current and the following basecall. This additional quality score improves detection of single basepair deletions when used for locating potential basecalling errors during the alignment. We also describe a new protocol for benchmarking that we believe better discerns basecaller performance differences than methods previously published. PMID:11337481
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerant requirement. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, the redundancy and masking technique are more appropriate than the rollback techniques which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
Robert, Jean-Luc; Erkamp, Ramon; Korukonda, Sanghamithra; Vignon, François; Radulescu, Emil
2015-11-01
In ultrasound imaging, an array of elements is used to image a medium. If part of the array is blocked by an obstacle, or if the array is made from several sub-arrays separated by a gap, grating lobes appear and the image is degraded. The grating lobes are caused by missing spatial frequencies, corresponding to the blocked or non-existing elements. However, in an active imaging system, where elements are used both for transmitting and receiving, the round trip signal is redundant: different pairs of transmit and receive elements carry similar information. It is shown here that, if the gaps are smaller than the active sub-apertures, this redundancy can be used to compensate for the missing signals and recover full resolution. Three algorithms are proposed: one is based on a synthetic aperture method, a second one uses dual-apodization beamforming, and the third one is a radio frequency (RF) data based deconvolution. The algorithms are evaluated on simulated and experimental data sets. An application could be imaging through ribs with a large aperture.
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.
Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity
Louis, S.J.; Raines, G.L.
2003-01-01
We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automata to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two dimensional cellular automata model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks - the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool in calibrating cellular automata for this application. Experience gained during the calibration of this cellular automata suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
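A schematic of GA calibration under stated assumptions: transition-rule parameters are encoded as a real vector in [0, 1], and fitness is the negated squared error between model output and observations; the model callable is a hypothetical stand-in for the cellular automaton.

    import random

    def calibrate(model, observed, n_params, pop=30, gens=50, seed=1):
        """Evolve a parameter vector minimizing model-vs-observation error.

        `model(params)` returns a prediction comparable to `observed`;
        this mimics GA calibration of CA transition-rule parameters.
        """
        rng = random.Random(seed)

        def fitness(p):
            return -sum((a - b) ** 2 for a, b in zip(model(p), observed))

        population = [[rng.uniform(0, 1) for _ in range(n_params)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop // 2]             # truncation selection
            children = []
            while len(children) < pop - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, n_params)         # one-point crossover
                child = a[:cut] + b[cut:]
                i = rng.randrange(n_params)              # point mutation
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
                children.append(child)
            population = parents + children
        return max(population, key=fitness)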
Automated detection of sperm whale sounds as a function of abrupt changes in sound intensity
NASA Astrophysics Data System (ADS)
Walker, Christopher D.; Rayborn, Grayson H.; Brack, Benjamin A.; Kuczaj, Stan A.; Paulos, Robin L.
2003-04-01
An algorithm designed to detect abrupt changes in sound intensity was developed and used to identify and count sperm whale vocalizations and to measure boat noise. The algorithm is a MATLAB routine that counts the number of occurrences for which the change in intensity level exceeds a threshold. The algorithm also permits the setting of a "dead time" interval to prevent the counting of multiple pulses within a single sperm whale click. This algorithm was used to analyze digitally sampled recordings of ambient noise obtained from the Gulf of Mexico using near-bottom-mounted EARS buoys deployed as part of the Littoral Acoustic Demonstration Center experiment. Because the background in these data varied slowly, the result of applying the algorithm was automated detection of sperm whale clicks and creaks, with results that agreed well with those obtained by trained human listeners. [Research supported by ONR.]
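The detection rule is simple enough to sketch; the frame length, threshold, and dead-time values below are placeholders, not the study's settings.

    import numpy as np

    def count_clicks(x, fs, threshold_db=6.0, dead_time=0.1):
        """Count abrupt intensity jumps, ignoring re-triggers within dead_time.

        x: sampled pressure signal, fs: sample rate (Hz). Level changes are
        measured between adjacent short frames; values are illustrative.
        """
        win = max(1, int(0.005 * fs))                  # 5 ms frames
        n = len(x) // win
        frames = x[: n * win].reshape(n, win)
        level = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
        count, last = 0, -np.inf
        for i in range(1, n):
            t = i * win / fs
            if level[i] - level[i - 1] > threshold_db and t - last > dead_time:
                count += 1                             # one click per dead-time
                last = t
        return count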
Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means
NASA Astrophysics Data System (ADS)
Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.
2018-04-01
This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To overcome this gap in knowledge, a promising technique for analysing this type of data became the main subject of this research, namely Genetic Algorithms (GA). For the purpose of this research, GA was combined with the Incremental K-means (IKM) algorithm to cluster binary data streams. In GAIKM, the objective function is based on a few sufficient statistics that may be easily and quickly calculated on binary numbers. The implementation of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared with existing clustering algorithms and with IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, Scalable K-means (SKM) and K-means clustering, and paves the way for future research involving missing data and outliers.
Bi-directional evolutionary optimization for photonic band gap structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Fei; School of Civil Engineering, Central South University, Changsha 410075; Huang, Xiaodong, E-mail: huang.xiaodong@rmit.edu.au
2015-12-01
Toward an efficient and easy-to-implement optimization for photonic band gap structures, this paper extends the bi-directional evolutionary structural optimization (BESO) method to maximizing photonic band gaps. Photonic crystals are assumed to be periodically composed of two dielectric materials with different permittivities. Based on finite element analysis and sensitivity analysis, BESO starts from a simple initial design without any band gap and gradually re-distributes the dielectric materials within the unit cell so that the resulting photonic crystal possesses a maximum band gap between two specified adjacent bands. Numerical examples demonstrate that the proposed optimization algorithm can successfully obtain band gaps from the first to the tenth band for both transverse magnetic and electric polarizations. Some optimized photonic crystals exhibit novel patterns markedly different from traditional designs of photonic crystals.
Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry
Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.
2015-01-01
Objective: Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design: The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results: Automated audiometry produced air-bone gaps that were normally distributed, suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions: Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient's hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469
Nuclear reactor removable radial shielding assembly having a self-bowing feature
Pennell, William E.; Kalinowski, Joseph E.; Waldby, Robert N.; Rylatt, John A.; Swenson, Daniel V.
1978-01-01
A removable radial shielding assembly for use in the periphery of the core of a liquid-metal-cooled fast-breeder reactor, for closing interassembly gaps in the reactor core assembly load plane prior to reactor criticality and power operation to prevent positive reactivity insertion. The assembly has a lower nozzle portion for inserting into the core support and a flexible heat-sensitive bimetallic central spine surrounded by blocks of shielding material. At refueling temperature and below, the spine is relaxed and in a vertical position, so that the tolerances permitted by the interassembly gaps allow removal and replacement of the various reactor core assemblies. During an increase in reactor temperature from refueling to hot standby, the bimetallic spine expands, bowing the assembly toward the core center line, exerting a radially inward gap-closing force on the above-core load plane of the reactor core assembly, closing load-plane interassembly gaps throughout the core prior to startup and preventing positive reactivity insertion.
An algorithm for extraction of periodic signals from sparse, irregularly sampled data
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1994-01-01
Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
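A compact illustration of the 'clean'-style iteration for irregular sampling: least-squares fit a sinusoid at every trial frequency on the actual sample times, keep the strongest component, subtract it, and repeat. The frequency grid and fixed component count are assumptions of this sketch.

    import numpy as np

    def clean_extract(t, y, freqs, n_components=3):
        """Iteratively extract sinusoids from irregularly sampled data y(t).

        On each pass, fit cos/sin amplitudes at every trial frequency, keep
        the frequency with the largest amplitude, and subtract its fit.
        """
        resid, found = y.astype(float).copy(), []
        for _ in range(n_components):
            best = None
            for f in freqs:
                A = np.column_stack([np.cos(2 * np.pi * f * t),
                                     np.sin(2 * np.pi * f * t)])
                coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
                amp = np.hypot(*coef)
                if best is None or amp > best[0]:
                    best = (amp, f, A, coef)
            amp, f, A, coef = best
            found.append((f, amp))
            resid = resid - A @ coef          # remove the matched component
        return found, resid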
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Joseph Z., E-mail: x@anl.gov; Vasserman, Isaac; Strelnikov, Nikita
2016-07-27
A 2.8-meter long horizontal field prototype undulator with a dynamic force compensation mechanism has been developed and tested at the Advanced Photon Source (APS) at Argonne National Laboratory (Argonne). The magnetic tuning of the undulator integrals has been automated and accomplished by applying magnetic shims. A detailed description of the algorithms and performance is reported.
Closing the Certification Gaps in Adaptive Flight Control Software
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
2008-01-01
Over the last five decades, extensive research has been performed to design and develop adaptive control systems for aerospace systems and other applications where the capability to change controller behavior at different operating conditions is highly desirable. Although adaptive flight control has been partially implemented through the use of gain-scheduled control, truly adaptive control systems using learning algorithms and on-line system identification methods have not seen commercial deployment. The reason is that the certification process for adaptive flight control software for use in national air space has not yet been decided. The purpose of this paper is to examine the gaps between the state-of-the-art methodologies used to certify conventional (i.e., non-adaptive) flight control system software and what will likely be needed to satisfy FAA airworthiness requirements. These gaps include the lack of a certification plan or process guide, the need to develop verification and validation tools and methodologies to analyze adaptive controller stability and convergence, as well as the development of metrics to evaluate adaptive controller performance at off-nominal flight conditions. This paper presents the major certification gap areas, a description of the current state of the verification methodologies, and what further research efforts will likely be needed to close the gaps remaining in current certification practices. It is envisioned that closing the gap will require certain advances in simulation methods, comprehensive methods to determine learning algorithm stability and convergence rates, the development of performance metrics for adaptive controllers, the application of formal software assurance methods, the application of on-line software monitoring tools for adaptive controller health assessment, and the development of a certification case for adaptive system safety of flight.
Center for Quantum Algorithms and Complexity
2014-05-12
precisely, it asserts that for any subset L of particles, the entanglement entropy between L and its complement L̄ is bounded by the surface area of L (the area is ... ground states of gapped local Hamiltonians. Roughly, it says that the entanglement in such states is very local, and the entanglement entropy scales ... the theorem states that the entanglement entropy is bounded by exp(X), where X = log(d/?). Hastings's result implies that ground states of gapped 1D
Urbanization in Thailand. An International Urbanization Survey Report to the Ford Foundation.
ERIC Educational Resources Information Center
Romm, Jeff
The primary intentions of this report are to describe urbanization in Thailand to the extent that available information permits, to relate it to development and development planning, and to identify gaps in current knowledge that are likely to become significant in the formulation of future policies and programs. The first section,…
Striped Electrodes for Solid-Electrolyte Cells
NASA Technical Reports Server (NTRS)
Richter, R.
1983-01-01
Striped thick-film platinum electrodes help ensure lower overall cell resistance by permitting free flow of gases in the gaps between stripes. Thick-film stripes are also easier to fabricate than porous thin-film electrodes that cover the entire surface. Possible applications for the improved cells include oxygen production from carbon dioxide, extraction of oxygen from air, small fluidic pumping, sewage treatment, and fuel cells.
NASA Astrophysics Data System (ADS)
Schmidt, T.; Zimoch, D.
2007-01-01
The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing an automated calculation of gap and shift parameters as a function of energy and operation mode. The extension of the linear polarization range from 0 to 180° requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed-gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.
Crozier, G K D; Hajzler, Christopher
2010-06-01
The concept of "market stimulus"--the idea that free markets can play a role in widening access to new technologies--may help support the view that parents should be permitted to purchase germ-line enhancements. However, a critical examination of the topic shows that market stimulus, even if it applies to human genomic interventions, does not provide sufficient reason for deregulating germ-line enhancements because: (1) it could widen the gap between the rich and the poor; (2) even if it does not widen the gap, it might not sufficiently benefit the poor; and (3) it could have harmful effects for future generations.
Rowlands, J A; Hunter, D M; Araj, N
1991-01-01
A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.
Computer-assisted virtual autopsy using surgical navigation techniques.
Ebert, Lars Christian; Ruder, Thomas D; Martinez, Rosa Maria; Flach, Patricia M; Schweitzer, Wolf; Thali, Michael J; Ampanozi, Garyfalia
2015-01-01
OBJECTIVE: Virtual autopsy methods, such as postmortem CT and MRI, are increasingly being used in forensic medicine. Forensic investigators with little to no training in diagnostic radiology and medical laypeople such as state's attorneys often find it difficult to understand the anatomic orientation of axial postmortem CT images. We present a computer-assisted system that permits postmortem CT datasets to be quickly and intuitively resliced in real time at the body to narrow the gap between radiologic imaging and autopsy. Our system is a potentially valuable tool for planning autopsies, showing findings to medical laypeople, and teaching CT anatomy, thus further closing the gap between radiology and forensic pathology.
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-01-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
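The matrix-multiplication check reported above follows the classic checksum idea: append a column-sum row to one factor and a row-sum column to the other, then verify the product's checksums. A small sketch follows, with the injected fault purely illustrative.

    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        """Multiply with row/column checksums and flag radiation-style errors.

        The checksums of the product must equal the sums of its rows and
        columns; a corrupted element breaks both invariants.
        """
        Ac = np.vstack([A, A.sum(axis=0)])                 # checksum row
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # checksum column
        C = Ac @ Br
        C[1, 1] += 0.5  # simulate a bit-flip-like corruption (illustrative)
        row_ok = np.allclose(C[-1, :-1], C[:-1, :-1].sum(axis=0), atol=tol)
        col_ok = np.allclose(C[:-1, -1], C[:-1, :-1].sum(axis=1), atol=tol)
        return C[:-1, :-1], (row_ok and col_ok)

    A, B = np.arange(9.0).reshape(3, 3), np.eye(3)
    result, ok = abft_matmul(A, B)
    print("error detected:", not ok)   # True: the corrupted sums mismatch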
On the VLSI design of a pipeline Reed-Solomon decoder using systolic arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, H.M.; Reed, I.S.
A new VLSI design of a pipeline Reed-Solomon decoder is presented. The transform decoding technique used in a previous paper is replaced by a time domain algorithm through a detailed comparison of their VLSI implementations. A new architecture that implements the time domain algorithm permits efficient pipeline processing with reduced circuitry. Erasure correction capability is also incorporated with little additional complexity. By using a multiplexing technique, a new implementation of Euclid's algorithm maintains the throughput rate with less circuitry. Such improvements result in both enhanced capability and significant reduction in silicon area, therefore making it possible to build a pipeline Reed-Solomon decoder on a single VLSI chip.
A generalized method for multiple robotic manipulator programming applied to vertical-up welding
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Cook, George E.; Andersen, Kristinn; Barnett, Robert Joel; Zein-Sabattou, Saleh
1991-01-01
The application of a weld programming algorithm to vertical-up welding, which is frequently desired for variable polarity plasma arc welding (VPPAW), is described. The basic algorithm performs three tasks simultaneously: control of the robotic mechanism so that proper torch motion is achieved while minimizing the sum-of-squares of joint displacement; control of the torch while the part is maintained in a desirable orientation; and control of the wire feed mechanism location with respect to the moving welding torch. Also presented is a modification of this algorithm which permits it to be used for vertical-up welding. The details of this modification are discussed and simulation examples are provided for illustration and verification.
An Algorithm for Building an Electronic Database.
Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P
2016-01-01
We propose an algorithm on how to create a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to be completed after each case, and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we permitted data collection and transcription to be easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.
Multi-linear model set design based on the nonlinearity measure and H-gap metric.
Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali
2017-05-01
This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, a nonlinearity measure and the H-gap metric are used to provide an effective algorithm for designing a model bank for the system. The proposed model bank is then accompanied by model predictive controllers to design a high-performance advanced process controller. The advantage of this method is the reduction of excessive switching between models and of the computational complexity of the controller bank, which can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as experimental studies on a pH neutralization laboratory apparatus, which confirm the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki
The standard computed-tomography-based method for measuring emphysema uses the percentage of low-attenuation area, called the pixel index (PI). However, the PI method is susceptible to the averaging effect, which causes a discrepancy between what the PI method describes and what radiologists observe. Since visual recognition of the different types of regional radiographic emphysematous tissue in a CT image can be fuzzy, this paper proposes a low-attenuation gap length matrix (LAGLM) based algorithm for classifying regional radiographic lung tissue into four emphysema types, distinguishing, in particular, radiographic patterns that imply obvious or subtle bullous emphysema from those that imply diffuse emphysema or minor destruction of airway walls. A neural network is used for discrimination. The proposed LAGLM method is inspired by, but different from, earlier texture-based methods such as the gray level run length matrix (GLRLM) and the gray level gap length matrix (GLGLM). The proposed algorithm is validated by classifying 105 lung regions randomly selected from 270 images; the lung regions were hand-annotated by radiologists beforehand. The average four-class classification accuracies, in the form proposed algorithm/PI/GLRLM/GLGLM, are 89.00%/82.97%/52.90%/51.36%, respectively. The p-values from correlation analyses between the classification results of the 270 images and pulmonary function test results are generally less than 0.01. The classification results are useful for follow-up studies, especially for monitoring morphological changes with the progression of pulmonary disease.
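The flavor of a gap-length statistic can be conveyed briefly: scan each row of a binary low-attenuation mask and histogram the lengths of runs of low-attenuation pixels. This is a one-dimensional simplification for illustration, not the authors' full LAGLM definition.

    import numpy as np

    def gap_length_histogram(mask, max_len=32):
        """Histogram of horizontal run lengths of True pixels in a binary mask.

        mask: 2D boolean array marking low-attenuation pixels. Runs longer
        than max_len are clipped into the last bin (a simplification).
        """
        hist = np.zeros(max_len + 1, dtype=int)
        for row in mask:
            run = 0
            for v in row:
                if v:
                    run += 1
                elif run:
                    hist[min(run, max_len)] += 1
                    run = 0
            if run:                      # close a run ending at the row edge
                hist[min(run, max_len)] += 1
        return hist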
Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*
Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.
2011-01-01
Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (the Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of Spectral Dictionaries grow quickly with the peptide length, making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length, thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small, thus opening the possibility of using them to speed up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full-length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags, and introduce gapped tags that have advantages over conventional peptide sequence tags in MS/MS database searches. PMID:21444829
Starich, Todd A.; Hall, David H.; Greenstein, David
2014-01-01
In all animals examined, somatic cells of the gonad control multiple biological processes essential for germline development. Gap junction channels, composed of connexins in vertebrates and innexins in invertebrates, permit direct intercellular communication between cells and frequently form between somatic gonadal cells and germ cells. Gap junctions comprise hexameric hemichannels in apposing cells that dock to form channels for the exchange of small molecules. Here we report essential roles for two classes of gap junction channels, composed of five innexin proteins, in supporting the proliferation of germline stem cells and gametogenesis in the nematode Caenorhabditis elegans. Transmission electron microscopy of freeze-fracture replicas and fluorescence microscopy show that gap junctions between somatic cells and germ cells are more extensive than previously appreciated and are found throughout the gonad. One class of gap junctions, composed of INX-8 and INX-9 in the soma and INX-14 and INX-21 in the germ line, is required for the proliferation and differentiation of germline stem cells. Genetic epistasis experiments establish a role for these gap junction channels in germline proliferation independent of the glp-1/Notch pathway. A second class of gap junctions, composed of somatic INX-8 and INX-9 and germline INX-14 and INX-22, is required for the negative regulation of oocyte meiotic maturation. Rescue of gap junction channel formation in the stem cell niche rescues germline proliferation and uncovers a later channel requirement for embryonic viability. This analysis reveals gap junctions as a central organizing feature of many soma–germline interactions in C. elegans. PMID:25195067
NASA Astrophysics Data System (ADS)
Hribernik, Božo
1984-02-01
This paper describes an iterative algorithm for simulating various real magnetic materials in a small induction motor and their influence on the flux distribution in the air gap. Two standard materials, fully- and semi-processed steel strips, were used. The nonlinearity of the magnetization curve, the influence of cutting strains, and magnetic anisotropy are also considered. Together, these influences show that a uniformly rotating, sinusoidal excitation produces a nonuniformly rotating and deformed magnetic field in the air gap of the machine, and that the magnetization current depends on the winding position.
76 FR 23996 - North Pacific Fishery Management Council Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
... uncertainty/total catch accounting; review/approve Halibut Mortality on trawlers Exempted Fishing Permit (EFP... & Wildlife Service Report. 2. Catch Sharing Plan(CSP): Review CSP size limit algorithm. 3. BSAI Crab Draft Stock Assessment Fishery Evaluation report: Review and approve catch specifications for Norton Sound Red...
UAV Control on the Basis of 3D Landmark Bearing-Only Observations.
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-11-27
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks' position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.
Constellation design with geometric and probabilistic shaping
NASA Astrophysics Data System (ADS)
Zhang, Shaoliang; Yaman, Fatih
2018-02-01
A systematic study, including theory, simulation and experiments, is carried out to review the generalized pairwise optimization algorithm for designing optimized constellations. To verify its effectiveness, the algorithm is applied to three test cases: 2-dimensional 8 quadrature amplitude modulation (QAM), 4-dimensional set-partitioning QAM, and probabilistic-shaped (PS) 32QAM. The results suggest that geometric shaping can work together with PS to further bridge the gap toward the Shannon limit.
The Gap Procedure: for the identification of phylogenetic clusters in HIV-1 sequence data.
Vrbik, Irene; Stephens, David A; Roger, Michel; Brenner, Bluma G
2015-11-04
In the context of infectious disease, sequence clustering can be used to provide important insights into the dynamics of transmission. Cluster analysis is usually performed using a phylogenetic approach whereby clusters are assigned on the basis of sufficiently small genetic distances and high bootstrap support (or posterior probabilities). The computational burden involved in this phylogenetic threshold approach is a major drawback, especially when a large number of sequences are being considered. In addition, this method requires a skilled user to specify the appropriate threshold values, which may vary widely depending on the application. This paper presents the Gap Procedure, a distance-based clustering algorithm for the classification of DNA sequences sampled from individuals infected with the human immunodeficiency virus type 1 (HIV-1). Our heuristic algorithm bypasses the need for phylogenetic reconstruction, thereby supporting the quick analysis of large genetic data sets. Moreover, this fully automated procedure relies on data-driven gaps in sorted pairwise distances to infer clusters, so no user-specified threshold values are required. The clustering results obtained by the Gap Procedure on both real and simulated data closely agree with those found using the threshold approach, while requiring only a fraction of the time to complete the analysis. Apart from the dramatic gains in computational time, the Gap Procedure is highly effective in finding distinct groups of genetically similar sequences and obviates the need for subjective user-specified values. The clusters of genetically similar sequences returned by this procedure can be used to detect patterns in HIV-1 transmission and thereby aid in the prevention, treatment and containment of the disease.
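The abstract describes the key idea, sorting pairwise distances and locating a data-driven gap, without giving the exact rule. Below is a hedged Python sketch of one plausible reading: take the largest jump in the sorted distances as a clustering threshold and cut a single-linkage tree there; the authors' actual procedure may differ in detail.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def gap_procedure_clusters(X):
    """Cluster rows of X by thresholding at the largest gap in the sorted
    pairwise distances -- an illustrative reading of the Gap Procedure,
    not the authors' exact rule."""
    d = np.sort(pdist(X))                  # sorted pairwise distances
    jumps = np.diff(d)
    threshold = d[np.argmax(jumps)]        # distance just below the biggest gap
    Z = linkage(X, method="single")
    return fcluster(Z, t=threshold, criterion="distance")

# Two well-separated Gaussian blobs as a toy stand-in for sequence distances
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
print(gap_procedure_clusters(X))
```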
Can trained lay providers perform HIV testing services? A review of national HIV testing policies.
Flynn, David E; Johnson, Cheryl; Sands, Anita; Wong, Vincent; Figueroa, Carmen; Baggaley, Rachel
2017-01-04
Only an estimated 54% of people living with HIV are aware of their status. Despite progress scaling up HIV testing services (HTS), a testing gap remains. Delivery of HTS by lay providers may help close this testing gap, while also increasing uptake and acceptability of HIV testing among key populations and other priority groups. Fifty national HIV testing policies were collated from WHO country intelligence databases, contacts and testing program websites. Data regarding the use of lay providers for HTS were extracted and collated. Our search had no geographical or language restrictions. These data were then compared with reported data from the Global AIDS Response Progress Reporting (GARPR) from July 2015. Forty-two percent of countries permit lay providers to perform HIV testing and 56% permit lay providers to administer pre- and post-test counseling. Comparative analysis with GARPR found that less than half (46%) of reported data from countries were consistent with their corresponding national HIV testing policy. Given the low uptake of lay providers globally and their proven value in increasing HIV testing, countries should consider revising policies to support lay provider testing using rapid diagnostic tests.
Symmetric log-domain diffeomorphic Registration: a demons-based approach.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2008-01-01
Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm well suited to log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
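A core ingredient of working "completely in the log-domain" is exponentiating a stationary velocity field to obtain the deformation. The standard tool for this in the demons family is scaling and squaring; a minimal 2-D sketch is shown below. Grid handling, interpolation order, and boundary mode are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def exp_velocity_field(v, n_squarings=6):
    """Exponentiate a stationary velocity field v (shape (2, H, W)) by
    scaling and squaring: u <- v / 2**n, then compose u with itself n times.
    Minimal sketch; real implementations handle boundaries more carefully."""
    u = v / (2.0 ** n_squarings)             # scale so the field is small
    H, W = v.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)
    for _ in range(n_squarings):             # square: u <- u o u
        coords = grid + u                    # sample locations x + u(x)
        warped = np.stack([
            map_coordinates(u[c], coords, order=1, mode="nearest")
            for c in range(2)
        ])
        u = warped + u                       # u(x + u(x)) + u(x)
    return u                                 # displacement field of exp(v)

v = np.zeros((2, 64, 64)); v[0, 24:40, 24:40] = 2.0   # toy velocity field
phi = exp_velocity_field(v)
print(phi.shape, float(phi[0].max()))
```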
Extensions to Polychain: Nonseparability Testing and Factoring Algorithm.
1985-12-02
Research supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil, and the Office of Naval Research under contract N00014-85-K. Reproduction in whole or in part is permitted for any purpose of the United States Government.
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan
2010-01-01
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463
Differential evolution-simulated annealing for multiple sequence alignment
NASA Astrophysics Data System (ADS)
Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.
2017-10-01
Multiple sequence alignments (MSA) are used in the analysis of molecular evolution and sequence structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA) is applied in optimizing multiple sequence alignments (MSAs) based on structural information, non-gaps percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and SA-like selection scheme of the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem of the hybrid evolutionary algorithm, DESA. Thus, we name the algorithm as DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions particularly for the MSA problem evaluated based on the three objectives.
Demodulation algorithm for optical fiber F-P sensor.
Yang, Huadong; Tong, Xinglin; Cui, Zhang; Deng, Chengwei; Guo, Qian; Hu, Pan
2017-09-10
The demodulation algorithm is very important for improving the measurement accuracy of a sensing system. In this paper, a variable-step-size hill-climbing search method is used for the first time in an optical fiber Fabry-Perot (F-P) sensing demodulation algorithm. Compared with the traditional discrete gap transformation demodulation algorithm, the computation is greatly reduced by changing the step size of each climb, which achieves nano-scale resolution, high measurement accuracy, high demodulation rates, and a large dynamic demodulation range. An optical fiber F-P pressure sensor based on a micro-electro-mechanical system (MEMS) was fabricated to carry out the experiment, and the results show that the resolution of the algorithm can reach the nano-scale level; the sensor's sensitivity is about 2.5 nm/kPa, close to the theoretical value, and the sensor shows good reproducibility.
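The abstract names a variable-step-size hill-climbing search; the generic pattern, keep stepping while the objective improves and halve the step when stuck, is sketched below. The objective function and the cavity-length numbers are illustrative stand-ins, not the authors' actual demodulation objective.

```python
def hill_climb(objective, x0, step0=100.0, min_step=1e-3):
    """Variable-step-size hill climbing: step in the improving direction
    while the objective increases, halve the step when no neighbor improves.
    Generic sketch of the pattern named in the abstract."""
    x, f, step = x0, objective(x0), step0
    while step >= min_step:
        moved = False
        for cand in (x + step, x - step):
            fc = objective(cand)
            if fc > f:
                x, f, moved = cand, fc, True
                break
        if not moved:
            step *= 0.5                      # refine the search scale
    return x, f

# Toy objective with a maximum at cavity length 12345.6 (arbitrary units)
best_x, best_f = hill_climb(lambda L: -(L - 12345.6) ** 2, x0=10000.0)
print(round(best_x, 3))
```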
Atmospheric constituent density profiles from full disk solar occultation experiments
NASA Technical Reports Server (NTRS)
Lumpe, J. D.; Chang, C. S.; Strickland, D. J.
1991-01-01
Mathematical methods are described which permit the derivation of number density profiles of atmospheric constituents from solar occultation measurements. The algorithm is first applied to measurements corresponding to an arbitrary solar-intensity distribution to calculate the normalized absorption profile. Applying the Fourier transform to the integral equation yields a precise expression for the corresponding number density, and the solution is evaluated with the data expanded in Laguerre polynomials. The algorithm is then used to calculate results for the case of a uniform distribution of solar intensity, and the results demonstrate the convergence properties of the method. The algorithm can effectively reproduce representative model density profiles with constant and altitude-dependent scale heights.
A Taylor weak-statement algorithm for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Baker, A. J.; Kim, J. W.
1987-01-01
Finite element analysis, applied to computational fluid dynamics (CFD) problem classes, presents a formal procedure for establishing the ingredients of a discrete approximation numerical solution algorithm. A classical Galerkin weak-statement formulation, formed on a Taylor series extension of the conservation law system, is developed herein that embeds a set of parameters eligible for constraint according to specification of suitable norms. The derived family of Taylor weak statements is shown to contain, as special cases, over one dozen independently derived CFD algorithms published over the past several decades for the high speed flow problem class. A theoretical analysis is completed that facilitates direct qualitative comparisons. Numerical results for definitive linear and nonlinear test problems permit direct quantitative performance comparisons.
Meteorological correction of optical beam refraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukin, V.P.; Melamud, A.E.; Mironov, V.L.
1986-02-01
At the present time laser reference systems (LRS's) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRS's constantly increase, so that a study of error sources and means of considering and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm obtained is compared to regression equations obtained by processing an experimental data base. It is shown that within admissible accuracy limits the refraction correction algorithm obtained permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.
Algorithm for measuring the internal quantum efficiency of individual injection lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommers, H.S. Jr.
1978-05-01
A new algorithm permits determination of the internal quantum efficiency η_i of individual lasers. Above threshold, the current is partitioned into a ''coherent'' component driving the lasing modes and the ''noncoherent'' remainder. Below threshold the current is known to grow as exp(qV/n_0 kT); the algorithm proposes that extrapolation of this equation into the lasing region measures the noncoherent remainder, enabling deduction of the coherent component and of its current derivative η_i. Measurements on five (AlGa)As double-heterojunction lasers cut from one wafer demonstrate the power of the new method. Comparison with band calculations of Stern shows that n_0 originates in carrier degeneracy.
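A numerical sketch of the partitioning described above: fit the below-threshold diode law, extrapolate it into the lasing region as the noncoherent component, and differentiate the coherent remainder with respect to total current. The synthetic data and all parameter values are illustrative assumptions.

```python
import numpy as np

Q_OVER_KT = 1.0 / 0.02585                        # 1/V at room temperature

def internal_quantum_efficiency(V, I, below):
    """Partition the laser current following the extrapolation idea in the
    abstract: fit the below-threshold diode law I = I_s*exp(qV/(n0*kT)),
    extrapolate it above threshold as the noncoherent component, and
    differentiate the coherent remainder with respect to total current."""
    slope, ln_Is = np.polyfit(V[below], np.log(I[below]), 1)
    n0 = Q_OVER_KT / slope                       # exponent parameter n_0
    I_noncoh = np.exp(ln_Is + slope * V)         # extrapolated diode current
    I_coh = I - I_noncoh                         # current feeding lasing modes
    eta_i = np.gradient(I_coh, I)                # d(I_coh)/dI
    return n0, eta_i

# Synthetic sweep (illustrative numbers): diode law with n0 = 2 plus a
# linear lasing component above a 1.42 V threshold
V = np.linspace(1.30, 1.50, 200)
I = 1e-14 * np.exp(Q_OVER_KT * V / 2.0) + np.where(V > 1.42, 5.0 * (V - 1.42), 0.0)
n0, eta_i = internal_quantum_efficiency(V, I, below=V < 1.40)
print(round(n0, 2), round(float(eta_i[-1]), 2))
```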
NASA Astrophysics Data System (ADS)
Nurhuda, Maryam; Aziz Majidi, Muhammad
2018-04-01
Excitons in semiconducting materials are of interest for potential applications. Experimental results show that excitonic signals also appear in the optical absorption spectra of narrow-gap semiconductor systems such as gallium arsenide (GaAs). On the theoretical side, calculation of optical spectra based purely on density functional theory (DFT), without taking electron-hole (e-h) interactions into account, does not produce any excitonic signal. Existing DFT-based algorithms that include a full vertex correction through the Bethe-Salpeter equation may reveal an excitonic signal, but they do not provide a way to analyze the signal further. Motivated to isolate the excitonic effect in the optical response theoretically, we develop a method for calculating the optical conductivity of the narrow-band-gap semiconductor GaAs within the 8-band k.p model, including electron-hole interactions through a first-order electron-hole vertex correction. Our calculation confirms that the first-order e-h vertex correction reveals an excitonic signal around 1.5 eV (the band gap edge), consistent with the experimental data.
Climatology of convective showers dynamics in a convection-permitting model
NASA Astrophysics Data System (ADS)
Brisson, Erwan; Brendel, Christoph; Ahrens, Bodo
2017-04-01
Convection-permitting simulations have proven their usefulness in improving both the representation of convective rain and the uncertainty range of climate projections. However, most studies have focused on temporal scales greater than or equal to the convection cell lifetime. A large knowledge gap remains concerning the models' performance in representing the temporal dynamics of convective showers and how these dynamics might be altered in a warmer climate. In this study, we propose to fill this gap by analyzing 5-minute convection-permitting model (CPM) outputs. In total, more than 1200 one-day cases are simulated at a resolution of 0.01° using the regional climate model COSMO-CLM over central Europe. The analysis follows a Lagrangian approach and consists of tracking showers characterized by five-minute intensities greater than 20 mm/hour. The different features of these showers (e.g., temporal evolution, horizontal speed, lifetime) are investigated. These features, as modeled by an ERA-Interim forced simulation, are evaluated against a radar dataset for the period 2004-2010. The model shows good performance in representing most features observed in the radar dataset. In addition, the observed relation between the temporal evolution of precipitation and temperature is well reproduced by the CPM. In a second modeling experiment, the impact of climate change on convective cell features is analyzed based on an EC-Earth RCP8.5 forced simulation for the period 2071-2100. First results show only minor changes in the temporal structure and size of showers. The increase in convective precipitation found in previous studies seems to be mainly due to an increase in the number of convective cells.
Baldewijns, Greet; Debard, Glen; Mertes, Gert; Vanrumste, Bart; Croonenborghs, Tom
2016-03-01
Fall incidents are an important health hazard for older adults. Automatic fall detection systems can reduce the consequences of a fall incident by ensuring that timely aid is given. The development of these systems is therefore receiving a lot of research attention. However, real-life data that could help evaluate the results of this research are sparse, and research groups that have this type of data are not at liberty to share it. Most research groups thus use simulated datasets. These simulation datasets, however, often do not incorporate the challenges a fall detection system will face when implemented in real life. In this Letter, a more realistic simulation dataset is presented to fill this gap between real-life data and currently available datasets. It was recorded while re-enacting real-life falls recorded during previous studies, and it incorporates the challenges faced by fall detection algorithms in real life. A fall detection algorithm from Debard et al. was evaluated on this dataset. This evaluation showed that the dataset poses extra challenges compared with other publicly available datasets. In this Letter, the dataset is discussed, as well as the results of this preliminary evaluation of the fall detection algorithm. The dataset can be downloaded from www.kuleuven.be/advise/datasets.
Suram, Santosh K.; Xue, Yexiang; Bai, Junwen; ...
2016-11-21
Rapid construction of phase diagrams is a central tenet of combinatorial materials science, with accelerated materials discovery efforts often hampered by challenges in interpreting combinatorial X-ray diffraction data sets, which we address by developing AgileFD, an artificial intelligence algorithm that enables rapid phase mapping from a combinatorial library of X-ray diffraction patterns. AgileFD models alloying-based peak shifting through a novel expansion of convolutional nonnegative matrix factorization, which not only improves the identification of constituent phases but also maps their concentration and lattice parameter as a function of composition. By incorporating Gibbs' phase rule into the algorithm, physically meaningful phase maps are obtained with unsupervised operation, and more refined solutions are attained by injecting expert knowledge of the system. The algorithm is demonstrated through investigation of the V–Mn–Nb oxide system, where decomposition of eight oxide phases, including two with substantial alloying, provides the first phase map for this pseudoternary system. This phase map enables interpretation of high-throughput band gap data, leading to the discovery of new solar light absorbers and the alloying-based tuning of the direct-allowed band gap energy of MnV2O6. Lastly, the open-source family of AgileFD algorithms can be implemented into a broad range of high-throughput workflows to accelerate materials discovery.
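AgileFD itself extends convolutional nonnegative matrix factorization with peak-shift terms and Gibbs-rule constraints; those extensions are not reproduced here. As background, a minimal plain-NMF sketch with the classic Lee-Seung multiplicative updates:

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9):
    """Basic NMF with Lee-Seung multiplicative updates: X ~= W @ H.
    AgileFD extends this core idea with convolutional (peak-shift) terms
    and phase-rule constraints; those extensions are not sketched here."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, k)); H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)     # update activations
        W *= (X @ H.T) / (W @ H @ H.T + eps)     # update basis patterns
    return W, H

# Toy "diffraction" data: 3 latent basis patterns mixed across 50 samples
rng = np.random.default_rng(1)
X = rng.random((200, 3)) @ rng.random((3, 50))
W, H = nmf(X, k=3)
print(float(np.linalg.norm(X - W @ H) / np.linalg.norm(X)))
```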
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite-horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic, and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and to compute the iterative control law, respectively, facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
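A hedged sketch of the core idea, value iteration initialized from an arbitrary positive semi-definite function, on a discretized one-dimensional system. The dynamics, cost, and grids are illustrative; the paper's algorithm additionally uses neural-network approximators and admissibility-based termination criteria.

```python
import numpy as np

# Illustrative discrete-time nonlinear system and stage cost (not from the paper)
f = lambda x, u: 0.9 * np.sin(x) + u           # x_{k+1} = f(x_k, u_k)
U = lambda x, u: x**2 + u**2                   # utility (stage cost)

xs = np.linspace(-2, 2, 201)                   # discretized state space
us = np.linspace(-1, 1, 41)                    # discretized control set
V = 5.0 * xs**2                                # arbitrary PSD initial value fn

for _ in range(300):                           # value iteration sweeps
    # Q[i, j] = U(x_i, u_j) + V(f(x_i, u_j)), V interpolated on the grid
    Xg, Ug = np.meshgrid(xs, us, indexing="ij")
    Vnext = np.interp(np.clip(f(Xg, Ug), xs[0], xs[-1]), xs, V)
    Q = U(Xg, Ug) + Vnext
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:       # converged to the fixed point
        break
    V = V_new

policy = us[Q.argmin(axis=1)]                  # greedy control law
print(float(V[100]), float(policy[150]))       # value at x = 0, control at x = 1
```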
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate under lower-order bivariate probability constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
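For intuition, the classic two-dimensional form of IPF, alternately rescaling a probability table to match imposed row and column marginals, is sketched below; the article applies the same fitting idea to a multivariate facies probability with bivariate constraints and sparse-matrix marginalization, which this sketch does not reproduce.

```python
import numpy as np

def ipf(seed, row_marginals, col_marginals, n_iter=100, tol=1e-10):
    """Iterative proportional fitting of a 2-D probability table: scale rows
    then columns until both imposed marginals are matched."""
    P = seed.astype(float).copy()
    for _ in range(n_iter):
        P *= (row_marginals / P.sum(axis=1))[:, None]   # fit row sums
        P *= (col_marginals / P.sum(axis=0))[None, :]   # fit column sums
        if np.allclose(P.sum(axis=1), row_marginals, atol=tol):
            break                                       # rows still match: done
    return P

seed = np.ones((3, 3)) / 9.0                            # initial estimate
P = ipf(seed, np.array([0.2, 0.5, 0.3]), np.array([0.4, 0.4, 0.2]))
print(P.sum(axis=1), P.sum(axis=0))
```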
Generation and assessment of turntable SAR data for the support of ATR development
NASA Astrophysics Data System (ADS)
Cohen, Marvin N.; Showman, Gregory A.; Sangston, K. James; Sylvester, Vincent B.; Gostin, Lamar; Scheer, C. Ruby
1998-10-01
Inverse synthetic aperture radar (ISAR) imaging on a turntable-tower test range permits convenient generation of high resolution two-dimensional images of radar targets under controlled conditions for testing SAR image processing and for supporting automatic target recognition (ATR) algorithm development. However, turntable ISAR images are often obtained under near-field geometries and hence may suffer geometric distortions not present in airborne SAR images. In this paper, turntable data collected at Georgia Tech's Electromagnetic Test Facility are used to begin to assess the utility of two- dimensional ISAR imaging algorithms in forming images to support ATR development. The imaging algorithms considered include a simple 2D discrete Fourier transform (DFT), a 2-D DFT with geometric correction based on image domain resampling, and a computationally-intensive geometric matched filter solution. Images formed with the various algorithms are used to develop ATR templates, which are then compared with an eye toward utilization in an ATR algorithm.
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Hogg, David W.; Lang, Dustin; Goodman, Jonathan
2013-03-01
We introduce a stable, well-tested Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by Goodman & Weare (2010). The code is open source and has already been used in several published projects in the astrophysics literature. The algorithm behind emcee has several advantages over traditional MCMC sampling methods, and it has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One major advantage of the algorithm is that it requires hand-tuning of only 1 or 2 parameters, compared to ~N^2 for a traditional algorithm in an N-dimensional parameter space. In this document, we describe the algorithm and the details of our implementation. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort. The code is available online at http://dan.iel.fm/emcee under the GNU General Public License v2.
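Since emcee is a real, documented package, a short usage example consistent with its modern (version 3) public API is shown below; the isotropic 2-D Gaussian target is illustrative. (The 2013 release exposed a slightly different interface, e.g. sampler.flatchain.)

```python
import numpy as np
import emcee

def log_prob(theta):
    """Log-probability of an isotropic 2-D Gaussian target."""
    return -0.5 * np.sum(theta**2)

ndim, nwalkers = 2, 32
p0 = np.random.randn(nwalkers, ndim)              # initial walker positions
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 1000)
samples = sampler.get_chain(flat=True)            # shape (nwalkers*1000, ndim)
print(samples.mean(axis=0), samples.std(axis=0))
```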
Algorithm to determine the percolation largest component in interconnected networks.
Schneider, Christian M; Araújo, Nuno A M; Herrmann, Hans J
2013-04-01
Interconnected networks have been shown to be much more vulnerable to random and targeted failures than isolated ones, raising several interesting questions regarding the identification and mitigation of their risk. The paradigm to address these questions is the percolation model, where the resilience of the system is quantified by the dependence of the size of the largest cluster on the number of failures. Numerically, the major challenge is the identification of this cluster and the calculation of its size. Here, we propose an efficient algorithm to tackle this problem. We show that the algorithm scales as O(N log N), where N is the number of nodes in the network, a significant improvement compared to O(N^2) for a greedy algorithm, which permits studying much larger networks. Our new strategy can be applied to any network topology and distribution of interdependencies, as well as any sequence of failures.
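The abstract does not spell out the data structure, but the core subtask, maintaining the largest component under edge insertions (failures processed in reverse), is naturally handled with union-find; a minimal sketch follows. The toy edge list is illustrative.

```python
class UnionFind:
    """Union-find with path halving and union by size; tracks the largest
    component as edges are added."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.largest = 1

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra                   # attach smaller root to larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.largest = max(self.largest, self.size[ra])

# Largest-cluster curve as edges "un-fail" in reverse attack order
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5)]   # toy network
uf = UnionFind(6)
for e in reversed(edges):
    uf.union(*e)
    print(e, uf.largest)
```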
A Linear Bicharacteristic FDTD Method
NASA Technical Reports Server (NTRS)
Beggs, John H.
2001-01-01
The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
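The ACE pattern is generic: alternately run a full conditional optimization over each mutually exclusive parameter block until the joint objective stabilizes. A minimal sketch on an illustrative two-block nonlinear least-squares problem (not the flexible survival models of the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective, nonlinear in (a, b) jointly but easy per block
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-3.0 * x) + np.random.default_rng(0).normal(0, 0.01, 50)
loss = lambda a, b: np.sum((y - a * np.exp(-b * x)) ** 2)

a, b = 1.0, 1.0                                       # initial values
for sweep in range(20):                               # alternating conditional steps
    a = minimize(lambda p: loss(p[0], b), [a]).x[0]       # fit a, holding b fixed
    b_new = minimize(lambda p: loss(a, p[0]), [b]).x[0]   # fit b, holding a fixed
    if abs(b_new - b) < 1e-8:                         # joint objective stabilized
        b = b_new
        break
    b = b_new
print(round(a, 3), round(b, 3))
```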
A Test Suite for 3D Radiative Hydrodynamics Simulations of Protoplanetary Disks
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Durisen, R. H.; Nordlund, A.; Lord, J.
2006-12-01
Radiative hydrodynamics simulations of protoplanetary disks with different treatments for radiative cooling demonstrate disparate evolutions (see Durisen et al. 2006, PPV chapter). Some of these differences include the effects of convection and metallicity on disk cooling and the susceptibility of the disk to fragmentation. Because a principal reason for these differences may be the treatment of radiative cooling, the accuracy of cooling algorithms must be evaluated. In this paper we describe a radiative transport test suite, and we challenge all researchers who use radiative hydrodynamics to study protoplanetary disk evolution to evaluate their algorithms with these tests. The test suite can be used to demonstrate an algorithm's accuracy in transporting the correct flux through an atmosphere and in reaching the correct temperature structure, to test the algorithm's dependence on resolution, and to determine whether the algorithm permits or inhibits convection when expected. In addition, we use this test suite to demonstrate the accuracy of a newly developed radiative cooling algorithm that combines vertical rays with flux-limited diffusion. This research was supported in part by a Graduate Student Researchers Program fellowship.
How accurate is automated gap filling of metabolic models?
Karp, Peter D; Weaver, Daniel; Latendresse, Mario
2018-06-19
Reaction gap filling is a computational technique for proposing the addition of reactions to genome-scale metabolic models to permit those models to run correctly. Gap filling completes what are otherwise incomplete models that lack fully connected metabolic networks. The models are incomplete because they are derived from annotated genomes in which not all enzymes have been identified. Here we compare the results of applying an automated likelihood-based gap filler within the Pathway Tools software with the results of manually gap filling the same metabolic model. Both gap-filling exercises were applied to the same genome-derived qualitative metabolic reconstruction for Bifidobacterium longum subsp. longum JCM 1217, and to the same modeling conditions: anaerobic growth under four nutrients producing 53 biomass metabolites. The solution computed by the gap-filling program GenDev contained 12 reactions, but closer examination showed that the solution was not minimal; two of the twelve reactions can be removed to yield a set of ten reactions that enable model growth. The manually curated solution contained 13 reactions, eight of which were shared with the 12-reaction computed solution. Thus, GenDev achieved recall of 61.5% and precision of 66.6%. These results suggest that although computational gap fillers are populating metabolic models with significant numbers of correct reactions, automatically gap-filled metabolic models also contain significant numbers of incorrect reactions. Our conclusion is that manual curation of gap-filler results is needed to obtain high-accuracy models. Many of the differences between the manual and automatic solutions resulted from using expert biological knowledge to direct the choice of reactions within the curated solution, such as reactions specific to the anaerobic lifestyle of B. longum.
Cervera, Javier; Meseguer, Salvador; Mafe, Salvador
2017-08-17
We have studied theoretically the microRNA (miRNA) intercellular transfer through voltage-gated gap junctions in terms of a biophysically grounded system of coupled differential equations. Instead of modeling a specific system, we use a general approach describing the interplay between the genetic mechanisms and the single-cell electric potentials. The dynamics of the multicellular ensemble are simulated under different conditions including spatially inhomogeneous transcription rates and local intercellular transfer of miRNAs. These processes result in spatiotemporal changes of miRNA, mRNA, and ion channel protein concentrations that eventually modify the bioelectrical states of small multicellular domains because of the ensemble average nature of the electrical potential. The simulations allow a qualitative understanding of the context-dependent nature of the effects observed when specific signaling molecules are transferred through gap junctions. The results suggest that an efficient miRNA intercellular transfer could permit the spatiotemporal control of small cellular domains by the conversion of single-cell genetic and bioelectric states into multicellular states regulated by the gap junction interconnectivity.
Ricken, Roland; Wiethoff, Katja; Reinhold, Thomas; Schietsch, Kathrin; Stamm, Thomas; Kiermeir, Julia; Neu, Peter; Heinz, Andreas; Bauer, Michael; Adli, Mazda
2011-11-01
The German Algorithm Project, Phase 2 (GAP2) revealed that a standardized stepwise treatment regimen (SSTR) results in better treatment outcomes than treatment as usual (TAU) in depressed inpatients. The objective of this study was a health economic evaluation of SSTR based on a cost effectiveness analysis (CEA). GAP2 was a randomized controlled study with 148 patients. In an intention-to-treat (ITT) analysis, direct treatment costs for study duration (SD) and total time in hospital (TTH; enrolment to discharge) were calculated based on daily hospital charges, followed by a CEA to calculate cost expenditure per remitted patient. Treatment costs in SSTR compared to TAU were significantly lower for SD (SSTR: 10 830 € ± 8 632 €, TAU: 15 202 € ± 12 483 €; p = 0.026) and did not differ significantly for TTH (SSTR: 21 561 € ± 16 162 €; TAU: 18 248 € ± 13 454 €; p = 0.208). CEA revealed that the costs per remission in SSTR were significantly lower for SD (SSTR: 20 035 € ± 15 970 €; TAU: 38 793 € ± 31 853 €; p<0.0001) and TTH (SSTR: 31 285 € ± 23 451 €; TAU: 38 581 € ± 28 449 €, p = 0.041). Indirect costs were not assessed. Different dropout rates in TAU and SSTR complicated interpretation of the data. An SSTR-based algorithm results in superior cost effectiveness at no significant extra cost. Implementation of treatment algorithms in inpatient care may help reduce treatment costs. Copyright © 2011 Elsevier B.V. All rights reserved.
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which can significantly degrade classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. The resulting multi-scale non-negative sparse coding features are then combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, substantially reduce the semantic gap and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Design of air-gapped magnetic-core inductors for superimposed direct and alternating currents
NASA Technical Reports Server (NTRS)
Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.
1976-01-01
Using data on standard magnetic-material properties and standard core sizes for air-gap-type cores, an algorithm designed for a computer solution is developed which optimally determines the air-gap length and locates the quiescent point on the normal magnetization curve so as to yield an inductor design with the minimum number of turns for a given ac voltage and frequency and with a given dc bias current superimposed in the same winding. Magnetic-material data used in the design are the normal magnetization curve and a family of incremental permeability curves. A second procedure, which requires a simpler set of calculations, starts from an assigned quiescent point on the normal magnetization curve and first screens candidate core sizes for suitability, then determines the required turns and air-gap length.
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy, and generally few features are available for segmentation; conventional approaches to identifying neuron structure from EM images are therefore unsuccessful. We present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate a Gaussian pyramid of the EM image; at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to detect structure boundaries; finally, we assemble the detected boundaries using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, aiming to minimize the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, the most important performance measurements for structure segmentation, were reduced to very low values. Comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better at accurately identifying substructures in EM images and is therefore useful for identifying imaging features related to brain diseases.
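A hedged sketch of the detection stage described above: build a Gaussian pyramid, take the Laplacian-of-Gaussian response at each level, and fuse the upsampled responses by pixelwise maximum. The sigma values, fusion rule details, and toy image are assumptions; the SARSA(λ)-based gap amendment step is not sketched.

```python
import numpy as np
from scipy import ndimage

def multiscale_log_boundaries(img, levels=3):
    """Gaussian-pyramid Laplacian-of-Gaussian boundary map, fused across
    scales by upsampling and taking the pixelwise maximum. A simplified
    sketch of the detection stage only."""
    fused = np.zeros_like(img, dtype=float)
    level_img = img.astype(float)
    for lv in range(levels):
        response = np.abs(ndimage.gaussian_laplace(level_img, sigma=1.5))
        zoom = [o / s for o, s in zip(img.shape, response.shape)]
        fused = np.maximum(fused, ndimage.zoom(response, zoom, order=1))
        # next pyramid level: smooth, then downsample by 2
        level_img = ndimage.gaussian_filter(level_img, sigma=1.0)[::2, ::2]
    return fused / (fused.max() + 1e-12)

img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0   # toy "membrane" patch
boundaries = multiscale_log_boundaries(img)
print(boundaries.shape, float(boundaries.max()))
```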
Detection of alpha radiation in a beta radiation field
Mohagheghi, Amir H.; Reese, Robert P.
2001-01-01
An apparatus and method for detecting alpha particles in the presence of high activities of beta particles utilizing an alpha spectrometer. The apparatus of the present invention utilizes a magnetic field applied around the sample in an alpha spectrometer to deflect the beta particles from the sample prior to reaching the detector, thus permitting detection of low concentrations of alpha particles. In the method of the invention, the strength of magnetic field required to adequately deflect the beta particles and permit alpha particle detection is given by an algorithm that controls the field strength as a function of sample beta energy and the distance of the sample to the detector.
Regulating danger on the highways: hours of service regulations.
Mansfield, Daniel; Kryger, Meir
2015-12-01
Current hours of service regulations governing commercial truck drivers in place in the United States, Canada, Australia, and the European Union are summarized and compared to facilitate the assessment of the effectiveness of such provisions in preventing fatigue and drowsiness among truck drivers. Current hours of service provisions governing commercial truck drivers were derived from governmental sources. The commercial truck driver hours of service provisions in the United States, Canada, and the European Union permit drivers to work 14 hours a day, and those of Australia 12 hours a day, on a regular basis. The regulations do not state what a driver may do with time off. They are consistent with a driver being able to drive after 24 hours without sleep. They do not take into account circadian rhythm by linking driving or rest to time of day. Current hours of service regulations governing commercial truck drivers leave gaps: they permit drivers to work long hours on a regular basis, permit driving after no sleep for 24 hours, and fail to take into account the importance of circadian rhythm, endangering the public safety and the truck drivers themselves. Copyright © 2015 National Sleep Foundation. Published by Elsevier Inc. All rights reserved.
Agent-based game theory modeling for driverless vehicles at intersections.
DOT National Transportation Integrated Search
2013-02-01
This report presents three research efforts that were published in various journals. The first research effort presents a reactive-driving agent based algorithm for modeling driver left turn gap acceptance behavior at signalized intersections. This m...
Algorithm Engineering: Concepts and Practice
NASA Astrophysics Data System (ADS)
Chimani, Markus; Klein, Karsten
Over the last years the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: it starts with the design of the algorithm, followed by analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretic performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories, shortest path problems and text search, where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.
Narrowband resonant transmitter
Hutchinson, Donald P.; Simpson, Marcus L.; Simpson, John T.
2004-06-29
A transverse-longitudinal integrated optical resonator (TLIR) is disclosed which includes a waveguide, a first and a second subwavelength resonant grating in the waveguide, and at least one photonic band gap resonant structure (PBG) in the waveguide. The PBG is positioned between the first and second subwavelength resonant gratings. An electro-optic waveguide material may be used to permit tuning the TLIR and to permit the TLIR to perform signal modulation and switching. The TLIR may be positioned on a bulk substrate die with one or more electronic and optical devices and may be communicably connected to the same. A method for fabricating a TLIR including fabricating a broadband reflective grating is disclosed. A method for tuning the TLIR's transmission resonance wavelength is also disclosed.
Transverse-longitudinal integrated resonator
Hutchinson, Donald P [Knoxville, TN; Simpson, Marcus L [Knoxville, TN; Simpson, John T [Knoxville, TN
2003-03-11
A transverse-longitudinal integrated optical resonator (TLIR) is disclosed which includes a waveguide, a first and a second subwavelength resonant grating in the waveguide, and at least one photonic band gap resonant structure (PBG) in the waveguide. The PBG is positioned between the first and second subwavelength resonant gratings. An electro-optic waveguide material may be used to permit tuning the TLIR and to permit the TLIR to perform signal modulation and switching. The TLIR may be positioned on a bulk substrate die with one or more electronic and optical devices and may be communicably connected to the same. A method for fabricating a TLIR including fabricating a broadband reflective grating is disclosed. A method for tuning the TLIR's transmission resonance wavelength is also disclosed.
Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J
2016-06-01
The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
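The two-plus-n-groups parameterization lends itself to a compact implementation. Below is a minimal sketch, assuming the standard MOLLI signal model S(TI) = A - B·exp(-TI/T1*) with A and T1* shared across inversion groupings and one B_g per grouping; all names and the synthetic data are illustrative, not taken from the paper.

```python
# Sketch of IG fitting: 2 shared parameters (A, T1*) plus one B_g per
# inversion grouping, so incomplete recovery between groupings is
# absorbed by B_g. Synthetic data only; not the paper's implementation.
import numpy as np
from scipy.optimize import least_squares

def ig_residuals(params, ti, signal, group):
    n_groups = group.max() + 1
    a, t1star = params[0], params[1]
    b = params[2:2 + n_groups]               # one B_g per inversion grouping
    model = a - b[group] * np.exp(-ti / t1star)
    return model - signal

# Synthetic example: two inversion groupings with different effective B_g.
ti = np.array([100., 180., 260., 1100., 1180., 2100., 120., 200., 1120.])
group = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1])
true = dict(a=1.0, t1star=900., b=[1.9, 1.6])
signal = true["a"] - np.take(true["b"], group) * np.exp(-ti / true["t1star"])

x0 = np.r_[1.0, 500.0, np.full(group.max() + 1, 1.5)]  # 2 + n_groups params
fit = least_squares(ig_residuals, x0, args=(ti, signal, group))
a, t1star, b0 = fit.x[0], fit.x[1], fit.x[2]
t1 = t1star * (b0 / a - 1.0)   # usual Look-Locker correction, per grouping
print(f"T1* = {t1star:.0f} ms, corrected T1 (group 0) = {t1:.0f} ms")
```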
Algorithms and programming tools for image processing on the MPP:3
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1987-01-01
This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.
Techniques for the Analysis of Spectral and Orbital Congestion in Space Systems.
1984-03-01
Appendix 29 gives the appropriate equations for the two cases and provides algorithms for polarization isolation, topocentric and geocentric... The PDP form is maintained by MITRE Dept. D97, which provides services to run the program when staffing permits. NASA Lewis has used the results in a...
Generating Hierarchical Document Indices from Common Denominators in Large Document Collections.
ERIC Educational Resources Information Center
O'Kane, Kevin C.
1996-01-01
Describes an algorithm for computer generation of hierarchical indexes for document collections. The resulting index, when presented with a graphical interface, provides users with a view of a document collection that permits general browsing and informal search activities via an access method that requires no keyboard entry or prior knowledge of…
Development of machine-vision system for gap inspection of muskmelon grafted seedlings.
Liu, Siyao; Xing, Zuochang; Wang, Zifan; Tian, Subo; Jahun, Falalu Rabiu
2017-01-01
Grafting robots have been developed worldwide, but some auxiliary tasks, such as gap inspection of grafted seedlings, still need to be done by humans. A machine-vision system for gap inspection of grafted muskmelon seedlings was developed in this study. The image acquisition system consists of a CCD camera, a lens, and a front white lighting source. The image of the inspected gap was processed and analyzed with HALCON 12.0 software. The recognition algorithm is based on the principle of deformable template matching. A template is first created from an image of a qualified grafted-seedling gap. The gap image of each grafted seedling is then compared with this template to determine their matching degree, which ranges from 0 to 1 according to their similarity: the less similar the gap is to the template, the smaller the matching degree. Finally, the gap is output as qualified or unqualified: if the matching degree is less than 0.58, or no match is found, the gap is judged unqualified; otherwise it is judged qualified. To test the system, 100 muskmelon seedlings were grafted and inspected. Results showed that the machine-vision system agreed with human visual inspection of gap qualification in 98% of cases, and its inspection speed can reach 15 seedlings·min-1. With this system, the gap inspection process in grafting can be fully automated, making it a key step toward fully automatic grafting robots.
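As a rough illustration of the accept/reject rule described above, the sketch below substitutes OpenCV's normalized cross-correlation for HALCON's proprietary deformable template matching, keeping the paper's 0.58 threshold on the matching degree; the file names are placeholders.

```python
# Simplified stand-in for the gap inspection step: plain normalized
# cross-correlation instead of deformable matching, with the paper's
# accept threshold of 0.58 on the matching score.
import cv2

TEMPLATE_PATH = "qualified_gap_template.png"   # image of a qualified graft gap
THRESHOLD = 0.58                               # accept threshold from the paper

template = cv2.imread(TEMPLATE_PATH, cv2.IMREAD_GRAYSCALE)

def inspect_gap(image_path: str) -> bool:
    """Return True if the grafted-seedling gap matches the template."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, _ = cv2.minMaxLoc(scores)      # best matching degree found
    return best >= THRESHOLD

print("qualified" if inspect_gap("seedling_042.png") else "unqualified")
```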
High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects
NASA Technical Reports Server (NTRS)
Schutt-Aine, Jose E.
1996-01-01
The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.
UAV Control on the Basis of 3D Landmark Bearing-Only Observations
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-01-01
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the use of a 3D RANSAC algorithm developed on the basis of the landmarks' position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining an unbiased UAV position estimate with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations. PMID:26633394
Haemoglobinopathy diagnosis: algorithms, lessons and pitfalls.
Bain, Barbara J
2011-09-01
Diagnosis of haemoglobinopathies, including thalassaemias, can result from either a clinical suspicion of a disorder of globin chain synthesis or from follow-up of an abnormality detected during screening. Screening may be carried out as part of a well defined screening programme or be an ad hoc or opportunistic test. Screening may be preoperative, neonatal, antenatal, preconceptual, premarriage or targeted at specific groups perceived to be at risk. Screening in the setting of haemoglobinopathies may be directed at optimising management of a disorder by early diagnosis, permitting informed reproductive choice or preventing a serious disorder by offering termination of pregnancy. Diagnostic methods and algorithms will differ according to the setting. As the primary test, high performance liquid chromatography is increasingly used and haemoglobin electrophoresis less so with isoelectric focussing being largely confined to screening programmes and referral centres, particularly in newborns. Capillary electrophoresis is being increasingly used. All these methods permit only a presumptive diagnosis with definitive diagnosis requiring either DNA analysis or protein analysis, for example by tandem mass spectrometry. Copyright © 2011 Elsevier Ltd. All rights reserved.
Mango: multiple alignment with N gapped oligos.
Zhang, Zefeng; Lin, Hao; Li, Ming
2008-06-01
Multiple sequence alignment is a classical and challenging task. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art works suffer from the "once a gap, always a gap" phenomenon. Is there a radically new way to do multiple sequence alignment? In this paper, we introduce a novel and orthogonal multiple sequence alignment method, using both multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole and tries to build the alignment vertically, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds have proved significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks, showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0, and Kalign 2.0. We have further demonstrated the scalability of MANGO on very large datasets of repeat elements. MANGO can be downloaded at http://www.bioinfo.org.cn/mango/ and is free for academic usage.
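For readers unfamiliar with spaced seeds, the sketch below illustrates the underlying matching primitive, assuming the usual definition: a seed such as 1101011 requires identity only at the '1' positions, which makes it more sensitive than a consecutive k-mer of the same weight. The seed and sequences are toy examples; MANGO's multiple optimized seeds are not computed here.

```python
# Minimal spaced-seed hit detection between two sequences: index one
# sequence by its masked k-mers, then probe with the other's masked k-mers.
from collections import defaultdict

SEED = "1101011"                       # '1' = must match, '0' = don't care
CARE = [i for i, c in enumerate(SEED) if c == "1"]

def spaced_kmers(seq, seed_len, care):
    """Yield (masked key, position) for every window of the sequence."""
    for pos in range(len(seq) - seed_len + 1):
        yield "".join(seq[pos + i] for i in care), pos

def seed_hits(a, b):
    """All position pairs where a and b agree on the seed's '1' positions."""
    index = defaultdict(list)
    for key, pos in spaced_kmers(a, len(SEED), CARE):
        index[key].append(pos)
    return [(pa, pb)
            for key, pb in spaced_kmers(b, len(SEED), CARE)
            for pa in index[key]]

# The mismatch at position 3 falls on a don't-care slot of the window at 1.
print(seed_hits("ACGTACGTAC", "ACGAACGTAC"))   # -> [(1, 1)]
```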
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
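The sample-selection problem lends itself to a dynamic-programming sketch. The following is a minimal illustration, assuming squared-error cost under linear interpolation between retained samples; the paper's exact cost structure and network formulation may differ in detail.

```python
# O(n^2 * k) dynamic program: keep k samples (including both endpoints)
# of a signal so that linear interpolation through them has minimal
# total squared error. Illustrative only; not the paper's algorithm.
import numpy as np

def interp_cost(x, i, j):
    """Squared error of interpolating x[i..j] linearly between x[i], x[j]."""
    t = np.arange(i, j + 1)
    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i:j + 1] - line) ** 2))

def best_subset(x, k):
    """Indices of the k retained samples minimizing interpolation error."""
    n = len(x)
    cost = np.full((n, k), np.inf)
    prev = np.zeros((n, k), dtype=int)
    cost[0, 0] = 0.0
    for m in range(1, k):                     # m = retained samples so far
        for j in range(m, n):
            for i in range(m - 1, j):
                c = cost[i, m - 1] + interp_cost(x, i, j)
                if c < cost[j, m]:
                    cost[j, m], prev[j, m] = c, i
    idx, j = [n - 1], n - 1
    for m in range(k - 1, 0, -1):             # backtrack from the last sample
        j = prev[j, m]
        idx.append(j)
    return idx[::-1]

ecg = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.02 * np.random.randn(120)
print(best_subset(ecg, k=12))
```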
2006-07-27
Technical horizon sensors. Over the past few years, a remarkable proliferation of designs for micro-aerial vehicles (MAVs) has occurred... [Figure captions: sky scans with a GaP UV photodiode along three vertical paths; angle of view 30 degrees, 50% cloud cover, sun at...] Australia. Email: gert.stange@anu.edu.au. A biomimetic algorithm for flight stabilization in airborne vehicles, based on dragonfly ocellar vision.
Analytical study of beam handling and emittance control
NASA Astrophysics Data System (ADS)
Thompson, James R.; Sloan, M. L.
1993-12-01
The thrust of our research on beam handling and emittance control was to explore how one might design high current electron accelerators with the preservation of high beam quality as the primary design consideration. We considered high current induction linacs in the parameter class of the ETA/ATA accelerators at LLNL, but with improvements to the accelerator gap design and other features to permit a significant increase in the deliverable beam brightness. Our approach for beam quality control centered on the use of solenoidal magnetic focusing through such induction accelerators, together with gently-shaped (adiabatic) acceleration gaps. This approach offers several tools for the control of beam quality. The strength and axial variation of the solenoidal magnetic field may be designed, as may the length and shape of the acceleration gaps, the loading of the gaps, and the axial spacing from gap to gap. This research showed that each of these design features may individually be optimized to contribute to improved beam quality control, and by exploiting these features, it appears feasible to produce high current, high energy electron beams possessing breakthrough beam quality and brightness. Applications which have been technologically unachievable may for the first time become possible. One such application is the production of high performance free electron lasers at very short wavelengths, extending down to the optical (less than 1 micron) regime.
NASA Astrophysics Data System (ADS)
Huang, Jie; Li, Piao; Yao, Weixing
2018-05-01
A loosely coupled fluid-structural thermal numerical method is introduced in this paper for thermal protection system (TPS) gap thermal control analysis. The aerodynamic heating and the structural thermal response are analyzed by computational fluid dynamics (CFD) and numerical heat transfer (NHT) methods, respectively. An interpolation algorithm based on the control surface is adopted for data exchange on the coupled surface. To verify the precision of the loosely coupled method, a circular tube example was analyzed; the computed wall temperature agrees well with the test result. TPS gap thermal control performance was then studied successfully with the loosely coupled method. The gap heat flux is mainly concentrated in the small high-temperature region at the top of the gap. Moreover, the TPS gap temperature and the power of the active cooling system (CCS) calculated by the traditional uncoupled method are markedly higher than those calculated by the coupled method. The reason is that the uncoupled method does not account for the coupling between aerodynamic heating and the structural thermal response, whereas the coupled method does, so TPS gap thermal control performance can be analyzed more accurately by the coupled method.
A globally well-posed finite element algorithm for aerodynamics applications
NASA Technical Reports Server (NTRS)
Iannelli, G. S.; Baker, A. J.
1991-01-01
A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.
Carbon monoxide mixing ratio inference from gas filter radiometer data
NASA Technical Reports Server (NTRS)
Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Saylor, M. S.; Gormsen, B. B.
1983-01-01
A new algorithm has been developed which permits, for the first time, real time data reduction of nadir measurements taken with a gas filter correlation radiometer to determine tropospheric carbon monoxide concentrations. The algorithm significantly reduces the complexity of the equations to be solved while providing accuracy comparable to line-by-line calculations. The method is based on a regression analysis technique using a truncated power series representation of the primary instrument output signals to infer directly a weighted average of trace gas concentration. The results produced by a microcomputer-based implementation of this technique are compared with those produced by the more rigorous line-by-line methods. This algorithm has been used in the reduction of Measurement of Air Pollution from Satellites, Shuttle, and aircraft data.
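As a toy illustration of the regression approach, the sketch below fits a truncated power series in two hypothetical instrument channels against known mixing ratios, so that real-time inference reduces to a single dot product; channel names, coefficients, and data are all invented.

```python
# Regression on a truncated power series of instrument signals, as a
# stand-in for the paper's inference scheme. Everything here is synthetic.
import numpy as np

def design_matrix(v, dv, order=2):
    """Truncated power series in the two signals, up to the given order."""
    cols = [np.ones_like(v)]
    for p in range(1, order + 1):
        for q in range(p + 1):
            cols.append(v ** (p - q) * dv ** q)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
v, dv = rng.uniform(0.2, 1.0, 200), rng.uniform(0.0, 0.5, 200)
co_true = 80 + 40 * v - 25 * dv + 10 * v * dv        # synthetic "truth", ppb
coeff, *_ = np.linalg.lstsq(design_matrix(v, dv), co_true, rcond=None)

# Real-time use: a single dot product per measurement.
print(design_matrix(np.array([0.6]), np.array([0.2])) @ coeff)
```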
Technique for Chestband Contour Shape-Mapping in Lateral Impact
Hallman, Jason J; Yoganandan, Narayan; Pintar, Frank A
2011-01-01
The chestband transducer permits noninvasive measurement of transverse plane biomechanical response during blunt thorax impact. Although experiments may reveal complex two-dimensional (2D) deformation response to boundary conditions, biomechanical studies have heretofore employed only uniaxial measurements to quantify chestband contours. The present study described and evaluated an algorithm by which source subject-specific contour data may be systematically mapped to a target generalized anthropometry for computational studies of biomechanical response or anthropomorphic test dummy development. Algorithm performance was evaluated using chestband contour datasets from two rigid lateral impact boundary conditions: flat wall and anterior-oblique wall. Comparing source and target anthropometry contours, peak deflections and deformation-time traces deviated by less than 4%. These results suggest that the algorithm is appropriate for 2D deformation response to lateral impact boundary conditions. PMID:21676399
A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.
Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T
2010-09-01
To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed to span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general-purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
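Although the paper's solver targets a specific convex class with accuracy guarantees, the mechanism of projection methods can be illustrated generically: dose bounds are half-spaces in beamlet-weight space, and cyclic orthogonal projection (POCS) drives the weights toward feasibility. A minimal sketch with an invented toy dose-influence matrix follows; it shows only the projection mechanism, not the paper's algorithm.

```python
# Cyclic projection onto half-space dose constraints (POCS), as a generic
# illustration of projection-based planning. Data are synthetic.
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto {y : a.y <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - viol * a / (a @ a)

def pocs(constraints, n, iters=500):
    x = np.zeros(n)
    for _ in range(iters):
        for a, b in constraints:
            x = project_halfspace(x, a, b)
        x = np.maximum(x, 0.0)        # beamlet weights must stay nonnegative
    return x

rng = np.random.default_rng(1)
D = rng.uniform(0, 1, (6, 4))         # toy dose-influence matrix (voxel x beamlet)
constraints = [(D[i], 10.0) for i in range(3)]          # max dose on 3 voxels
constraints += [(-D[i], -5.0) for i in range(3, 6)]     # min dose on 3 voxels
w = pocs(constraints, n=4)
print(np.round(D @ w, 2))             # doses after projection iterations
```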
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee T.
Isotope identification algorithms contained in the Gamma Detector Response and Analysis Software (GADRAS) can be used for real-time stationary measurement and search applications on platforms running Linux or Android operating systems. Since the background radiation can vary considerably due to variations in naturally-occurring radioactive materials (NORM), spectral algorithms can be substantially more sensitive to threat materials than search algorithms based strictly on count rate. Specific isotopes of interest can be designated for the search algorithm, which permits suppression of alarms for non-threatening sources, such as medical radionuclides. The same isotope identification algorithms that are used for search applications can also be used to process static measurements. The isotope identification algorithms follow the same protocols as those used by the Windows version of GADRAS, so files created under the Windows interface can be copied directly to processors on fielded sensors. The analysis algorithms contain provisions for gain adjustment and energy linearization, which enables direct processing of spectra as they are recorded by multichannel analyzers. Gain compensation is performed by utilizing photopeaks in background spectra. Incorporation of this energy calibration task into the analysis algorithm also eliminates one of the more difficult challenges associated with development of radiation detection equipment.
Quantum algorithms for Gibbs sampling and hitting-time estimation
Chowdhury, Anirban Narayan; Somma, Rolando D.
2017-02-01
In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √(Nβ/Z) and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^(3/2)), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P, whose inverse is an upper bound of the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analog classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.
Three-dimensional tracking for efficient fire fighting in complex situations
NASA Astrophysics Data System (ADS)
Akhloufi, Moulay; Rossi, Lucile
2009-05-01
Each year, hundreds of millions of hectares of forest burn, causing human and economic losses. For efficient fire fighting, personnel on the ground need tools permitting the prediction of fire front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed that permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from the YUV and RGB color spaces. Unlike other techniques, our algorithm does not require previous knowledge about the scene. The resulting fire regions are classified into homogeneous zones using clustering techniques. Contours are then extracted, and a feature detection algorithm is used to detect interest points such as local maxima and corners. Points extracted from the stereo images are then used to compute the 3D shape of the fire front, from which the fire volume is built. The final model is used to compute important spatial and temporal fire characteristics such as spread dynamics, local orientation, and heading direction. Tests conducted on the ground show the efficiency of the proposed scheme, which is being integrated with a mathematical fire spread model in order to predict and anticipate fire behaviour during fire fighting. Also of interest to fire-fighters is the proposed automatic segmentation technique, which can be used for early detection of fire in complex scenes.
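A rough sketch of this kind of color-rule segmentation is given below, combining a red-dominance rule in RGB with a chrominance rule in YUV; the thresholds are illustrative guesses, not the paper's, and the clustering and contour stages are reduced to a connected-components pass.

```python
# Color-rule fire segmentation sketch: red-dominant, bright pixels in RGB
# combined with a V > U chrominance rule in YUV. Thresholds are invented.
import cv2
import numpy as np

def fire_mask(bgr):
    b, g, r = cv2.split(bgr.astype(np.float32))
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    y, u, v = cv2.split(yuv)
    rule_rgb = (r > g) & (g > b) & (r > 180)         # red-dominant, bright
    rule_yuv = (v > u)                               # flame-like chrominance
    return ((rule_rgb & rule_yuv) * 255).astype(np.uint8)

frame = cv2.imread("fire_frame.png")                 # placeholder file name
mask = fire_mask(frame)
n, labels = cv2.connectedComponents(mask)            # homogeneous fire zones
print(f"{n - 1} candidate fire regions")
```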
A multi-scale segmentation approach to filling gaps in Landsat ETM+ SLC-off images
Maxwell, S.K.; Schmidt, Gail L.; Storey, James C.
2007-01-01
On 31 May 2003, the Landsat Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC) failed, causing the scanning pattern to exhibit wedge-shaped scan-to-scan gaps. We developed a method that uses coincident spectral data to fill the image gaps. This method uses a multi-scale segment model, derived from a previous Landsat SLC-on image (an image acquired prior to the SLC failure), to guide the spectral interpolation across the gaps in SLC-off images (images acquired after the SLC failure). This paper describes the process used to generate the segment model, provides details of the gap-fill algorithm used in deriving the segment-based gap-fill product, and presents the results of the gap-fill process applied to grassland, cropland, and forest landscapes. Our results indicate this product will be useful for a wide variety of applications, including regional-scale studies, general land cover mapping (e.g. forest, urban, and grass), crop-specific mapping and monitoring, and visual assessments. Applications that need to be cautious when using pixels in the gap areas include any that require per-pixel accuracy, such as urban characterization or impervious surface mapping, applications that use texture to characterize landscape features, and applications that require accurate measurements of small or narrow landscape features such as roads, farmsteads, and riparian areas.
Li, Runsheng; Hsieh, Chia-Ling; Young, Amanda; Zhang, Zhihong; Ren, Xiaoliang; Zhao, Zhongying
2015-01-01
Most next-generation sequencing platforms permit acquisition of high-throughput DNA sequences, but the relatively short read length limits their use in genome assembly or finishing. Illumina has recently released a technology called Synthetic Long-Read Sequencing that can produce reads of unusual length, i.e., predominantly around 10 Kb. However, a systematic assessment of their use in genome finishing and assembly is still lacking. We evaluate the promise and deficiencies of the long reads in these respects using the isogenic C. elegans genome, which contains no gaps. First, the reads are highly accurate and capable of recovering most types of repetitive sequences. However, the presence of tandem repetitive sequences prevents pre-assembly of long reads in the relevant genomic region. Second, the reads are able to reliably detect missing, but not extra, sequences in the C. elegans genome. Third, reads of smaller size are more capable of recovering repetitive sequences than those of bigger size. Fourth, at least 40 Kbp of missing genomic sequences were recovered in the C. elegans genome using the long reads. Finally, an N50 contig size of at least 86 Kbp can be achieved with 24× reads, but with substantial mis-assembly errors, highlighting the need for a novel assembly algorithm for the long reads. PMID:26039588
Colombet, B; Woodman, M; Badier, J M; Bénar, C G
2015-03-15
The importance of digital signal processing in clinical neurophysiology is growing steadily, involving clinical researchers and methodologists. There is a need to cross the gap between these communities by providing efficient delivery of newly designed algorithms to end users. We have developed such a tool, which both visualizes and processes data and, additionally, acts as a software development platform. AnyWave was designed to run on all common operating systems. It provides access to a variety of data formats and employs high-fidelity visualization techniques. It also allows the use of external tools as plug-ins, which can be developed in languages including C++, MATLAB and Python. In the current version, plug-ins allow computation of connectivity graphs (non-linear correlation h²) and time-frequency representations (Morlet wavelets). The software is freely available under the LGPL3 license. AnyWave is designed as an open, highly extensible solution, with an architecture that permits rapid delivery of new techniques to end users. We have developed the AnyWave software as an efficient neurophysiological data visualizer able to integrate state-of-the-art techniques. AnyWave offers an interface well suited to the needs of clinical research and an architecture designed for integrating new tools. We expect this software to strengthen the collaboration between clinical neurophysiologists and researchers in biomedical engineering and signal processing. Copyright © 2015 Elsevier B.V. All rights reserved.
Inverse problem of the vibrational band gap of periodically supported beam
NASA Astrophysics Data System (ADS)
Shi, Xiaona; Shu, Haisheng; Dong, Fuzhen; Zhao, Lei
2017-04-01
Research on periodic structures has a long history, with its main content confined to the forward problem. In this paper, the inverse problem is considered and an overall framework is proposed which includes two main stages, i.e., the band gap criterion and its optimization. As a preliminary investigation, the inverse problem of the flexural vibrational band gap of a periodically supported beam is analyzed. Based on existing knowledge of its forward problem, the band gap criterion is given in implicit form. Then, two cases with three independent parameters, namely the double supported case and the triple one, are studied in detail, and explicit expressions for the feasible domain are constructed by numerical fitting. Finally, the parameter optimization of the double supported case with three variables is conducted using a genetic algorithm, aiming for the best mean attenuation within a specified frequency band.
NASA Astrophysics Data System (ADS)
Krishna, Hemanth; Kumar, Hemantha; Gangadharan, Kalluvalappil
2017-08-01
A magnetorheological (MR) fluid damper offers a cost-effective solution for semiactive vibration control in an automobile suspension. The performance of an MR damper depends significantly on the electromagnetic circuit incorporated into it; the force developed by an MR fluid damper is highly influenced by the magnetic flux density induced in the fluid flow gap. In the present work, optimization of the electromagnetic circuit of an MR damper is discussed in order to maximize the magnetic flux density. The optimization procedure combines a genetic algorithm with design-of-experiments techniques. The results show that a fluid flow gap smaller than 1.12 mm causes a significant increase in magnetic flux density.
W-curve alignments for HIV-1 genomic comparisons.
Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H
2010-06-01
The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap-associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string-based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel cylindrical-coordinate distance method and a gap analysis method. HIV-1 C2-V5 env sequence regions from a mother/infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string-based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process is presented, including a comparison technique of aligning extremes of the curves to effectively phase-shift them past the HIV-1 gap problem. Besides yielding similar neighbor-joining phenogram topologies, most mother and infant C2-V5 sequences in the cohort pairs geometrically map closest to each other, indicating that W-curve heuristics overcame any gap problem.
Patterns of significant seismic quiescence in the Pacific Mexican coast
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, Alejandro; Rudolf-Navarro, Adolfo; Barrera-Ferrer, Amilcar; Angulo-Brown, Fernando
2014-05-01
Mexico is one of the countries with the highest seismicity. During the 20th century, 8% of all earthquakes in the world of magnitude greater than or equal to 7.0 took place in Mexico; on average, an earthquake of magnitude greater than or equal to 7.0 occurred in Mexico every two and a half years. Great earthquakes in Mexico have their epicenters on the Pacific coast, where several seismic gaps have been identified; for example, there is a mature gap along the coast of the state of Guerrero, which can potentially produce an earthquake of magnitude 8.2. With prognosis in mind, some researchers study the statistical behavior of certain physical parameters that could be related to the process of stress accumulation in the Earth's crust, while others study seismic catalogs in search of seismicity patterns that manifest before the occurrence of great earthquakes. Many authors have proposed that the study of seismicity rates is an appropriate technique for evaluating how close a seismic gap may be to rupture. We designed an algorithm for identification of patterns of significant seismic quiescence by using the definition of seismic quiescence proposed by Schreider (1990). This algorithm shows the area of quiescence where an earthquake of great magnitude will probably occur. We applied our algorithm to the earthquake catalogue of the Mexican Pacific coast located between 14 and 21 degrees North latitude and 94 and 106 degrees West longitude, with depths less than or equal to 60 km and magnitude greater than or equal to 4.2, covering September 1965 through December 2014. We have found significant patterns of seismic quietude before the earthquakes of Oaxaca (November 1978, Mw = 7.8), Petatlán (March 1979, Mw = 7.6), Michoacán (September 1985, Mw = 8.0 and Mw = 7.6) and Colima (October 1995, Mw = 8.0). Fortunately, earthquakes of great magnitude have not occurred in Mexico in this century; however, we have identified well-defined seismic quiescences in the Guerrero seismic gap, which are apparently correlated with the occurrence of silent earthquakes in 2002, 2006 and 2011, recently discovered by GPS technology. In fact, a possible silent earthquake with Mw = 7.6 occurred at this gap in 2002, lasting approximately 4 months and detected by continuous GPS receivers located over an area of ~550x250 square kilometers.
Extreme-scale Algorithms and Solver Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack
A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in a way that prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates, requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.
Minimizing the semantic gap in biomedical content-based image retrieval
NASA Astrophysics Data System (ADS)
Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2010-03-01
A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
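One standard way to realize a Ranking SVM is the pairwise transform: each pair of training images with different ground-truth relevance yields a difference vector labeled by which image should rank higher, and an ordinary linear SVM on these differences learns the rank function. A minimal sketch with synthetic features and relevance labels follows; it illustrates the technique generically, not the paper's exact pipeline.

```python
# Pairwise-transform Ranking SVM sketch: train a linear SVM on feature
# differences of image pairs with different relevance, then score by the
# learned weight vector. Features and labels are synthetic.
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def pairwise_transform(X, relevance):
    diffs, signs = [], []
    for i, j in combinations(range(len(X)), 2):
        if relevance[i] == relevance[j]:
            continue                       # ties give no ordering information
        diffs.append(X[i] - X[j])
        signs.append(1 if relevance[i] > relevance[j] else -1)
    return np.asarray(diffs), np.asarray(signs)

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 8))               # low-level features per image
w_true = rng.normal(size=8)
relevance = (X @ w_true > 0).astype(int)   # toy ground-truth relevance

D, s = pairwise_transform(X, relevance)
rank_svm = LinearSVC(C=1.0).fit(D, s)
scores = X @ rank_svm.coef_.ravel()        # learned rank function
print("top-5 retrieved:", np.argsort(-scores)[:5])
```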
Schleeweis, Karen; Goward, Samuel N.; Huang, Chengquan; Dwyer, John L.; Dungan, Jennifer L.; Lindsey, Mary A.; Michaelis, Andrew; Rishmawi, Khaldoun; Masek, Jeffery G.
2016-01-01
Using the NASA Earth Exchange platform, the North American Forest Dynamics (NAFD) project mapped forest history wall-to-wall, annually, for the contiguous US (1986–2010) using the Vegetation Change Tracker algorithm. As with any effort to identify real changes in remotely sensed time-series, data gaps, shifts in seasonality, misregistration, inconsistent radiometry and cloud contamination can be sources of error. We discuss the NAFD image selection and processing stream (NISPS), which was designed to minimize these sources of error. The NISPS image quality assessments highlighted issues with the Landsat archive and metadata, including inadequate georegistration, unreliability of the pre-2009 L5 cloud cover assessment algorithm, missing growing-season imagery and a paucity of clear views. Assessment maps of Landsat 5–7 image quantities and qualities are presented that offer novel perspectives on the growing-season archive considered for this study. Over 150,000 Landsat images were considered for the NAFD project. Optimally, one high-quality cloud-free image in each year, or a total of 12,152 images, would be used. However, to accommodate data gaps and cloud/shadow contamination, 23,338 images were needed. In 220 specific path-row image years no acceptable images were found, resulting in data gaps in the annual national map products.
MANGO: a new approach to multiple sequence alignment.
Zhang, Zefeng; Lin, Hao; Li, Ming
2007-01-01
Multiple sequence alignment is a classical and challenging task for biological sequence analysis. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds are provably significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.
Impact of dose engine algorithm in pencil beam scanning proton therapy for breast cancer.
Tommasino, Francesco; Fellin, Francesco; Lorentini, Stefano; Farace, Paolo
2018-06-01
Proton therapy for the treatment of breast cancer is attracting increasing interest due to the potential reduction of radiation-induced side effects such as cardiac and pulmonary toxicity. While several in silico studies have demonstrated the gain in plan quality offered by pencil beam scanning (PBS) compared to passive scattering techniques, the related dosimetric uncertainties have been poorly investigated so far. Five breast cancer patients were planned with the Raystation 6 analytical pencil beam (APB) and Monte Carlo (MC) dose calculation algorithms. Plans were optimized with APB, and MC was then used to recalculate the dose distribution. Movable snout and beam splitting techniques (i.e. using two sub-fields for the same beam entrance, one with and the other without a range shifter) were considered. PTV dose statistics were recorded. The same planning configurations were adopted for the experimental benchmark. Dose distributions were measured with a 2D array of ionization chambers and compared to the APB- and MC-calculated ones by means of a γ analysis (agreement criteria 3%, 3 mm). Our results indicate that, when using proton PBS for breast cancer treatment, the Raystation 6 APB algorithm does not achieve sufficient accuracy, especially with large air gaps. On the contrary, the MC algorithm resulted in much higher accuracy in all beam configurations tested and is therefore recommended. Centers where an MC algorithm is not yet available should consider a careful use of APB, possibly combined with a movable snout system or, in any case, with strategies aimed at minimizing air gaps. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Umphrey, Lisa; Breindahl, Morten; Brown, Alexandra; Saugstad, Ola Didrik; Thio, Marta; Trevisanuto, Daniele; Roehrg, Charles Christoph; Blennow, Mats
2018-05-25
Neonatal resuscitation (NR) combines a set of life-saving interventions in order to stabilize compromised newborns at birth or when critically ill. Médecins Sans Frontières/Doctors Without Borders (MSF), as an international medical-humanitarian organization working particularly in low-resource settings (LRS), assisted over 250,000 births in obstetric and newborn care aid projects in 2016 and provides thousands of newborn resuscitations annually. The Helping Babies Breathe (HBB) program has been used as formal guidance for basic resuscitation since 2012. However, in some MSF projects with the capacity to provide more advanced NR interventions but a lack of adapted guidance, staff have felt prompted to create their own advanced algorithms, which runs counter to the organization's aim for standardized protocols in all aspects of its care. The aim is to close a significant gap in neonatal care provision in LRS by establishing consensus on a protocol that would guide MSF field teams in their practice of more advanced NR. An independent committee of international experts was formed and met regularly from June 2016 to agree on the content and design of a new NR algorithm. Consensus was reached on a novel, mid-level NR algorithm in April 2017. The algorithm was accepted for use by MSF Operational Center Paris. This paper contributes to the literature on decision-making in the development of cognitive aids. The authors also highlight how critical gaps in healthcare delivery in LRS can be addressed, even when there is limited evidence to guide the process. © 2018 The Author(s) Published by S. Karger AG, Basel.
Do air-gaps behind soft body armour affect protection?
Tilsley, Lee; Carr, D J; Lankester, C; Malbon, C
2018-02-01
Body armour typically comprises a fabric garment covering the torso combined with hard armour (ceramic/composite). Some users wear only soft armour which provides protection from sharp weapons and pistol ammunition. It is usually recommended that body armour is worn against the body with no air-gaps being present between the wearer and the armour. However, air-gaps can occur in certain situations such as females around the breasts, in badly fitting armour and where manufacturers have incorporated an air-gap claiming improvements in thermophysiological burden. The effect of an air-gap on the ballistic protection and the back face signature (BFS) as a result of a non-perforating ballistic impact was determined. Armour panels representative of typical police armour (400x400 mm) were mounted on calibrated Roma Plastilina No 1 and impacted with 9 mm Luger FMJ (9×19 mm; full metal jacket; Dynamit Nobel DM11A1B2) ammunition at 365±10 m/s with a range of air-gaps (0-15 mm). Whether or not the ammunition perforated the armour was noted, the BFS was measured and the incidence of pencilling (a severe, deep and narrow BFS) was identified. For 0° impacts, a critical air-gap size of 10 mm is detrimental to armour performance for the armour/ammunition combination assessed in this work. Specifically, the incidences of pencilling were more common with a 10 mm air-gap and resulted in BFS depth:volume ratios ≥1.0. For impacts at 30° the armour was susceptible to perforation irrespective of air-gap. This work suggested that an air-gap behind police body armour might result in an increased likelihood of injury. It is recommended that body armour is worn with no air-gap underneath. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Keynejad, Roxanne C; Dua, Tarun; Barbui, Corrado; Thornicroft, Graham
2018-02-01
Despite mental, neurological and substance use (MNS) disorders being highly prevalent, there is a worldwide gap between service need and provision. WHO launched its Mental Health Gap Action Programme (mhGAP) in 2008, and the Intervention Guide (mhGAP-IG) in 2010. mhGAP-IG provides evidence-based guidance and tools for assessment and integrated management of priority MNS disorders in low and middle-income countries (LMICs), using clinical decision-making protocols. It targets a non-specialised primary healthcare audience, but has also been used by ministries, non-governmental organisations and academics, for mental health service scale-up in 90 countries. This review aimed to identify evidence to date for mhGAP-IG implementation in LMICs. We searched MEDLINE, Embase, PsycINFO, Web of Knowledge/Web of Science, Scopus, CINAHL, LILACS, SciELO/Web of Science, Cochrane, Pubmed databases and Google Scholar for studies reporting evidence, experience or evaluation of mhGAP-IG in LMICs, in any language. Data were extracted from included papers, but heterogeneity prevented meta-analysis. We conducted a systematic review of evidence to date, of mhGAP-IG implementation and evaluation in LMICs. Thirty-three included studies reported 15 training courses, 9 clinical implementations, 3 country contextualisations, 3 economic models, 2 uses as control interventions and 1 use to develop a rating scale. Our review identified the importance of detailed reports of contextual challenges in the field, alongside detailed protocols, qualitative studies and randomised controlled trials. The mhGAP-IG literature is substantial, relative to other published evaluations of clinical practice guidelines: an important contribution to a neglected field. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Digital processing of satellite imagery application to jungle areas of Peru
NASA Technical Reports Server (NTRS)
Pomalaza, J. C. (Principal Investigator); Pomalaza, C. A.; Espinoza, J.
1976-01-01
The author has identified the following significant results. The use of clustering methods permits the development of relatively fast classification algorithms that could be implemented in an inexpensive computer system with a limited amount of memory. Analysis of CCTs using these techniques can provide a great deal of detail, permitting the use of the maximum resolution of LANDSAT imagery. Cases were detected in which other classification techniques, using a Gaussian approximation for the distribution functions, could be used to advantage. For jungle areas, channels 5 and 7 can provide enough information to delineate drainage patterns, swamp and wet areas, and make a reasonably broad classification of forest types.
Sanati Nezhad, Amir; Naghavi, Mahsa; Packirisamy, Muthukumaran; Bhat, Rama; Geitmann, Anja
2013-01-01
Tip-growing cells have the unique property of invading living tissues and abiotic growth matrices. To do so, they exert significant penetrative forces. In plant and fungal cells, these forces are generated by the hydrostatic turgor pressure. Using the TipChip, a microfluidic lab-on-a-chip device developed for tip-growing cells, we tested the ability to exert penetrative forces generated in pollen tubes, the fastest-growing plant cells. The tubes were guided to grow through microscopic gaps made of elastic polydimethylsiloxane material. Based on the deformation of the gaps, the force exerted by the elongating tubes to permit passage was determined using finite element methods. The data revealed that increasing mechanical impedance was met by the pollen tubes through modulation of the cell wall compliance and, thus, a change in the force acting on the obstacle. Tubes that successfully passed a narrow gap frequently burst, raising questions about the sperm discharge mechanism in the flowering plants. PMID:23630253
On improving linear solver performance: a block variant of GMRES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Dennis, J M; Jessup, E R
2004-05-10
The increasing gap between processor performance and memory access time warrants the re-examination of data movement in iterative linear solver algorithms. For this reason, we explore and establish the feasibility of modifying a standard iterative linear solver algorithm in a manner that reduces the movement of data through memory. In particular, we present an alternative to the restarted GMRES algorithm for solving a single right-hand side linear system Ax = b based on solving the block linear system AX = B. Algorithm performance, i.e. time to solution, is improved by using the matrix A in operations on groups of vectors. Experimental results demonstrate the importance of implementation choices on data movement as well as the effectiveness of the new method on a variety of problems from different application areas.
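The data-movement argument can be seen directly in a small experiment: one sweep of A applied to a block of vectors reuses each loaded element of A across the whole block, whereas k separate matrix-vector products stream A from memory k times. The sketch below times the two; it demonstrates the memory-traffic effect only, not the block GMRES algorithm itself.

```python
# Timing demo: k separate matvecs A @ x versus one blocked product A @ X.
# Identical flop counts, very different data movement through memory.
import time
import numpy as np

n, k = 4000, 8
rng = np.random.default_rng(3)
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, k))

t0 = time.perf_counter()
for _ in range(20):
    for j in range(k):                 # k separate sweeps through A
        _ = A @ X[:, j]
t1 = time.perf_counter()
for _ in range(20):
    _ = A @ X                          # one sweep, A reused across the block
t2 = time.perf_counter()

print(f"separate matvecs: {t1 - t0:.2f}s,  blocked: {t2 - t1:.2f}s")
```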
Improving serum calcium test ordering according to a decision algorithm.
Faria, Daniel K; Taniguchi, Leandro U; Fonseca, Luiz A M; Ferreira-Junior, Mario; Aguiar, Francisco J B; Lichtenstein, Arnaldo; Sumita, Nairo M; Duarte, Alberto J S; Sales, Maria M
2018-05-18
To detect differences in the pattern of serum calcium test ordering before and after the implementation of a decision algorithm. We studied patients admitted to an internal medicine ward of a university hospital in April 2013 and April 2016. Patients were classified as critical or non-critical on the day each test was performed. Adequacy of ordering was defined according to adherence to a decision algorithm implemented in 2014. Total and ionised calcium tests per patient-day of hospitalisation decreased significantly after the algorithm implementation, and duplication of tests (total and ionised calcium measured in the same blood sample) was reduced by 49%. Overall adequacy of ionised calcium determinations increased by 23% (P=0.0001) due to the increase in the adequacy of ionised calcium ordering in non-critical conditions. A decision algorithm can be a useful educational tool to improve the adequacy of the process of ordering serum calcium tests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Accuracy metrics for judging time scale algorithms
NASA Technical Reports Server (NTRS)
Douglas, R. J.; Boulanger, J.-S.; Jacques, C.
1994-01-01
Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.
Near-optimal quantum circuit for Grover's unstructured search using a transverse field
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Rieffel, Eleanor G.; Wang, Zhihui
2017-06-01
Inspired by a class of algorithms proposed by Farhi et al. (arXiv:1411.4028), namely, the quantum approximate optimization algorithm (QAOA), we present a circuit-based quantum algorithm to search for a needle in a haystack, obtaining the same quadratic speedup achieved by Grover's original algorithm. In our algorithm, the problem Hamiltonian (oracle) and a transverse field are applied alternately to the system in a periodic manner. We introduce a technique, based on spin-coherent states, to analyze the composite unitary in a single period. This composite unitary drives a closed transition between two states that have high degrees of overlap with the initial state and the target state, respectively. The transition rate in our algorithm is of order Θ(1/√N), and the overlaps are of order Θ(1), yielding a nearly optimal query complexity of T ≃ (π/(2√2))√N. Our algorithm is a QAOA circuit that demonstrates a quantum advantage with a large number of iterations that is not derived from Trotterization of an adiabatic quantum optimization (AQO) algorithm. It also suggests that the analysis required to understand QAOA circuits involves a very different process from estimating the energy gap of a Hamiltonian in AQO.
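For reference, a plain state-vector simulation of the textbook Grover iteration (not the authors' QAOA-style circuit) exhibits the same quadratic speedup in query count; the problem size and marked index below are arbitrary:

```python
import numpy as np

# Textbook Grover iteration on N = 2^n basis states. The oracle flips the
# sign of the marked state; the diffusion step reflects about the mean.

n = 8                      # qubits
N = 2 ** n
marked = 123               # index of the "needle" (illustrative)

psi = np.full(N, 1 / np.sqrt(N))          # uniform superposition
iterations = int(np.round(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    psi[marked] *= -1                     # oracle: phase flip on the target
    psi = 2 * psi.mean() - psi            # diffusion: inversion about the mean

print(iterations, abs(psi[marked]) ** 2)  # ~O(sqrt(N)) queries, prob ≈ 1
```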
Cohen, Andrew R; Bjornsson, Christopher S; Temple, Sally; Banker, Gary; Roysam, Badrinath
2009-08-01
An algorithmic information-theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG). Time courses of object states are compared using an adaptive information distance measure, aided by a closed-form multidimensional quantization. The notion of meaningful summarization is captured by using the gap statistic to estimate the randomness deficiency from algorithmic statistics. The summary is the clustering result and feature subset that maximize the gap statistic. This approach was validated on four bioimaging applications: 1) It was applied to a synthetic data set containing two populations of cells differing in the rate of growth, for which it correctly identified the two populations and the single feature out of 23 that separated them; 2) it was applied to 59 movies of three types of neuroprosthetic devices being inserted in the brain tissue at three speeds each, for which it correctly identified insertion speed as the primary factor affecting tissue strain; 3) when applied to movies of cultured neural progenitor cells, it correctly distinguished neurons from progenitors without requiring the use of a fixative stain; and 4) when analyzing intracellular molecular transport in cultured neurons undergoing axon specification, it automatically confirmed the role of kinesins in axon specification.
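For readers unfamiliar with the gap statistic, the sketch below implements the standard Tibshirani-style version with k-means, a simpler stand-in for the paper's algorithmic-statistics randomness-deficiency estimate; it assumes scikit-learn is available and recovers k = 2 on two synthetic populations, echoing the first validation case:

```python
import numpy as np
from sklearn.cluster import KMeans

# Standard gap statistic: compare within-cluster dispersion on the data with
# that on uniform reference data; the k maximizing the gap is the estimate.

def gap_statistic(X, k_max=6, n_refs=10, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps = []
    for k in range(1, k_max + 1):
        w = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_
        w_refs = []
        for _ in range(n_refs):
            ref = rng.uniform(lo, hi, size=X.shape)
            w_refs.append(KMeans(n_clusters=k, n_init=10,
                                 random_state=seed).fit(ref).inertia_)
        gaps.append(np.mean(np.log(w_refs)) - np.log(w))
    return np.argmax(gaps) + 1, gaps

# Two well-separated populations should yield an estimated k of 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
print(gap_statistic(X)[0])
```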
Patterns of significant seismic quiescence on the Mexican Pacific coast
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; Rudolf-Navarro, A. H.; Angulo-Brown, F.; Barrera-Ferrer, A. G.
Many authors have proposed that the study of seismicity rates is an appropriate technique for evaluating how close a seismic gap may be to rupture. We designed an algorithm for the identification of patterns of significant seismic quiescence by using the definition of seismic quiescence proposed by Schreider (1990). This algorithm shows the area of quiescence where an earthquake of large magnitude may occur. We have applied our algorithm to the earthquake catalog of the Mexican Pacific coast located between 14 and 21 degrees North latitude and 94 and 106 degrees West longitude, with depths less than or equal to 60 km and magnitude greater than or equal to 4.3, covering January 1965 through December 2014. We found significant patterns of seismic quiescence before the earthquakes of Oaxaca (November 1978, Mw = 7.8), Petatlán (March 1979, Mw = 7.6), Michoacán (September 1985, Mw = 8.0 and Mw = 7.6) and Colima (October 1995, Mw = 8.0). Fortunately, in this century earthquakes of great magnitude have not occurred in Mexico. However, we have identified well-defined seismic quiescences in the Guerrero seismic gap, which are apparently correlated with the occurrence of the silent earthquakes in 2002, 2006 and 2010 recently discovered by GPS technology.
Demonstration of quantum advantage in machine learning
NASA Astrophysics Data System (ADS)
Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.
2017-04-01
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
Reducing assembly complexity of microbial genomes with single-molecule sequencing
USDA-ARS?s Scientific Manuscript database
Genome assembly algorithms cannot fully reconstruct microbial chromosomes from the DNA reads output by first or second-generation sequencing instruments. Therefore, most genomes are left unfinished due to the significant resources required to manually close gaps left in the draft assemblies. Single-...
NASA Astrophysics Data System (ADS)
Kaur, Avneet; Bakhshi, A. K.
2010-04-01
The interest in copolymers stems from the fact that they present interesting electronic and optical properties leading to a variety of technological applications. In order to obtain a suitable copolymer for a specific application, the genetic algorithm (GA) along with the negative factor counting (NFC) method has recently been used. In this paper, we study the effect of a change in the ratio of conduction band discontinuity to valence band discontinuity (ΔEc/ΔEv) on the optimum solution obtained from the GA for model binary copolymers. The effect of varying bandwidths on the optimum GA solution is also investigated. The obtained results show that the optimum solution changes with parameters such as the band discontinuity and the bandwidths of the constituent homopolymers. As the ratio ΔEc/ΔEv increases, the band gap of the optimum solution decreases. With increasing bandwidths of the constituent homopolymers, the optimum solution tends to depend on the component with the higher band gap.
Evolution of recombination rates in a multi-locus, haploid-selection, symmetric-viability model.
Chasnov, J R; Ye, Felix Xiaofeng
2013-02-01
A fast algorithm for computing multi-locus recombination is extended to include a recombination-modifier locus. This algorithm and a linear stability analysis are used to investigate the evolution of recombination rates in a multi-locus, haploid-selection, symmetric-viability model for which stable equilibria have recently been determined. When the starting equilibrium is symmetric with two selected loci, we show analytically that modifier alleles that reduce recombination always invade. When the starting equilibrium is monomorphic, and there is a fixed nonzero recombination rate between the modifier locus and the selected loci, we determine analytical conditions under which a modifier allele can invade. In particular, we show that a gap exists between the recombination rates of modifiers that can invade and the recombination rate that specifies the lower stability boundary of the monomorphic equilibrium. A numerical investigation shows that a similar gap exists in a weakened form when the starting equilibrium is fully polymorphic but asymmetric.
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.
2010-01-15
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach consists of two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.
Assessment of BSRN radiation records for the computation of monthly means
NASA Astrophysics Data System (ADS)
Roesch, A.; Wild, M.; Ohmura, A.; Dutton, E. G.; Long, C. N.; Zhang, T.
2011-02-01
The integrity of the Baseline Surface Radiation Network (BSRN) radiation monthly averages is assessed by investigating the impact on monthly means of the frequency of data gaps caused by missing or discarded high-time-resolution data. The monthly statistics, especially means, are considered to be important and useful values for climate research, model performance evaluations and for assessing the quality of satellite (time- and space-averaged) data products. The study investigates the spread among different algorithms that have been applied for the computation of monthly means from 1-min values. The paper reveals that the computation of monthly means from 1-min observations depends distinctly on the method utilized to account for the missing data. The inter-method difference generally increases with an increasing fraction of missing data. We found that a substantial fraction of the radiation fluxes observed at BSRN sites is either missing or flagged as questionable. The percentage of missing data is 4.4%, 13.0%, and 6.5% for global radiation, direct shortwave radiation, and downwelling longwave radiation, respectively. Most flagged data in the shortwave are due to nighttime instrumental noise and can reasonably be set to zero after correcting for thermal offsets in the daytime data. The study demonstrates that the handling of flagged data clearly impacts monthly mean estimates obtained with different methods. The spread of monthly shortwave fluxes is generally much higher than that of downwelling longwave radiation. Overall, BSRN observations provide sufficient accuracy and completeness for reliable estimates of monthly mean values. However, the value of future data could be further increased by reducing the frequency of data gaps and the number of outliers. It is shown that two independent methods for accounting for the diurnal and seasonal variations in the missing data permit consistent monthly means to within less than 1 W m^-2 in most cases. The authors suggest using a standardized method for the computation of monthly means which addresses diurnal variations in the missing data in order to avoid a mismatch of future published monthly mean radiation fluxes from BSRN. The application of robust statistics would probably lead to less biased results for data records with frequent gaps and/or flagged data and outliers. The currently applied empirical methods should, therefore, be complemented by the development of robust methods.
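A minimal sketch of why the gap-handling method matters (one plausible diurnal-cycle-aware estimator compared against a naive mean, not the exact algorithms assessed in the study; all data below are synthetic):

```python
import numpy as np

# When gaps cluster at particular hours, a naive mean over the surviving
# 1-min values is biased, while averaging within each hour of day first and
# then over the 24 hourly means accounts for the diurnal cycle.

minutes = np.arange(30 * 24 * 60)                    # one 30-day month
hour = (minutes // 60) % 24
flux = np.clip(600 * np.sin(np.pi * (hour - 6) / 12), 0, None)  # toy SW flux

missing = (hour >= 10) & (hour <= 14) & (minutes % 7 == 0)  # daytime gaps
obs = np.where(missing, np.nan, flux)

naive = np.nanmean(obs)                              # biased low here
by_hour = np.array([np.nanmean(obs[hour == h]) for h in range(24)])
diurnal_aware = by_hour.mean()                       # recovers the true mean

print(f"true {flux.mean():.1f}  naive {naive:.1f}  "
      f"diurnal-aware {diurnal_aware:.1f}")
```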
Pseudo-time algorithms for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1986-01-01
A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that for a simple heat equation this is just a renormalization of the time. For a convection-diffusion equation the renormalization is dependent only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm. This permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.
Noniterative estimation of a nonlinear parameter
NASA Technical Reports Server (NTRS)
Bergstroem, A.
1973-01-01
An algorithm is described which solves for the parameters X = (x1, x2, ..., xm) and p in an approximation problem Ax ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may lead to the finding of local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.
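A simpler stand-in for the series-expansion approach (not Bergstroem's method itself): because the problem is linear in X once p is fixed, a dense scan over p with a linear least-squares solve at each value also locates the global minimum without an a priori estimate of p; the model, ranges, and names below are illustrative:

```python
import numpy as np

# The model x1 + x2*exp(-p*t) is linear in (x1, x2) for fixed p, so we solve
# a linear least-squares problem on a grid of p values and keep the global
# minimum of the residual, avoiding the local minima of linearization.

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 100)
data = 1.0 + 2.0 * np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)

def residual(p):
    A = np.column_stack([np.ones_like(t), np.exp(-p * t)])
    x, res, *_ = np.linalg.lstsq(A, data, rcond=None)
    return res[0]                       # sum of squared residuals

grid = np.linspace(0.1, 10, 1000)
p_hat = grid[np.argmin([residual(p) for p in grid])]
print("estimated p ≈", p_hat)           # close to the true value 3.0
```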
Testing the Prey-Trap Hypothesis at Two Wildlife Conservancies in Kenya.
Dupuis-Desormeaux, Marc; Davidson, Zeke; Mwololo, Mary; Kisio, Edwin; Taylor, Sam; MacDonald, Suzanne E
2015-01-01
Protecting an endangered and highly poached species can conflict with providing an open and ecologically connected landscape for coexisting species. In Kenya, about half of the black rhino (Diceros bicornis) live in electrically fenced private conservancies. Purpose-built fence-gaps permit some landscape connectivity for elephant while restricting rhino from escaping. We monitored the usage patterns at these gaps with motion-triggered cameras and found high traffic volumes and predictable patterns of prey movement. The prey-trap hypothesis (PTH) proposes that predators exploit this predictable prey movement. We tested the PTH at two semi-porous reserves using two different methods: a spatial analysis and a temporal analysis. In the spatial analysis, we mapped the location of predation events with GPS, looked for concentrations of kill sites near the gaps, and conducted clustering and hot-spot analyses to determine areas of statistically significant predation clustering. In the temporal analysis, we examined the time lapse between the passage of prey and predator and searched for evidence of active prey seeking and/or predator avoidance. We found no support for the PTH and conclude that the design of the fence-gaps is well suited to promoting connectivity in these types of conservancies.
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, together with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed, namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Computer simulation of a pilot in V/STOL aircraft control loops
NASA Technical Reports Server (NTRS)
Vogt, William G.; Mickle, Marlin H.; Zipf, Mark E.; Kucuk, Senol
1989-01-01
The objective was to develop a computerized adaptive pilot model for the computer model of the research aircraft, the Harrier II AV-8B V/STOL, with special emphasis on propulsion control. Two versions of the adaptive pilot are given. The first, called the Adaptive Control Model (ACM) of a pilot, includes a parameter estimation algorithm for the parameters of the aircraft and an adaptation scheme based on the root locus of the poles of the pilot-controlled aircraft. The second, called the Optimal Control Model (OCM) of the pilot, includes an adaptation algorithm and an optimal control algorithm. These computer simulations were developed as a part of the ongoing research program in pilot model simulation supported by NASA Lewis from April 1, 1985 to August 30, 1986 under NASA Grant NAG 3-606 and from September 1, 1986 through November 30, 1988 under NASA Grant NAG 3-729. Once installed, these pilot models permitted the simulated pilot to close all of the control loops normally closed by a pilot actually manipulating the control variables. The current version has permitted a baseline comparison of various qualitative and quantitative performance indices for propulsion control, the control loops, and the workload on the pilot. Actual data for an aircraft flown by a human pilot, furnished by NASA, were compared to the outputs furnished by the computerized pilot and found to be in favorable agreement.
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.
2016-03-01
Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information and we have shown in a study with 68 participants that 78% of long term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm proves to be a reliable technique to replace the manual step.
NASA Astrophysics Data System (ADS)
Chang, W.; Wang, J.; Marohnic, J.; Kotamarthi, V. R.; Moyer, E. J.
2017-12-01
We use a novel rainstorm identification and tracking algorithm (Chang et al. 2016) to evaluate how using resolved convection improves the fidelity with which high-resolution regional simulations capture precipitation characteristics. The identification and tracking algorithm allocates all precipitation to individual rainstorms, including low-intensity events with complicated features, and allows us to decompose changes or biases in total mean precipitation into their causes: event size, intensity, number, and duration. It permits a lower tracking threshold, so it captures nearly all rainfall, and it improves tracking, so that events that are clearly meteorologically related are followed across lifespans of up to days. We evaluate a series of dynamically downscaled simulations of the summertime United States at 12 and 4 km under different model configurations, and find that resolved convection offers the largest gains in reducing biases in precipitation characteristics, especially in event size. Simulations with parametrized convection produce event sizes 80-220% too large in extent; with resolved convection the bias is reduced to 30%. The identification and tracking algorithm also allows us to demonstrate that the diurnal cycle in rainfall stems not from temporal variation in the production of new events but from diurnal fluctuations in rainfall from existing events. We show further that model errors in the diurnal cycle are best represented as additive offsets that differ by time of day, and again that convection-permitting simulations are most efficient in reducing these additive biases.
Fringes, Stefan; Holzner, Felix
2018-01-01
The behavior of nanoparticles under nanofluidic confinement depends strongly on their distance to the confining walls; however, a measurement in which the gap distance is varied is challenging. Here, we present a versatile setup for investigating the behavior of nanoparticles as a function of the gap distance, which is controlled to the nanometer. The setup is designed as an open system that operates with a small amount of dispersion of ≈20 μL, permits the use of coated and patterned samples, and allows high-numerical-aperture microscopy access. Using the tool, we measure the vertical position (termed height) and the lateral diffusion of 60 nm, charged Au nanospheres as a function of confinement between a glass surface and a polymer surface. Interferometric scattering detection provides an effective particle illumination time of less than 30 μs, which results in a lateral and vertical position detection accuracy of ≈10 nm for diffusing particles. We found the height of the particles to be consistently above that of the gap center, corresponding to a higher charge on the polymer substrate. In terms of diffusion, we found a strong monotonic decay of the diffusion constant with decreasing gap distance. This result cannot be explained by hydrodynamic effects, including the asymmetric vertical position of the particles in the gap. Instead we attribute it to an electroviscous effect. For strong confinement of less than 120 nm gap distance, we detect the onset of subdiffusion, which can be correlated with the motion of the particles along high-gap-distance paths. PMID:29441273
Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing
NASA Technical Reports Server (NTRS)
Kishonio, D.; Heyman, J. S.
1985-01-01
A numerical algorithm is described that enables the correction of energy shadowing during the ultrasonic testing of bulk materials. In the conventional method, an ultrasonic transducer transmits sound waves into a material that is immersed in water so that discontinuities such as defects can be revealed when the waves are reflected and then detected and displayed graphically. Since a defect that lies behind another defect is shadowed in that it receives less energy, the conventional method has a major drawback. The algorithm normalizes the energy of the incoming wave by measuring the energy of the waves reflected off the water/air interface. The algorithm is fast and simple enough to be adopted for real time applications in industry. Images of material defects with the shadowing corrections permit more quantitative interpretation of the material state.
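A toy one-dimensional illustration of the shadowing idea under strong simplifying assumptions (energy-only model, no attenuation; this is a schematic of the principle, not the paper's algorithm): each reflector returns a fraction of the energy that actually reaches it, so dividing each measured echo energy by the energy still available at that depth recovers the true reflectivities.

```python
import numpy as np

# A deep defect behind a strong reflector looks weak because it is shadowed;
# tracking the remaining energy depth by depth undoes the shadowing.

r_true = np.array([0.30, 0.05, 0.30, 0.05])   # true energy reflectivities
E0 = 1.0

# Forward model: measured echo energies with shadowing.
remaining = E0 * np.cumprod(np.concatenate(([1.0], 1 - r_true[:-1])))
measured = r_true * remaining

# Correction: recover r_i sequentially while tracking the remaining energy.
energy, r_est = E0, []
for m in measured:
    r = m / energy
    r_est.append(r)
    energy *= (1 - r)

print(np.round(measured, 4))   # shadowed: the later 0.30 reflector looks weak
print(np.round(r_est, 4))      # corrected: matches r_true
```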
An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems
NASA Astrophysics Data System (ADS)
Hieb, Jeffrey; Graham, James; Guan, Jian
This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.
NASA Technical Reports Server (NTRS)
Charlesworth, Arthur
1990-01-01
The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
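A sketch of a diva-style call in the sense described above, with a random choice standing in for nondeterminism; as the paper's correspondence with associativity suggests, the result is independent of the chosen split points exactly when the combining operation is associative:

```python
import random

# Diva-style divide and conquer: the division point of each slice is chosen
# nondeterministically (here, at random) before any computation, and vector
# components are only accessed on unit-length slices.

def diva(v, lo, hi, combine):
    if hi - lo == 1:
        return v[lo]                       # unit-length slice: access allowed
    mid = random.randint(lo + 1, hi - 1)   # nondeterministic divide
    return combine(diva(v, lo, mid, combine),
                   diva(v, mid, hi, combine))

v = list(range(1, 11))
sums = {diva(v, 0, len(v), lambda a, b: a + b) for _ in range(100)}
print(sums)   # {55}: an associative combine gives the same answer every run
```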
Autonomous proximity operations using machine vision for trajectory control and pose estimation
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Sternberg, Stanley R.
1991-01-01
A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running on an 80386-based personal computer and using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, upon which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether the signature matches that of the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
FFT applications to plane-polar near-field antenna measurements
NASA Technical Reports Server (NTRS)
Gatti, Mark S.; Rahmat-Samii, Yahya
1988-01-01
The four-point bivariate Lagrange interpolation algorithm was applied to near-field antenna data measured in a plane-polar facility. The results were sufficiently accurate to permit the use of the FFT (fast Fourier transform) algorithm to calculate the far-field patterns of the antenna. Good agreement was obtained between the far-field patterns as calculated by the Jacobi-Bessel and the FFT algorithms. The significant advantage in using the FFT is in the calculation of the principal plane cuts, which may be made very quickly. Also, the application of the FFT algorithm directly to the near-field data was used to perform surface holographic diagnosis of a reflector antenna. The effects due to the focusing of the emergent beam from the reflector, as well as the effects of the information in the wide-angle regions, are shown. The use of the plane-polar near-field antenna test range has therefore been expanded to include these useful FFT applications.
Attitude identification for SCOLE using two infrared cameras
NASA Technical Reports Server (NTRS)
Shenhar, Joram
1991-01-01
An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
An Aircraft Separation Algorithm with Feedback and Perturbation
NASA Technical Reports Server (NTRS)
White, Allan L.
2010-01-01
A separation algorithm is a set of rules that tell aircraft how to maneuver in order to maintain a minimum distance between them. This paper investigates demonstrating, by means of simulation, that separation algorithms satisfy the FAA requirement on the occurrence of incidents. Any demonstration that a separation algorithm, or any other aspect of flight, satisfies the FAA requirement is a challenge because of the stringent nature of the requirement and the complexity of airspace operations. The paper begins with a probability and statistical analysis of both the FAA requirement and a Monte Carlo demonstration of meeting it. It considers the geometry of maintaining separation when one plane must change its flight path. It then develops a simple feedback control law that guides the planes on their paths. The presence of feedback control permits the introduction of perturbations, and the stochastic nature of the chosen perturbation is examined. The simulation program is described. This paper is an early effort in the realistic demonstration of a stringent requirement. Much remains to be done.
A Two-Dimensional Linear Bicharacteristic FDTD Method
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics. The LBS has previously been extended to treat lossy materials for one-dimensional problems. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to include the Perfectly Matched Layer boundary condition with no added storage or complexity. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, and it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems. This paper extends the LBS to the two-dimensional case. Results are presented for point source radiation problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
NASA Astrophysics Data System (ADS)
Gascoin, S.; Grizonnet, M.; Baba, W. M.; Hagolle, O.; Fayad, A.; Mermoz, S.; Kinnard, C.; Fatima, K.; Jarlan, L.; Hanich, L.
2017-12-01
Current spaceborne sensors do not allow retrieving the snow water equivalent (SWE) in mountain regions, "the most important unsolved problem in snow hydrology" (Dozier, 2016). While NASA is operating an airborne mission to survey the SWE in the western USA, elsewhere snow scientists and water managers do not have access to routine SWE measurements at the scale of a mountain range. In this presentation we suggest that the advent of the Copernicus Earth Observation programme opens new perspectives to address this issue in mountain regions worldwide. The Sentinel-2 mission will provide global-scale multispectral observations at 20 m resolution every 5 days (cloud permitting). The Sentinel-1 mission is already imaging the global land surface with a C-band radar at 10 m resolution every 6 days. These observations are unprecedented in terms of spatial and temporal resolution. However, the nature of the observation (radiometry, wavelength) is in continuity with previous and ongoing missions. As a result, it is relatively straightforward to re-use algorithms that were developed by the remote sensing community over the last decades. For instance, Sentinel-2 data can be used to derive maps of the snow cover extent from the normalized difference snow index, which was initially proposed for Landsat. In addition, the 5-day repeat cycle allows the application of gap-filling algorithms, which were developed for MODIS based on the temporal dimension. The Sentinel-1 data can be used to detect the wet snow cover and track melting areas, as proposed for ERS in the early 1990s. Finally, we show an example where Sentinel-2-like data improved the simulation of the SWE in the data-scarce region of the High Atlas in Morocco through assimilation in a distributed snowpack model. We encourage snow scientists to embrace Sentinel-1 and Sentinel-2 data to enhance our knowledge of snow cover dynamics in mountain regions.
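As an illustration of the Landsat-heritage snow test mentioned above, a normalized difference snow index (NDSI) mask can be computed from Sentinel-2-like green and shortwave-infrared reflectances; the 0.4 threshold is a commonly used value, not necessarily that of any particular operational product, and the arrays below are toy data:

```python
import numpy as np

# NDSI = (green - SWIR) / (green + SWIR): snow reflects strongly in the
# visible but absorbs in the SWIR, so snow pixels have a high NDSI.

def snow_mask(green, swir, threshold=0.4):
    ndsi = (green - swir) / np.maximum(green + swir, 1e-6)  # avoid /0
    return ndsi > threshold

green = np.array([[0.70, 0.20], [0.65, 0.10]])   # toy reflectances (band 3)
swir  = np.array([[0.15, 0.18], [0.10, 0.09]])   # toy reflectances (band 11)
print(snow_mask(green, swir))   # True where green is high and SWIR is low
```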
Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.
Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence
2012-08-29
Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real numbers, mainly based on differential equations and chemical kinetics formalism; and (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative modeling approach permits a simple and less detailed description of the biological system and efficiently identifies stable states, but it is inconvenient for describing the transient kinetics leading to these states; in this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe more accurately the dynamical behavior of biological processes, as it follows the evolution of concentrations or activities of chemical species as a function of time, but it requires a large amount of parameter information that is difficult to find in the literature. Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied on a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be translated into a set of ordinary differential equations on probability distributions. We developed a C++ software, MaBoSS, that is able to simulate such a system by applying kinetic Monte-Carlo (the Gillespie algorithm) on the Boolean state space. This software, parallelized and optimized, computes the temporal evolution of probability distributions and estimates stationary distributions. Applications of the Boolean kinetic Monte-Carlo are demonstrated for three qualitative models: a toy model, a published model of p53/Mdm2 interaction and a published model of the mammalian cell cycle. Our approach allows us to describe kinetic phenomena which were difficult to handle in the original models. In particular, transient effects are represented by time-dependent probability distributions, interpretable in terms of cell populations.
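A minimal sketch of the core idea, far simpler than MaBoSS itself: a Gillespie simulation on a Boolean state space with node-wise transition rates. The two-node network and its rates are invented purely for illustration.

```python
import random

# Each node has a flip rate that may depend on the current Boolean state;
# waiting times are exponential in the total rate, and one node flips per
# event (kinetic Monte-Carlo on the Boolean state space).

def rates(state):
    a, b = state
    return {
        0: 1.0 if (not a and b) else 0.1,   # toy rule: A activated by B
        1: 0.5 if not b else 0.2,           # toy rule: B turns on, decays
    }

def gillespie(state, t_end, rng=random.Random(0)):
    t, traj = 0.0, [(0.0, tuple(state))]
    while True:
        r = rates(state)
        total = sum(r.values())
        t += rng.expovariate(total)         # exponential waiting time
        if t >= t_end:
            return traj
        u, acc = rng.random() * total, 0.0  # pick a node ∝ its rate
        for node, rate in r.items():
            acc += rate
            if u <= acc:
                state[node] = not state[node]
                break
        traj.append((t, tuple(state)))

print(gillespie([False, False], 10.0)[:5])  # first few (time, state) events
```

Averaging many such trajectories yields the time-dependent probability distributions over Boolean states that the paper interprets in terms of cell populations.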
A structural and functional comparison of gap junction channels composed of connexins and innexins.
Skerrett, I Martha; Williams, Jamal B
2017-05-01
Methods such as electron microscopy and electrophysiology led to the understanding that gap junctions were dense arrays of channels connecting the intracellular environments within almost all animal tissues. The characteristics of gap junctions were remarkably similar in preparations from phylogenetically diverse animals such as cnidarians and chordates. Although few studies directly compared them, minor differences were noted between gap junctions of vertebrates and invertebrates. For instance, a slightly wider gap was noted between cells of invertebrates and the spacing between invertebrate channels was generally greater. Connexins were identified as the structural component of vertebrate junctions in the 1980s and innexins as the structural component of pre-chordate junctions in the 1990s. Despite a lack of similarity in gene sequence, connexins and innexins are remarkably similar. Innexins and connexins have the same membrane topology and form intercellular channels that play a variety of tissue- and temporally specific roles. Both protein types oligomerize to form large aqueous channels that allow the passage of ions and small metabolites and are regulated by factors such as pH, calcium, and voltage. Much more is currently known about the structure, function, and structure-function relationships of connexins. However, the innexin field is expanding. Greater knowledge of innexin channels will permit more detailed comparisons with their connexin-based counterparts, and provide insight into the ubiquitous yet specific roles of gap junctions.
Anderson, Louis W.; Fitzsimmons, William A.
1978-01-01
A pulsed gas laser is constituted by Blumlein circuits wherein spaced metal plates function both as capacitors and as transmission lines coupling high-frequency oscillations to a gas-filled laser tube. The tube itself is formed by spaced metal side walls which function as connections to the electrodes to provide for a high-frequency, high-voltage discharge in the tube to cause the gas to lase. Also shown is a spark gap switch having structural features permitting a long life.
Data-Driven Learning of Total and Local Energies in Elemental Boron
NASA Astrophysics Data System (ADS)
Deringer, Volker L.; Pickard, Chris J.; Csányi, Gábor
2018-04-01
The allotropes of boron continue to challenge structural elucidation and solid-state theory. Here we use machine learning combined with random structure searching (RSS) algorithms to systematically construct an interatomic potential for boron. Starting from ensembles of randomized atomic configurations, we use alternating single-point quantum-mechanical energy and force computations, Gaussian approximation potential (GAP) fitting, and GAP-driven RSS to iteratively generate a representation of the element's potential-energy surface. Beyond the total energies of the very different boron allotropes, our model readily provides atom-resolved, local energies and thus deepened insight into the frustrated β -rhombohedral boron structure. Our results open the door for the efficient and automated generation of GAPs, and other machine-learning-based interatomic potentials, and suggest their usefulness as a tool for materials discovery.
Spatial Data Structures for Robotic Vehicle Route Planning
1988-12-01
The goal will be realized in an intelligent Spatial Data Structure Development System (SDSDS) intended for use by Terrain Analysis applications... hiding from the user the details of representation and permitting the infrastructure itself to decide which representations will be most efficient or effective... to intelligently predict performance of algorithmic sequences and thereby optimize the application (within the accuracy of the prediction models).
An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1989-01-01
An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.
NASA Astrophysics Data System (ADS)
Ji, Liang-Bo; Chen, Fang
2017-07-01
Numerical simulation and intelligent optimization technology were adopted for the rolling and extrusion of zincked sheet. Using response surface methodology (RSM), a genetic algorithm (GA), and data processing technology, an efficient optimization of the process parameters for rolling of zincked sheet was investigated. The influence of roller gap, rolling speed, and friction factor on the reduction rate and plate shortening rate was analyzed first. Then a predictive response surface model for the comprehensive quality index of the part was created using RSM. Simulated and predicted values were compared. The optimal rolling process parameters were then obtained with the genetic algorithm. The optimized parameters were verified, showing the approach to be feasible and effective.
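A compact sketch of the RSM-plus-GA pipeline described above; the process function, parameter ranges, and GA settings are all illustrative stand-ins, not the paper's model:

```python
import numpy as np

# Fit a quadratic response surface to a handful of (simulated) process runs,
# then maximize the fitted quality index with a simple genetic algorithm.

rng = np.random.default_rng(0)

def quality(x):                      # stand-in for the FE simulation output
    gap, speed, friction = x.T
    return -(gap - 0.4) ** 2 - (speed - 0.6) ** 2 - (friction - 0.5) ** 2

def features(x):                     # full quadratic model in 3 variables
    g, s, f = x.T
    return np.column_stack([np.ones(len(x)), g, s, f,
                            g*g, s*s, f*f, g*s, g*f, s*f])

runs = rng.uniform(0, 1, (30, 3))                       # design points
beta = np.linalg.lstsq(features(runs), quality(runs), rcond=None)[0]
surface = lambda x: features(x) @ beta                  # fitted RSM

pop = rng.uniform(0, 1, (40, 3))                        # GA on the surface
for _ in range(60):
    fit = surface(pop)
    parents = pop[np.argsort(fit)[-20:]]                # selection
    kids = (parents[rng.integers(0, 20, 20)] +
            parents[rng.integers(0, 20, 20)]) / 2       # crossover
    kids += rng.normal(0, 0.05, kids.shape)             # mutation
    pop = np.clip(np.vstack([parents, kids]), 0, 1)

print(pop[np.argmax(surface(pop))])   # ≈ [0.4, 0.6, 0.5] for this toy model
```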
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2005-10-01
A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.
A 3/2-Approximation Algorithm for Multiple Depot Multiple Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Xu, Zhou; Rodrigues, Brian
As an important extension of the classical traveling salesman problem (TSP), the multiple depot multiple traveling salesman problem (MDMTSP) is to minimize the total length of a collection of tours for multiple vehicles to serve all the customers, where each vehicle must start from or stay at its distinct depot. Due to the gap between the existing best approximation ratios for the TSP and for the MDMTSP in the literature, which are 3/2 and 2, respectively, it is an open question whether or not a 3/2-approximation algorithm exists for the MDMTSP. We have partially addressed this question by developing a 3/2-approximation algorithm, which runs in polynomial time when the number of depots is a constant.
Electrical tuning of three-dimensional photonic crystals using polymer dispersed liquid crystals
NASA Astrophysics Data System (ADS)
McPhail, Dennis; Straub, Martin; Gu, Min
2005-01-01
Electrically tunable three-dimensional photonic crystals with a tunable wavelength range of over 70 nm for stop gaps between 3 and 4 μm have been generated in a liquid crystal-polymer composite. The photonic crystals were fabricated by femtosecond-laser direct writing of void channels in an inverse woodpile configuration with 20 layers, providing an extinction of infrared light transmission of 70% in the stacking direction. Stable structures could be manufactured up to a liquid crystal concentration of 24%. Applying a direct voltage of several hundred volts in the stacking direction of the photonic crystal changes the alignment of the liquid crystal directors and hence the average refractive index of the structure. This mechanism permits the direct tuning of the photonic stop gap.
Lining seam elimination algorithm and surface crack detection in concrete tunnel lining
NASA Astrophysics Data System (ADS)
Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling
2016-11-01
Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within overlapping grid cells are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected based on the PSM by the accelerated percolation algorithm, so that the fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seams and performing percolation denoising. Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect real surface cracks. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seam.
Zomer, Ella; Osborn, David; Nazareth, Irwin; Blackburn, Ruth; Burton, Alexandra; Hardoon, Sarah; Holt, Richard Ian Gregory; King, Michael; Marston, Louise; Morris, Stephen; Omar, Rumana; Petersen, Irene; Walters, Kate; Hunter, Rachael Maree
2017-09-05
To determine the cost-effectiveness of two bespoke severe mental illness (SMI)-specific risk algorithms compared with standard risk algorithms for primary cardiovascular disease (CVD) prevention in those with SMI. Primary care setting in the UK. The analysis was from the National Health Service perspective. 1000 individuals with SMI from The Health Improvement Network Database, aged 30-74 years and without existing CVD, populated the model. Four cardiovascular risk algorithms were assessed: (1) general population lipid, (2) general population body mass index (BMI), (3) SMI-specific lipid and (4) SMI-specific BMI, compared against no algorithm. At baseline, each cardiovascular risk algorithm was applied, and those considered high risk (>10%) were assumed to be prescribed statin therapy while others received usual care. Quality-adjusted life years (QALYs) and costs were accrued for each algorithm including no algorithm, and cost-effectiveness was calculated using the net monetary benefit (NMB) approach. Deterministic and probabilistic sensitivity analyses were performed to test assumptions made and uncertainty around parameter estimates. The SMI-specific BMI algorithm had the highest NMB, resulting in 15 additional QALYs and a cost saving of approximately £53 000 per 1000 patients with SMI over 10 years, followed by the general population lipid algorithm (13 additional QALYs and a cost saving of £46 000). The general population lipid and SMI-specific BMI algorithms performed equally well. The ease and acceptability of use of an SMI-specific BMI algorithm (blood tests not required) makes it an attractive algorithm to implement in clinical settings.
Lower bound on the time complexity of local adiabatic evolution
NASA Astrophysics Data System (ADS)
Chen, Zhenghao; Koh, Pang Wei; Zhao, Yan
2006-11-01
The adiabatic theorem of quantum physics has been, in recent times, utilized in the design of local search quantum algorithms, and has been proven to be equivalent to standard quantum computation, that is, the use of unitary operators [D. Aharonov in Proceedings of the 45th Annual Symposium on the Foundations of Computer Science, 2004, Rome, Italy (IEEE Computer Society Press, New York, 2004), pp. 42-51]. Hence, the study of the time complexity of adiabatic evolution algorithms gives insight into the computational power of quantum algorithms. In this paper, we present two different approaches of evaluating the time complexity for local adiabatic evolution using time-independent parameters, thus providing effective tests (not requiring the evaluation of the entire time-dependent gap function) for the time complexity of newly developed algorithms. We further illustrate our tests by displaying results from the numerical simulation of some problems, viz. specially modified instances of the Hamming weight problem.
A simplified analytical random walk model for proton dose calculation
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution takes the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patient cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
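A quick numerical check of the distance-squared law quoted above (toy numbers, not clinical values): for small deflections, a proton leaving the source with angle θ lands at x ≈ z·θ at depth z, so the lateral variance of the fluence grows as z² times the angular variance.

```python
import numpy as np

# Monte Carlo check: sample angular deflections and verify var(x) ≈ z²·σ_θ².

rng = np.random.default_rng(0)
sigma_theta = 0.01                      # rad, toy angular spread
theta = rng.normal(0.0, sigma_theta, 100_000)

for z in (50.0, 100.0, 200.0):          # depths in mm (illustrative)
    x = z * np.tan(theta)               # lateral position at depth z
    print(z, x.var(), (z * sigma_theta) ** 2)   # measured vs z²·σ_θ²
```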
Read-across is a popular data gap filling technique within category and analogue approaches for regulatory purposes. Acceptance of read-across remains an ongoing challenge with several efforts underway for identifying and addressing uncertainties. Here we demonstrate an algorithm...
Implications of the Value of Hydrologic Information to Reservoir Operations--Learning from the Past
ERIC Educational Resources Information Center
Hejazi, Mohamad Issa
2009-01-01
Closing the gap between theoretical reservoir operation and the real-world implementation remains a challenge in contemporary reservoir operations. Past research has focused on optimization algorithms and establishing optimal policies for reservoir operations. In this research, we attempt to understand operators' release decisions by investigating…
An automatic method to detect and track the glottal gap from high speed videoendoscopic images.
Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés
2015-10-29
The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. However, such analysis depends on a prior accurate identification of the glottal gap, which is the most challenging step for a further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest (ROI) that is adapted over time, and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure to synthesize different videokymograms (VKGs) is also proposed. Thanks to the ROI implementation, the technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when there is inappropriate closure of the vocal folds. The novelty of the proposed algorithm lies in the use of temporal information to identify an adaptive ROI and in the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure to synthesize multiline VKGs by identifying the main glottal axis is developed.
An algebraic algorithm for nonuniformity correction in focal-plane arrays.
Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C
2002-09-01
A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
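The algebraic idea can be shown in a simplified 1-D setting: with a known subpixel shift between two frames and a linear-interpolation motion model, differencing the frames cancels the scene and exposes differences between neighboring biases, which integrate to the bias pattern up to a constant. The sketch below is my illustration of that principle, not the published two-dimensional algorithm; all names and the synthetic data are hypothetical.

    import numpy as np

    # Simplified 1-D illustration: frame_a(i) = scene(i) + bias(i), and
    # frame_b is the scene shifted by a known subpixel amount d (0 < d < 1)
    # plus the same bias.  Under linear interpolation the scene cancels,
    # leaving residual(i) = d * (bias(i) - bias(i-1)).
    def estimate_bias(frame_a, frame_b, d):
        residual = frame_b[1:] - (1 - d) * frame_a[1:] - d * frame_a[:-1]
        diffs = residual / d
        return np.concatenate([[0.0], np.cumsum(diffs)])   # bias(0) fixed to 0

    rng = np.random.default_rng(0)
    scene = np.cumsum(rng.normal(size=256))        # smooth synthetic scene
    bias = rng.normal(scale=0.5, size=256)         # fixed-pattern bias
    d = 0.4
    shifted = (1 - d) * scene + d * np.roll(scene, 1)   # scene moved by d pixels
    frame_a, frame_b = scene + bias, shifted + bias
    est = estimate_bias(frame_a, frame_b, d)
    # est reproduces bias - bias[0] up to floating-point error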
Mixed-initiative control of intelligent systems
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1987-01-01
Mixed-initiative user interfaces provide a means by which a human operator and an intelligent system may collectively share the task of deciding what to do next. Such interfaces are important to the effective utilization of real-time expert systems as assistants in the execution of critical tasks. Presented here is the Incremental Inference algorithm, a symbolic reasoning mechanism based on propositional logic and suited to the construction of mixed-initiative interfaces. The algorithm is similar in some respects to the Truth Maintenance System, but replaces the notion of 'justifications' with a notion of recency, allowing newer values to override older values yet permitting various interested parties to refresh these values as they become older and thus more vulnerable to change. A simple example is given of the use of the Incremental Inference algorithm, along with an overview of the integration of this mechanism within the SPECTRUM expert system for geological interpretation of imaging spectrometer data.
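The recency notion can be sketched in a few lines. The class below is my minimal illustration of the idea (timestamped values where newer assertions override older ones and any party may refresh a value it still believes), not the original Incremental Inference code; all names are hypothetical.

    import time

    # Minimal sketch of the recency idea (hypothetical names, not the
    # original implementation): each proposition carries a timestamp; a
    # newer write overrides an older one, and a "refresh" renews the
    # timestamp of a value a party still believes, without changing it.
    class RecencyStore:
        def __init__(self):
            self._facts = {}   # name -> (value, timestamp)

        def assert_value(self, name, value, stamp=None):
            stamp = time.time() if stamp is None else stamp
            current = self._facts.get(name)
            if current is None or stamp >= current[1]:
                self._facts[name] = (value, stamp)     # newer overrides older

        def refresh(self, name):
            value, _ = self._facts[name]
            self._facts[name] = (value, time.time())   # renew, value unchanged

        def get(self, name):
            return self._facts[name][0]

    store = RecencyStore()
    store.assert_value("valve_open", True)
    store.assert_value("valve_open", False)   # newer value wins
    store.refresh("valve_open")               # operator re-confirms it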
Extreme sub-threshold swing in tunnelling relays
NASA Astrophysics Data System (ADS)
AbdelGhany, M.; Szkopek, T.
2014-01-01
We propose and analyze the theory of the tunnelling relay, a nanoscale active device in which the tunnelling current is modulated by electromechanical actuation of a suspended membrane above a fixed electrode. The tunnelling current is modulated exponentially with the vacuum gap length, permitting an extreme sub-threshold swing of ~10 mV/decade, breaking the thermionic limit. The predicted performance suggests that a significant reduction in dynamic energy consumption over conventional field-effect transistors is physically achievable.
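The exponential gap dependence can be made concrete with a back-of-envelope sketch. Assuming the current follows I = I0*exp(-2*kappa*gap) and that actuation closes the gap roughly linearly with gate voltage, gap(V) = gap0 - alpha*V, the swing is S = ln(10)/(2*kappa*alpha). The parameter values below are purely illustrative, chosen to land near the quoted figure; they are not taken from the paper.

    import numpy as np

    # Back-of-envelope sketch (illustrative parameters, not from the paper):
    # with I = I0*exp(-2*kappa*gap) and gap(V) = gap0 - alpha*V, the
    # sub-threshold swing S = dV/dlog10(I) = ln(10)/(2*kappa*alpha) has no
    # thermionic floor.
    kappa = 1.0e10     # tunnelling decay constant in vacuum, 1/m (assumed)
    alpha = 11.5e-9    # gap change per volt, m/V (assumed)
    swing = np.log(10) / (2 * kappa * alpha)
    print(f"sub-threshold swing ~ {swing*1e3:.1f} mV/decade")   # ~10 mV/decade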
Saturation: An efficient iteration strategy for symbolic state-space generation
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Luettgen, Gerald; Siminiceanu, Radu; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
This paper presents a novel algorithm for generating state spaces of asynchronous systems using Multi-valued Decision Diagrams. In contrast to related work, the next-state function of a system is not encoded as a single Boolean function, but as cross-products of integer functions. This permits the application of various iteration strategies to build a system's state space. In particular, this paper introduces a new, elegant strategy, called saturation, and implements it in the tool SMART. On top of usually performing several orders of magnitude faster than existing BDD-based state-space generators, the algorithm's required peak memory is often close to the final memory needed for storing the overall state space.
Path planning algorithms for assembly sequence planning. [in robot kinematics
NASA Technical Reports Server (NTRS)
Krishnan, S. S.; Sanderson, Arthur C.
1991-01-01
Planning for manipulation in complex environments often requires reasoning about the geometric and mechanical constraints which are posed by the task. In planning assembly operations, the automatic generation of operations sequences depends on the geometric feasibility of paths which permit parts to be joined into subassemblies. Feasible locations and collision-free paths must be present for part motions, robot and grasping motions, and fixtures. This paper describes an approach to reasoning about the feasibility of straight-line paths among three-dimensional polyhedral parts using an algebra of polyhedral cones. A second method recasts the feasibility conditions as constraints in a nonlinear optimization framework. Both algorithms have been implemented and results are presented.
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that permits enhancing the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
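The two objective metrics used in the evaluation are standard and readily computed, for instance with scikit-image, as in this generic illustration (the synthetic frames are stand-ins; this is not the authors' code):

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Generic computation of the two quality metrics named in the abstract
    # (stand-in data; not the authors' evaluation pipeline).
    rng = np.random.default_rng(1)
    reference = rng.random((240, 320))                     # stand-in frame
    degraded = np.clip(reference + rng.normal(scale=0.05, size=reference.shape), 0, 1)

    psnr = peak_signal_noise_ratio(reference, degraded, data_range=1.0)
    ssim = structural_similarity(reference, degraded, data_range=1.0)
    print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")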
Optimal space communications techniques. [discussion of video signals and delta modulation
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
The encoding of video signals using the Song Adaptive Delta Modulator (Song ADM) is discussed. The video signals are characterized as a sequence of pulses having arbitrary height and width. Although the ADM is suited to tracking signals having fast rise times, it was found that the DM algorithm (which permits an exponential rise for estimating an input step) results in a large overshoot and an underdamped response to the step. An overshoot suppression algorithm which significantly reduces the ringing while not affecting the rise time is presented, along with formulas for the rise time and the settling time. Channel errors and their effect on the DM-encoded bit stream were investigated.
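A generic adaptive delta modulator makes the overshoot/ringing trade-off easy to see. The sketch below is a simplified stand-in, not the Song ADM itself: the step size doubles while successive output bits agree (giving the exponential rise) and collapses to the minimum step on disagreement, a crude overshoot-suppression heuristic.

    import numpy as np

    # Generic adaptive delta modulator (simplified stand-in for the Song
    # ADM; the step-size rule and reset heuristic are illustrative only).
    def adm_encode(signal, step_min=0.01, step_max=1.0):
        estimate, step, prev_bit = 0.0, step_min, 0
        bits, track = [], []
        for sample in signal:
            bit = 1 if sample >= estimate else -1
            # double step while bits agree (fast rise); reset on sign change
            step = min(step * 2, step_max) if bit == prev_bit else step_min
            estimate += bit * step
            prev_bit = bit
            bits.append(bit)
            track.append(estimate)
        return np.array(bits), np.array(track)

    t = np.arange(200)
    pulse = np.where((t > 50) & (t < 150), 1.0, 0.0)   # arbitrary test pulse
    bits, reconstructed = adm_encode(pulse)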
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Durisen, Richard H.; Nordlund, Åke; Lord, Jesse
2007-08-01
Recent three-dimensional radiative hydrodynamics simulations of protoplanetary disks report disparate disk behaviors, and these differences involve the importance of convection to disk cooling, the dependence of disk cooling on metallicity, and the stability of disks against fragmentation and clump formation. To guarantee trustworthy results, a radiative physics algorithm must demonstrate the capability to handle both the high and low optical depth regimes. We develop a test suite that can be used to demonstrate an algorithm's ability to relax to known analytic flux and temperature distributions, to follow a contracting slab, and to inhibit or permit convection appropriately. We then show that the radiative algorithm employed by Mejía and Boley et al. and the algorithm employed by Cai et al. pass these tests with reasonable accuracy. In addition, we discuss a new algorithm that couples flux-limited diffusion with vertical rays, we apply the test suite, and we discuss the results of evolving the Boley et al. disk with this new routine. Although the outcome is significantly different in detail with the new algorithm, we obtain the same qualitative answers. Our disk does not cool rapidly by convection, and it is stable to fragmentation. We find an effective α ~ 10^-2. In addition, transport is dominated by low-order modes.
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
NASA Technical Reports Server (NTRS)
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
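The style of optimization involved can be illustrated with a textbook ADMM iteration for an L1-regularized least-squares problem (sparsity in the source domain). This is a generic sketch, not the SISSY implementation, which adds a structured-sparsity (variation) term and automatic thresholding on top.

    import numpy as np

    # Textbook ADMM for min_x 0.5*||A x - b||^2 + lam*||x||_1 (generic
    # sketch; SISSY adds a structured-sparsity term to this formulation).
    def admm_lasso(A, b, lam, rho=1.0, iters=200):
        n = A.shape[1]
        x = z = u = np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        chol = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once
        for _ in range(iters):
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))
            z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
            u = u + x - z                                   # dual update
        return z

    rng = np.random.default_rng(2)
    A = rng.normal(size=(60, 120))
    x_true = np.zeros(120); x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.normal(size=60)
    x_hat = admm_lasso(A, b, lam=0.1)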
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
The Molecular Basis of Rectal Cancer
Shiller, Michelle; Boostrom, Sarah
2015-01-01
The majority of rectal carcinomas are sporadic in nature, and relevant testing for driver mutations to guide therapy is important. A thorough family history is necessary and helpful in elucidating a potential hereditary predilection for a patient's carcinoma. The adequate diagnosis of a heritable tendency toward colorectal carcinoma alters the management of a patient's disease and permits the implementation of various surveillance algorithms as preventive measures. PMID:25733974
NASA Astrophysics Data System (ADS)
Kulikova, N. V.; Chepurova, V. M.
2009-10-01
So far we have investigated the nonperturbative dynamics of meteoroid complexes. Numerical integration of the differential equations of motion in the N-body problem (N = 2-6) by the Everhart algorithm, together with the introduction of intermediate hyperbolic orbits built on the basis of the generalized problem of two fixed centers, permits some gravitational perturbations to be taken into account.
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
Complexity of possibly gapped histogram and analysis of histogram.
Fushing, Hsieh; Roy, Tania
2018-02-01
We demonstrate that gaps and distributional patterns embedded within real-valued measurements are inseparable biological and mechanistic information contents of the system. Such patterns are discovered through a data-driven, possibly gapped histogram, which further leads to the geometry-based analysis of histogram (ANOHT). Constructing a possibly gapped histogram is a complex problem of statistical mechanics, because the ensemble of candidate histograms is captured by a two-layer Ising model. This construction is also a distinctive problem of Information Theory from the perspective of data compression via uniformity. By defining a Hamiltonian (or energy) as the sum of the total coding lengths of boundaries and the total decoding errors within bins, the problem of computing the minimum-energy macroscopic states is, surprisingly, resolved by applying the hierarchical clustering algorithm. Thus, a possibly gapped histogram corresponds to a macro-state. The first phase of ANOHT is then developed for simultaneous comparison of multiple treatments, while the second phase is developed, based on classical empirical process theory, for a tree geometry that can check the authenticity of branches of the treatment tree. The well-known Iris data are used to illustrate our technical developments. Also, a large baseball pitching dataset and a heavily right-censored divorce dataset are analysed to showcase the existential gaps and the utilities of ANOHT.
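The key computational step (gapped binning via hierarchical clustering) can be imitated in a few lines. The sketch below is a simplification with an arbitrary cut height; the paper instead selects the partition by minimizing the coding-length Hamiltonian.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Simplified illustration: single-linkage clustering of 1-D data gives
    # a possibly gapped binning, split wherever the spacing exceeds a cut
    # height (the threshold here is arbitrary; the paper chooses the
    # partition that minimizes its coding-length Hamiltonian).
    rng = np.random.default_rng(8)
    values = np.concatenate([rng.normal(0, 1.0, 200),
                             rng.normal(8, 0.5, 100)])     # gapped data
    Z = linkage(values.reshape(-1, 1), method="single")
    labels = fcluster(Z, t=1.5, criterion="distance")
    for k in np.unique(labels):
        chunk = values[labels == k]
        print(f"bin {k}: [{chunk.min():.2f}, {chunk.max():.2f}], n={len(chunk)}")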
Direct band gap silicon crystals predicted by an inverse design method
NASA Astrophysics Data System (ADS)
Oh, Young Jun; Lee, In-Ho; Lee, Jooyoung; Kim, Sunghyun; Chang, Kee Joo
2015-03-01
Cubic diamond silicon has an indirect band gap and does not absorb or emit light as efficiently as other semiconductors with direct band gaps. Thus, searching for Si crystals with direct band gaps around 1.3 eV is important for realizing efficient thin-film solar cells. In this work, we report various crystalline silicon allotropes with direct and quasi-direct band gaps, predicted by an inverse design method that combines a conformation space annealing algorithm for global optimization with first-principles density functional calculations. The predicted allotropes lie less than 0.3 eV per atom above the diamond structure in energy and exhibit good lattice matches with it. The structural stability is examined by performing finite-temperature ab initio molecular dynamics simulations and calculating the phonon spectra. The absorption spectra are obtained by solving the Bethe-Salpeter equation together with the quasiparticle G0W0 approximation. For several allotropes with band gaps around 1 eV, photovoltaic efficiencies are comparable to those of the best-known photovoltaic absorbers such as CuInSe2. This work is supported by the National Research Foundation of Korea (2005-0093845 and 2008-0061987), Samsung Science and Technology Foundation (SSTF-BA1401-08), KIAS Center for Advanced Computation, and KISTI (KSC-2013-C2-040).
A capacitated vehicle routing problem with order available time in e-commerce industry
NASA Astrophysics Data System (ADS)
Liu, Ling; Li, Kunpeng; Liu, Zhixue
2017-03-01
In this article, a variant of the well-known capacitated vehicle routing problem (CVRP) called the capacitated vehicle routing problem with order available time (CVRPOAT) is considered, which is observed in the operations of the current e-commerce industry. In this problem, the orders are not available for delivery at the beginning of the planning period. CVRPOAT takes all the assumptions of CVRP, except the order available time, which is determined by the precedent order picking and packing stage in the warehouse of the online grocer. The objective is to minimize the sum of vehicle completion times. An efficient tabu search algorithm is presented to tackle the problem. Moreover, a Lagrangian relaxation algorithm is developed to obtain the lower bounds of reasonably sized problems. Based on the test instances derived from benchmark data, the proposed tabu search algorithm is compared with a published related genetic algorithm, as well as the derived lower bounds. Also, the tabu search algorithm is compared with the current operation strategy of the online grocer. Computational results indicate that the gap between the lower bounds and the results of the tabu search algorithm is small and the tabu search algorithm is superior to the genetic algorithm. Moreover, the CVRPOAT formulation together with the tabu search algorithm performs much better than the current operation strategy of the online grocer.
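A bare-bones tabu search over a customer permutation shows the mechanics used here: swap-move neighborhood, a tabu list with fixed tenure, and an aspiration rule. This is a generic skeleton with a stand-in cost (plain route length); the paper's objective sums vehicle completion times under capacity and order-available-time constraints, which the sketch does not model.

    import itertools, random

    # Generic tabu-search skeleton with swap moves (stand-in cost function;
    # not the paper's CVRPOAT objective).
    def tabu_search(cost, n, iters=300, tenure=20):
        sol = list(range(n)); random.shuffle(sol)
        best, best_cost = sol[:], cost(sol)
        tabu = {}                                  # move -> iteration it expires
        for it in range(iters):
            moves = []
            for i, j in itertools.combinations(range(n), 2):
                cand = sol[:]; cand[i], cand[j] = cand[j], cand[i]
                c = cost(cand)
                # aspiration: a tabu move is allowed if it beats the best
                if tabu.get((i, j), 0) <= it or c < best_cost:
                    moves.append((c, i, j, cand))
            c, i, j, sol = min(moves)              # best admissible neighbor
            tabu[(i, j)] = it + tenure             # forbid reversing the swap
            if c < best_cost:
                best, best_cost = sol[:], c
        return best, best_cost

    # stand-in cost: travel length of a single route over random coordinates
    pts = [(random.random(), random.random()) for _ in range(12)]
    dist = lambda a, b: ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5
    route_cost = lambda s: sum(dist(pts[s[k]], pts[s[k+1]]) for k in range(len(s)-1))
    best, c = tabu_search(route_cost, 12)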
NASA Astrophysics Data System (ADS)
Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi
To investigate the reliability of vacuum-insulated equipment, a study was carried out to clarify breakdown probability distributions in a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage that permits a zero breakdown probability. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10-14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6-8.5, and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage was increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupters' voltage sharing is taken into account.
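Fitting a three-parameter Weibull distribution of this kind is routine with scipy; the fitted location parameter is then read as the voltage below which the model assigns zero breakdown probability. The data and parameter values below are synthetic and illustrative, not from the study.

    import numpy as np
    from scipy import stats

    # Three-parameter Weibull fit of breakdown voltages (synthetic data;
    # shape/location/scale values are illustrative, not from the study).
    breakdown_kv = stats.weibull_min.rvs(c=12, loc=40.0, scale=25.0,
                                         size=80, random_state=3)
    shape, loc, scale = stats.weibull_min.fit(breakdown_kv)
    # `loc` is the location parameter: zero breakdown probability below it
    print(f"shape k = {shape:.1f}, zero-probability voltage ~ {loc:.1f} kV")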
An effective approach for gap-filling continental scale remotely sensed time-series
Weiss, Daniel J.; Atkinson, Peter M.; Bhatt, Samir; Mappin, Bonnie; Hay, Simon I.; Gething, Peter W.
2014-01-01
The archives of imagery and modeled data products derived from remote sensing programs with high temporal resolution provide powerful resources for characterizing inter- and intra-annual environmental dynamics. The impressive depth of available time-series from such missions (e.g., MODIS and AVHRR) affords new opportunities for improving data usability by leveraging spatial and temporal information inherent to longitudinal geospatial datasets. In this research we develop an approach for filling gaps in imagery time-series that result primarily from cloud cover, which is particularly problematic in forested equatorial regions. Our approach consists of two, complementary gap-filling algorithms and a variety of run-time options that allow users to balance competing demands of model accuracy and processing time. We applied the gap-filling methodology to MODIS Enhanced Vegetation Index (EVI) and daytime and nighttime Land Surface Temperature (LST) datasets for the African continent for 2000–2012, with a 1 km spatial resolution, and an 8-day temporal resolution. We validated the method by introducing and filling artificial gaps, and then comparing the original data with model predictions. Our approach achieved R2 values above 0.87 even for pixels within 500 km wide introduced gaps. Furthermore, the structure of our approach allows estimation of the error associated with each gap-filled pixel based on the distance to the non-gap pixels used to model its fill value, thus providing a mechanism for including uncertainty associated with the gap-filling process in downstream applications of the resulting datasets. PMID:25642100
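The simplest temporal ingredient of such a gap-filler is per-pixel interpolation along the time axis, as in the sketch below (my minimal illustration; the published method also searches spatially across pixels and reports per-pixel uncertainty from the distance to the donor data).

    import numpy as np

    # Minimal temporal gap-filler (sketch of the simplest ingredient only;
    # the published approach adds spatial search and error estimates).
    def fill_time_gaps(stack):
        """stack: (time, y, x) array with NaN gaps; fills along the time axis."""
        t = np.arange(stack.shape[0])
        out = stack.copy()
        for iy in range(stack.shape[1]):
            for ix in range(stack.shape[2]):
                series = out[:, iy, ix]
                good = ~np.isnan(series)
                if 0 < good.sum() < len(series):
                    out[:, iy, ix] = np.interp(t, t[good], series[good])
        return out

    evi = np.random.rand(46, 50, 50)           # e.g. one year of 8-day composites
    evi[10:14, 20:30, 20:30] = np.nan          # synthetic cloud gap
    filled = fill_time_gaps(evi)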
Addressing the knowledge gap: sexual violence and harassment in the UK Armed Forces.
Godier, Lauren R; Fossey, M
2017-09-06
Despite media interest in alleged sexual violence and harassment in the UK military, there remains a paucity of UK-based peer-reviewed research in this area. Ministry of Defence and service-specific reports support the suggestion that UK service personnel may be at risk of experiencing sexual harassment. These reports however highlight a reluctance by service personnel to report sexual harassment through official channels. In this article, we discuss the paucity of UK-based research pertaining to the prevalence and impact of sexual harassment in the military, explore potential reasons for this gap in knowledge and outline future directions and priorities for academic research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Adaptive Grid Based Localized Learning for Multidimensional Data
ERIC Educational Resources Information Center
Saini, Sheetal
2012-01-01
Rapid advances in data-rich domains of science, technology, and business have amplified the computational challenges of "Big Data" synthesis necessary to slow the widening gap between the rate at which the data is being collected and analyzed for knowledge. This has led to a renewed need for efficient and accurate algorithms, framework,…
Similarity and Difference in the Behavior of Gases: An Interactive Demonstration
ERIC Educational Resources Information Center
Ashkenazi, Guy
2008-01-01
Previous research has documented a gap in students' understanding of gas behavior between the algorithmic-macroscopic level and the conceptual-microscopic level. A coherent understanding of both levels is needed to appreciate the difference in properties of different gases, which is not manifest in the ideal gas law. A demonstration that…
Peripheral nerve injuries secondary to missiles.
Katzman, B M; Bozentka, D J
1999-05-01
Peripheral nerve injuries secondary to missiles can present some of the most challenging problems faced by hand surgeons. This article reviews the pertinent neural anatomy, injury classifications, and repair techniques. Options in the management of nerve gaps are presented including the use of vascularized nerve grafts. The results are discussed and a treatment algorithm is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelikowsky, James R.
2013-04-01
Work in nanoscience has increased substantially in recent years owing to its potential technological applications and to fundamental scientific interest. A driving force for this activity is to capitalize on new phenomena that occur at the nanoscale. For example, the physical confinement of electronic states, i.e., quantum confinement, can dramatically alter the electronic and optical properties of matter. A prime example of this occurs in the optical properties of nanoscale crystals such as those composed of elemental silicon. Silicon in the bulk state is optically inactive due to the small size of the optical gap, which can only be accessed by indirect transitions. However, at the nanoscale, this material becomes optically active. The size of the optical gap is increased by confinement and the conservation of crystal momentum ceases to hold, resulting in the viability of indirect transitions. Our work associated with this grant has focused on developing new scalable algorithms for describing the electronic and optical properties of matter at the nanoscale, such as nanostructures of silicon and related semiconductors.
Quantum plug n’ play: modular computation in the quantum regime
NASA Astrophysics Data System (ADS)
Thompson, Jayne; Modi, Kavan; Vedral, Vlatko; Gu, Mile
2018-01-01
Classical computation is modular. It exploits plug n’ play architectures which allow us to use pre-fabricated circuits without knowing their construction. This bestows advantages such as allowing parts of the computational process to be outsourced, and permitting individual circuit components to be exchanged and upgraded. Here, we introduce a formal framework to describe modularity in the quantum regime. We demonstrate a ‘no-go’ theorem, stipulating that it is not always possible to make use of quantum circuits without knowing their construction. This has significant consequences for quantum algorithms, forcing the circuit implementation of certain quantum algorithms to be rebuilt almost entirely from scratch after incremental changes in the problem—such as changing the number being factored in Shor’s algorithm. We develop a workaround capable of restoring modularity, and apply it to design a modular version of Shor’s algorithm that exhibits increased versatility and reduced complexity. In doing so we pave the way to a realistic framework whereby ‘quantum chips’ and remote servers can be invoked (or assembled) to implement various parts of a more complex quantum computation.
Particle merging algorithm for PIC codes
NASA Astrophysics Data System (ADS)
Vranic, M.; Grismayer, T.; Martins, J. L.; Fonseca, R. A.; Silva, L. O.
2015-06-01
Particle-in-cell merging algorithms aim to resample dynamically the six-dimensional phase space occupied by particles without substantially distorting the physical description of the system. Whereas various approaches have been proposed in previous works, none of them seemed able to fully conserve charge, momentum, energy and their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy is ensured by the resolution of a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing, in addition to the conservation of local quantities, good reproducibility of the particle distributions. In cases where the number of particles would otherwise increase exponentially in the simulation box, dynamical merging permits a considerable speedup and significant memory savings that would otherwise make the simulations impossible to perform.
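The conservation system has a closed form in the simplest setting. The sketch below is a 1-D, non-relativistic illustration only (the paper treats relativistic 3-D particles): N particles collapse into two macro-particles of half the total weight whose velocities are fixed by momentum and energy conservation, with a real solution guaranteed by the Cauchy-Schwarz inequality.

    import numpy as np

    # 1-D non-relativistic illustration of the merging constraint system
    # (the published algorithm handles the relativistic 3-D case): merge
    # N equal-mass particles into two macro-particles of weight W/2 each.
    def merge_to_two(weights, velocities, m=1.0):
        W = weights.sum()                           # total weight
        P = m * np.sum(weights * velocities)        # total momentum
        E = 0.5 * m * np.sum(weights * velocities**2)  # total kinetic energy
        w = W / 2.0
        v_mean = P / (m * W)
        spread = np.sqrt(max(2*E/(m*W) - v_mean**2, 0.0))  # >= 0 by Cauchy-Schwarz
        return (w, v_mean + spread), (w, v_mean - spread)

    rng = np.random.default_rng(4)
    weights = rng.uniform(0.5, 1.5, size=50)
    velocities = rng.normal(1.0, 0.3, size=50)
    p1, p2 = merge_to_two(weights, velocities)
    # together the two macro-particles reproduce W, P and E exactly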
Clustering by reordering of similarity and Laplacian matrices: Application to galaxy clusters
NASA Astrophysics Data System (ADS)
Mahmoud, E.; Shoukry, A.; Takey, A.
2018-04-01
Similarity metrics, kernels and similarity-based algorithms have gained much attention due to their increasing applications in information retrieval, data mining, pattern recognition and machine learning. Similarity graphs are often adopted as the underlying representation of similarity matrices and are at the origin of known clustering algorithms such as spectral clustering. Similarity matrices offer the advantage of working in object-object (two-dimensional) space, where visualization of cluster similarities is available, instead of object-features (multi-dimensional) space. In this paper, sparse ɛ-similarity graphs are constructed and decomposed into strong components using appropriate methods such as the Dulmage-Mendelsohn permutation (DMperm) and/or Reverse Cuthill-McKee (RCM) algorithms. The obtained strong components correspond to groups (clusters) in the input (feature) space. The parameter ɛi is estimated locally, at each data point i, from a corresponding narrow range of the number of nearest neighbors. Although more advanced clustering techniques are available, our method has the advantages of simplicity, better complexity and direct visualization of cluster similarities in a two-dimensional space. Also, no prior information about the number of clusters is needed. We conducted our experiments on two- and three-dimensional, small and large synthetic datasets, as well as on a real astronomical dataset. The results are verified graphically and analyzed using gap statistics over a range of neighbors to verify the robustness of the algorithm and the stability of the results. Combining the proposed algorithm with gap statistics provides a promising tool for solving clustering problems. An astronomical application is conducted to confirm the existence of 45 galaxy clusters around the X-ray positions of galaxy clusters in the redshift range [0.1..0.8]. We re-estimate the photometric redshifts of the identified galaxy clusters and obtain acceptable values compared to published spectroscopic redshifts, with a 0.029 standard deviation of their differences.
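The pipeline can be condensed with scipy's sparse-graph routines, as below. This sketch uses a single global epsilon rather than the paper's locally estimated ɛi, and reads clusters off the connected components after an RCM reordering exposes the block structure.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import reverse_cuthill_mckee, connected_components

    # Condensed sketch of the pipeline (global epsilon instead of the
    # paper's local estimate): epsilon-similarity graph -> RCM reordering
    # to expose blocks -> clusters as connected components.
    def eps_graph_clusters(X, eps):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        adj = csr_matrix(((d < eps) & (d > 0)).astype(np.int8))
        order = reverse_cuthill_mckee(adj, symmetric_mode=True)  # block-diagonalize
        n_clusters, labels = connected_components(adj, directed=False)
        return order, n_clusters, labels

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
    order, k, labels = eps_graph_clusters(X, eps=0.8)
    print(f"{k} clusters found")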
Improved gap size estimation for scaffolding algorithms.
Sahlin, Kristoffer; Street, Nathaniel; Lundeberg, Joakim; Arvestad, Lars
2012-09-01
One of the important steps of genome assembly is scaffolding, in which contigs are linked using information from read-pairs. Scaffolding provides estimates of the order, relative orientation and distance between contigs. We have found that contig distance estimates are generally strongly biased and based on false assumptions. Since erroneous distance estimates can mislead subsequent analysis, it is important to provide unbiased estimation of contig distance. In this article, we show that state-of-the-art programs for scaffolding are using an incorrect model of gap size estimation. We discuss why current maximum likelihood estimators are biased and describe the different cases of bias we are facing. Furthermore, we provide a model for the distribution of reads that span a gap and derive the maximum likelihood equation for the gap length. We motivate why this estimate is sound and show empirically that it outperforms gap estimators in popular scaffolding programs. Our results have consequences both for scaffolding software, structural variation detection and for library insert-size estimation as is commonly performed by read aligners. A reference implementation is provided at https://github.com/SciLifeLab/gapest. Supplementary data are available at Bioinformatics online.
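The bias the authors describe can be reproduced with a toy simulation: only read pairs whose insert is long enough to span the gap are observed, so the spanning condition truncates the insert-size distribution and a naive estimator (mean spanning insert minus contig overhang) overshoots. All numbers below are made up for illustration; the paper derives the corrected maximum likelihood estimator.

    import numpy as np

    # Toy demonstration of the truncation bias (illustrative numbers only;
    # not the paper's corrected estimator).
    rng = np.random.default_rng(6)
    mu, sigma = 400.0, 50.0            # library insert-size distribution
    true_gap, overhang = 300.0, 60.0   # gap length and flanking contig sequence
    inserts = rng.normal(mu, sigma, 100_000)
    spanning = inserts[inserts >= true_gap + overhang]   # observable pairs only
    naive_gap = spanning.mean() - overhang
    print(f"true gap {true_gap:.0f}, naive estimate {naive_gap:.0f}")  # biased high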
On mass concentrations and magnitude gaps of galaxy systems in the CS82 survey
NASA Astrophysics Data System (ADS)
Vitorelli, André Z.; Cypriano, Eduardo S.; Makler, Martín; Pereira, Maria E. S.; Erben, Thomas; Moraes, Bruno
2018-02-01
Galaxy systems with large magnitude gaps - defined as the difference in magnitude between the central galaxy and the brightest satellite in the central region, such as fossil groups - are claimed to have earlier formation times. In this study, we measure the mass concentration, as an indicator of the formation epoch, of ensembles of galaxy systems divided by redshift and magnitude gaps in the r band. We use cross-correlation weak-lensing measurements with NFW parametric mass profiles to measure masses and concentrations of these ensembles from a catalogue of systems built from the SDSS Coadd by the redMaPPer algorithm. The lensing shear data come from the CFHT Stripe 82 (CS82) survey, and consists of i-band images of the SDSS Stripe 82 region. We find that the stack made up of systems with larger magnitude gaps has a high probability of being more concentrated, in the lowest redshift slice (0.2 < z < 0.4), both when dividing in quartiles (P = 0.98) and tertiles (P = 0.85). These results lend credibility to the claim that systems with large magnitude gaps tend to have been formed early.
Optimizations for optical velocity measurements in narrow gaps
NASA Astrophysics Data System (ADS)
Schlüßler, Raimund; Blechschmidt, Christian; Czarske, Jürgen; Fischer, Andreas
2013-09-01
Measuring the flow velocity in small gaps or near a surface with a nonintrusive optical measurement technique is a challenging measurement task, as disturbing light reflections from the surface appear. However, these measurements are important, e.g., in order to understand and to design the leakage flow in the tip gap between the rotor blade end face and the housing of a turbomachine. Hence, methods to reduce the interfering light power and to correct measurement errors caused by it need to be developed and verified. Different alternatives of minimizing the interfering light power for optical flow measurements in small gaps are presented. By optimizing the beam shape of the applied illumination beam using a numerical diffraction simulation, the interfering light power is reduced by up to a factor of 100. In combination with a decrease of the reflection coefficient of the rotor blade surface, an additional reduction of the interfering light power below the used scattered light power is possible. Furthermore, a correction algorithm to decrease the measurement uncertainty of disturbed measurements is derived. These improvements enable optical three-dimensional three-component flow velocity measurements in submillimeter gaps or near a surface.
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
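The momentum-plus-adaptive-restart ingredient can be sketched as FISTA with the O'Donoghue-Candès restart test on a plain L1-regularized least-squares problem. This is a generic single-Lipschitz-constant sketch; BARISTA's contribution is precisely to replace that scalar constant with majorizing matrices matched to the shift-variant SENSE model.

    import numpy as np

    # FISTA with adaptive restart for min_x 0.5*||Ax-b||^2 + lam*||x||_1
    # (generic sketch; BARISTA swaps the scalar Lipschitz constant below
    # for matrix majorizers in the range of the regularizer matrix).
    def fista_restart(A, b, lam, iters=300):
        L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
        x = z = np.zeros(A.shape[1]); t = 1.0
        for _ in range(iters):
            grad = A.T @ (A @ z - b)
            x_new = z - grad / L
            x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / L, 0)
            if np.dot(z - x_new, x_new - x) > 0:   # momentum opposes progress:
                t = 1.0                            # restart (O'Donoghue-Candès test)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(7)
    A = rng.normal(size=(80, 200))
    b = A @ (rng.random(200) * (rng.random(200) < 0.05)) + 0.01 * rng.normal(size=80)
    x_hat = fista_restart(A, b, lam=0.05)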
Karami, Ebrahim; Shehata, Mohamed S; Smith, Andrew
2018-05-04
Medical research suggests that the anterior-posterior (AP)-diameter of the inferior vena cava (IVC) and its associated temporal variation as imaged by bedside ultrasound is useful in guiding fluid resuscitation of the critically-ill patient. Unfortunately, indistinct edges and gaps in vessel walls are frequently present which impede accurate estimation of the IVC AP-diameter for both human operators and segmentation algorithms. The majority of research involving use of the IVC to guide fluid resuscitation involves manual measurement of the maximum and minimum AP-diameter as it varies over time. This effort proposes using a time-varying circle fitted inside the typically ellipsoid IVC as an efficient, consistent and novel approach to tracking and approximating the AP-diameter even in the context of poor image quality. In this active-circle algorithm, a novel evolution functional is proposed and shown to be a useful tool for ultrasound image processing. The proposed algorithm is compared with an expert manual measurement, and state-of-the-art relevant algorithms. It is shown that the algorithm outperforms other techniques and performs very close to manual measurement. Copyright © 2018 Elsevier Ltd. All rights reserved.
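The geometric core of such an approach, fitting a circle inside the roughly elliptical IVC cross-section and reading the AP-diameter as 2r, can be illustrated with a least-squares (Kasa) circle fit. This is only the static fitting step on synthetic edge points; the paper's active-circle additionally evolves the fit over time with its proposed functional.

    import numpy as np

    # Least-squares (Kasa) circle fit to candidate vessel-edge points
    # (static fitting step only; synthetic data, not the paper's method).
    def fit_circle(pts):
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
        cx, cy, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
        r = np.sqrt(c + cx**2 + cy**2)
        return (cx, cy), r

    rng = np.random.default_rng(9)
    theta = np.linspace(0, 2 * np.pi, 60)
    edge = np.column_stack([10 + 4*np.cos(theta), 5 + 4*np.sin(theta)])
    edge += rng.normal(scale=0.2, size=edge.shape)     # noisy "vessel wall"
    center, radius = fit_circle(edge)
    print(f"AP-diameter ~ {2*radius:.2f} (true 8.00)")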
NASA Technical Reports Server (NTRS)
Thaller, L. H.
1981-01-01
The use of interactive computer graphics is suggested as an aid in battery system development. Mathematical representations of simplistic but fully representative functions of many electrochemical concepts of current practical interest will permit battery level charge and discharge phenomena to be analyzed in a qualitative manner prior to the assembly and testing of actual hardware. This technique is a useful addition to the variety of tools available to the battery system designer as he bridges the gap between interesting single cell life test data and reliable energy storage subsystems.
Method and apparatus for control of a magnetic structure
Challenger, Michael P.; Valla, Arthur S.
1996-06-18
A method and apparatus for independently adjusting the spacing between opposing magnet arrays in charged particle based light sources. Adjustment mechanisms between each of the magnet arrays and the supporting structure allow the gap between the two magnet arrays to be independently adjusted. In addition, spherical bearings in the linkages to the magnet arrays permit the transverse angular orientation of the magnet arrays to also be adjusted. The opposing magnet arrays can be supported above the ground by the structural support.
Needs, Effectiveness, and Gap Assessment for Key A-10C Missions: An Overview of Findings
2016-01-01
weapons, further decreasing capacity. The GPS-guided SDB I (GBU-39) and the multimode SDB II (GBU-53) begin to be highly useful in these circumstances...fairly close to friendly forces. It is specifically designed for use against moving targets; a Link 16 datalink permits target updates in flight, if not jammed, and a laser-guidance mode allows it to be guided to specific targets if a JTAC is available to provide laser designation. It also has
How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?
NASA Astrophysics Data System (ADS)
Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.
2013-12-01
The last decade has seen the widespread publication of crater detection algorithms (CDAs) with increasing detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDAs have 10 pixels in diameter (about 2 km in MOC-WA images) [2] or can go down to 16 pixels or 200 m in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is unveiling craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), followed by the extraction of texture features (Haar-like) and classification by Adaboost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is intensively tested on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained clearly depends on the dimension of the craters intended to be detected: the lower this limit is, the higher the false detection rates are. A detailed evaluation is performed, with results broken down by crater dimension and image or surface type, showing that automated detection on large HiRISE datasets with 25 cm/pixel resolution can be done successfully (high correct and low false positive detection rates) down to a crater dimension of about 8-10 m, or 32-40 pixels. [1] Martins L, Pina P, Marques JS, Silveira M, 2009, Crater detection by a boosting approach. IEEE Geoscience and Remote Sensing Letters 6: 127-131. [2] Salamuniccar G, Loncaric S, Pina P, Bandeira L, Saraiva J, 2011, MA130301GT catalogue of Martian impact craters and advanced evaluation of crater detection algorithms using diverse topography and image datasets. Planetary and Space Science 59: 111-131. [3] Bandeira L, Ding W, Stepinski T, 2012, Detection of sub-kilometer craters in high resolution planetary images using shape and texture features. Advances in Space Research 49: 64-74.
Evolution and advanced technology. [of Flight Telerobotic Servicer
NASA Technical Reports Server (NTRS)
Ollendorf, Stanford; Pennington, Jack E.; Hansen, Bert, III
1990-01-01
The NASREM architecture with its standard interfaces permits development and evolution of the Flight Telerobotic Servicer to greater autonomy. Technologies in control strategies for an arm with seven DOF, including a safety system containing skin sensors for obstacle avoidance, are being developed. Planning and robotic execution software includes symbolic task planning, world model data bases, and path planning algorithms. Research over the last five years has led to the development of laser scanning and ranging systems, which use coherent semiconductor laser diodes for short range sensing. The possibility of using a robot to autonomously assemble space structures is being investigated. A control framework compatible with NASREM is being developed that allows direct global control of the manipulator. Researchers are developing systems that permit an operator to quickly reconfigure the telerobot to do new tasks safely.
Study of mathematical modeling of communication systems transponders and receivers
NASA Technical Reports Server (NTRS)
Walsh, J. R.
1972-01-01
The modeling of communication receivers is described at both the circuit detail level and at the block level. The largest effort was devoted to developing new models at the block modeling level. The available effort did not permit full development of all of the block modeling concepts envisioned, but idealized blocks were developed for signal sources, a variety of filters, limiters, amplifiers, mixers, and demodulators. These blocks were organized into an operational computer simulation of communications receiver circuits identified as the frequency and time circuit analysis technique (FATCAT). The simulation operates in both the time and frequency domains, and permits output plots or listings of either frequency spectra or time waveforms from any model block. Transfer between domains is handled with a fast Fourier transform algorithm.
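FATCAT's transfer between the time and frequency domains rests on the fast Fourier transform. A minimal sketch of that kind of block-level processing, with a placeholder signal and an idealized low-pass filter block (not the FATCAT code itself), is:

```python
import numpy as np

fs = 1e6                                    # sample rate, Hz (assumed)
t = np.arange(1024) / fs
x = np.sin(2*np.pi*50e3*t) + 0.5*np.sin(2*np.pi*200e3*t)

X = np.fft.rfft(x)                          # time -> frequency domain
f = np.fft.rfftfreq(x.size, 1/fs)
X[f > 100e3] = 0.0                          # idealized low-pass block
y = np.fft.irfft(X, n=x.size)               # frequency -> time domain
print(np.abs(y - np.sin(2*np.pi*50e3*t)).max())  # small residual: 200 kHz tone removed
```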
Installation of automatic control at experimental breeder reactor II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, H.A.; Booty, W.F.; Chick, D.R.
1985-08-01
The Experimental Breeder Reactor II (EBR-II) has been modified to permit automatic control capability. Necessary mechanical and electrical changes were made to a regular control rod position: the motor, gears, and controller were replaced. A digital computer system was installed that has the programming capability for varied power profiles. The modifications permit transient testing at EBR-II. Experiments were run that increased power linearly at up to 4 MW/s (16% per second of the initial power of 25 MW(thermal)), held power constant, and decreased power at a rate no slower than the increase rate. Thus the performance of the automatic control algorithm, the mechanical and electrical control equipment, and the qualifications of the driver fuel for future power change experiments were all demonstrated.
Sky-wave backscatter - A means for observing our environment at great distances.
NASA Technical Reports Server (NTRS)
Croft, T. A.
1972-01-01
During the last five years, much progress has been made in the understanding of sky-wave backscatter. An explanation of the various interacting phenomena is presented, as is a review of the current state of knowledge reflecting recent advances in observational methods and analytic techniques. New narrow-beam antennas, coupled with signal modulations that permit fine resolution in time delay, are beginning to yield information concerning the character of the scatterers, which now can be separately discerned. These narrow beams also permit study of polarization fading from small regions, and this shows promise as a means for learning the distant sea state. Doppler shifts of a fraction of a hertz on signals of tens of megahertz are separable, permitting isolation of sea returns from ground returns by virtue of the Doppler effect resulting from sea-wave speed; this also suggests a potential sea-monitoring principle. Despite these advances, there is little practical application of sky-wave backscatter as a means of environmental monitoring. This lack is attributed to the large remaining gaps in our understanding of the echoes and our inability to interpret the forms of data that can be acquired with equipment of reasonable cost.
Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.
2017-01-01
Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
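The island-model parallelization mentioned above evolves several subpopulations independently and periodically migrates the best individuals between them. A toy sketch of that pattern, with a generic quadratic objective standing in for the simulation-driven fitness and made-up population sizes and rates:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                       # toy objective standing in for the
    return -np.sum(x**2, axis=1)      # simulation-driven evaluation

n_islands, pop, dim = 4, 20, 5
islands = [rng.normal(size=(pop, dim)) for _ in range(n_islands)]

for gen in range(100):
    for i, P in enumerate(islands):
        f = fitness(P)
        parents = P[np.argsort(f)[-pop//2:]]         # truncation selection
        kids = parents + rng.normal(scale=0.1, size=parents.shape)
        islands[i] = np.vstack([parents, kids])      # mutation-only GA step
    if gen % 10 == 0:                                # periodic migration:
        best = [P[np.argmax(fitness(P))].copy() for P in islands]
        for i, P in enumerate(islands):              # ring topology
            P[np.argmin(fitness(P))] = best[(i - 1) % n_islands]

print(max(fitness(P).max() for P in islands))
```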
Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.
1994-01-01
This research addresses issues in simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, routing algorithms, and other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage, when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms, and hybrid simulation to reduce simulation time is introduced.
NASA Astrophysics Data System (ADS)
Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles
2017-10-01
RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC, and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index passing rate (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA for homogeneous, heterogeneous, and anthropomorphic phantoms. The computational performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.
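The gamma index used for the anthropomorphic-phantom comparison combines a dose-difference tolerance with a distance-to-agreement (DTA). A minimal 1-D sketch of the metric, using the 3%/3 mm criteria from the text and synthetic placeholder profiles:

```python
import numpy as np

def gamma_1d(x, ref, meas, dose_tol=0.03, dta_mm=3.0):
    """Per-point 1-D gamma index: minimum over reference positions of
    the combined dose-difference / distance-to-agreement metric."""
    g = np.empty_like(meas)
    norm = ref.max()                               # global normalization
    for i, xi in enumerate(x):
        dd = (meas[i] - ref) / (dose_tol * norm)   # dose axis
        dx = (xi - x) / dta_mm                     # spatial axis
        g[i] = np.sqrt(dd**2 + dx**2).min()
    return g

x = np.linspace(0, 100, 201)                       # positions in mm
ref = np.exp(-(x - 50)**2 / 200)                   # synthetic profile
meas = np.exp(-(x - 51)**2 / 200) * 1.02           # shifted, rescaled copy
g = gamma_1d(x, ref, meas)
print(f"pass rate: {100 * (g <= 1).mean():.1f}%")  # gamma <= 1 passes
```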
OSLay: optimal syntenic layout of unfinished assemblies.
Richter, Daniel C; Schuster, Stephan C; Huson, Daniel H
2007-07-01
The whole genome shotgun approach to genome sequencing results in a collection of contigs that must be ordered and oriented to facilitate efficient gap closure. We present a new tool, OSLay, that uses synteny between matching sequences in a target assembly and a reference assembly to lay out the contigs (or scaffolds) in the target assembly. The underlying algorithm is based on maximum weight matching. The tool provides an interactive visualization of the computed layout, and the result can be imported into the assembly editing tool Consed to support the design of primer pairs for gap closure. To enhance efficiency in the gap closure phase of a genome project it is crucial to know which contigs are adjacent in the target genome. Related genome sequences can be used to lay out contigs in an assembly. OSLay is freely available from: http://www-ab.informatik.uni-tuebingen.de/software/oslay.
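The maximum weight matching at the core of the layout step can be illustrated with networkx; the contig-end graph and synteny weights below are made up for illustration, not OSLay's actual scoring:

```python
import networkx as nx

# Hypothetical synteny support for placing contig ends next to each other;
# higher weight = stronger evidence of adjacency in the reference.
edges = [("c1.right", "c2.left", 9.0),
         ("c2.right", "c3.left", 7.5),
         ("c1.right", "c3.left", 2.0)]

G = nx.Graph()
G.add_weighted_edges_from(edges)
matching = nx.max_weight_matching(G)   # set of matched end pairs
print(sorted(tuple(sorted(e)) for e in matching))
# Each matched pair proposes two contigs as neighbours in the layout.
```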
Zero-temperature quantum annealing bottlenecks in the spin-glass phase.
Knysh, Sergey
2016-08-05
A promising approach to solving hard binary optimization problems is quantum adiabatic annealing in a transverse magnetic field. An instantaneous ground state (initially a symmetric superposition of all possible assignments of N qubits) is closely tracked as it becomes more and more localized near the global minimum of the classical energy. Regions where the energy gap to excited states is small (for instance at the phase transition) are the algorithm's bottlenecks. Here I show how, for large problems, the complexity becomes dominated by O(log N) bottlenecks inside the spin-glass phase, where the gap scales as a stretched exponential. For smaller N, only the gap at the critical point is relevant, where it scales polynomially, as long as the phase transition is second order. This phenomenon is demonstrated rigorously for the two-pattern Gaussian Hopfield model. Qualitative comparison with the Sherrington-Kirkpatrick model leads to similar conclusions.
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
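The selection rule behind likelihood-based gap filling, preferring candidate reactions with high genomic support, can be caricatured as picking the candidate with minimal -log(likelihood) per gap. This toy sketch uses hypothetical reactions and likelihoods; the real implementation solves a network-wide optimization within KBase:

```python
import math

# Hypothetical candidates able to fill each missing reaction ("gap"),
# with likelihoods estimated from sequence homology (placeholders).
candidates = {
    "gap_A": {"rxn1": 0.80, "rxn2": 0.10},
    "gap_B": {"rxn3": 0.05, "rxn4": 0.60},
}

def fill(cands):
    """Pick, per gap, the candidate with minimal -log(likelihood),
    i.e. the most genomically consistent choice."""
    return {g: min(opts, key=lambda r: -math.log(opts[r]))
            for g, opts in cands.items()}

print(fill(candidates))   # {'gap_A': 'rxn1', 'gap_B': 'rxn4'}
```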
Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA
NASA Astrophysics Data System (ADS)
Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu
2015-04-01
Laser brazing (LB) is widely used in the automotive industry due to its advantages of high speed, small heat-affected zone, high weld seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a crimping butt joint of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted with a Taguchi L25 orthogonal array through the statistical design method. The input parameters considered are welding speed (WS), wire feed rate (WF), and gap (GAP), each at 5 levels. The responses are the efficient connection lengths of the left and right sides and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are fed into the BPNN to establish the relationship between the input and output variables. The predictions of the BPNN are passed to the GA, which optimizes the process parameters subject to the objectives. The effects of WS, WF, and GAP on the summed bead-geometry values are then discussed. Finally, confirmation experiments are carried out, demonstrating that the optimal values are effective and reliable. On the whole, the proposed hybrid method, BPNN-GA, can be used to guide actual work and improve the efficiency and stability of the LB process.
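The BPNN-GA loop, training a neural surrogate on the designed experiments and letting a GA search it, can be sketched as follows. The data, parameter bounds, and objective are placeholders, not the paper's L25 measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Placeholder stand-in for the L25 experimental data:
# columns = welding speed, wire feed rate, gap; target = bead-quality score.
X = rng.uniform([1.0, 1.0, 0.0], [5.0, 5.0, 0.4], size=(25, 3))
y = -(X[:, 0] - 3)**2 - (X[:, 1] - 2)**2 - 5*X[:, 2] + rng.normal(0, 0.05, 25)

surrogate = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                         random_state=0).fit(X, y)

# Simple GA over the parameter box, maximizing the surrogate's prediction.
pop = rng.uniform([1.0, 1.0, 0.0], [5.0, 5.0, 0.4], size=(40, 3))
for _ in range(60):
    fit = surrogate.predict(pop)
    parents = pop[np.argsort(fit)[-20:]]                 # keep the best half
    kids = parents + rng.normal(scale=[0.1, 0.1, 0.01], size=parents.shape)
    kids = np.clip(kids, [1.0, 1.0, 0.0], [5.0, 5.0, 0.4])
    pop = np.vstack([parents, kids])

best = pop[np.argmax(surrogate.predict(pop))]
print("suggested (speed, feed, gap):", best.round(3))
```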
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchal, Rémi; Carbonnière, Philippe; Pouchan, Claude
2015-01-22
The study of atomic clusters has become an increasingly active area of research in recent years because of the fundamental interest in studying a completely new area that can bridge the gap between atomic and solid state physics. Due to their specific properties, such compounds are of great interest in the field of nanotechnology [1,2]. Here we present our GSAM algorithm, based on a DFT exploration of the PES, to find the low-lying isomers of such compounds. This algorithm includes the generation of an initial set of structures from which the most relevant are selected. Moreover, an optimization process, called raking optimization, able to discard step by step all the non-physically-reasonable configurations, has been implemented to reduce the computational cost of this algorithm. Structural properties of Ga_nAs_m clusters will be presented as an illustration of the method.
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine-printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected-component-based methods ([1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
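The second stage, evolving a level set over the estimated probability map, can be sketched with scikit-image's morphological variant of the Chan-Vese level set; the probability map below is synthetic, and the authors' actual formulation differs:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic "probability map": two blurry horizontal text lines.
y = np.arange(100)[:, None]
prob = np.exp(-(y - 30)**2 / 40) + np.exp(-(y - 70)**2 / 40)
prob = np.repeat(prob, 200, axis=1)

# Evolve an initial level set for 50 iterations; the result is a
# binary mask separating text-line regions from the background.
mask = morphological_chan_vese(prob, 50)
print(mask.shape, mask.dtype, np.unique(mask))
```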
NASA Technical Reports Server (NTRS)
Stephens, J. B.
1976-01-01
The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.
Symmetric quantum fully homomorphic encryption with perfect security
NASA Astrophysics Data System (ADS)
Liang, Min
2013-12-01
Suppose some data have been encrypted; can you compute with the data without decrypting them? This problem has been studied as homomorphic encryption and blind computing. We consider this problem in the context of quantum information processing, and present definitions of quantum homomorphic encryption (QHE) and quantum fully homomorphic encryption (QFHE). Then, based on the quantum one-time pad (QOTP), we construct a symmetric QFHE scheme, where the evaluate algorithm depends on the secret key. This scheme permits any unitary transformation on any n-qubit state that has been encrypted. Compared with classical homomorphic encryption, the QFHE scheme has perfect security. Finally, we also construct a QOTP-based symmetric QHE scheme, where the evaluate algorithm is independent of the secret key.
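The quantum one-time pad underlying the scheme masks each qubit with a random Pauli operator X^a Z^b. A single-qubit state-vector sketch in numpy (an illustration of the pad only, not of the homomorphic evaluation):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

rng = np.random.default_rng(3)
a, b = rng.integers(0, 2, size=2)               # secret key bits

psi = np.array([0.6, 0.8], dtype=complex)       # plaintext qubit state
pad = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
cipher = pad @ psi                              # QOTP encryption

# Without (a, b), `cipher` is maximally mixed when averaged over keys.
recovered = np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ cipher
print(np.allclose(recovered, psi))              # True: Z^b X^a X^a Z^b = I
```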
Automatic mission planning algorithms for aerial collection of imaging-specific tasks
NASA Astrophysics Data System (ADS)
Sponagle, Paul; Salvaggio, Carl
2017-05-01
The rapid advancement and availability of small unmanned aircraft systems (sUAS) has led to many novel exploitation tasks that utilize this unique aerial imagery data. Collection of this unique data requires novel flight planning to accomplish the task at hand. This work describes novel flight planning to better support structure-from-motion missions by minimizing occlusions, autonomous and periodic overflight of reflectance calibration panels to permit more efficient and accurate data collection under varying illumination conditions, and the collection of imagery data to study optical properties such as the bidirectional reflectance distribution function without disturbing the target in sensitive or remote areas of interest. These novel mission planning algorithms will provide scientists with additional tools to meet their future data collection needs.
DC servomechanism parameter identification: a Closed Loop Input Error approach.
Garrido, Ruben; Miranda, Roger
2012-01-01
This paper presents a Closed Loop Input Error (CLIE) approach for on-line parametric estimation of a continuous-time model of a DC servomechanism functioning in closed loop. A standard Proportional Derivative (PD) position controller stabilizes the loop without requiring knowledge of the servomechanism parameters. The analysis of the identification algorithm takes into account the control law employed for closing the loop. The model contains four parameters that depend on the servo inertia, viscous and Coulomb friction, and a constant disturbance. Lyapunov stability theory permits assessing boundedness of the signals associated with the identification algorithm. Experiments on a laboratory prototype allow evaluation of the performance of the approach. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Spacecraft alignment estimation. [for onboard sensors
NASA Technical Reports Server (NTRS)
Shuster, Malcolm D.; Bierman, Gerald J.
1988-01-01
A numerically well-behaved factorized methodology is developed for estimating spacecraft sensor alignments from prelaunch and inflight data without the need to compute the spacecraft attitude or angular velocity. Such a methodology permits the estimation of sensor alignments (or other biases) in a framework free of unknown dynamical variables. In actual mission implementation such an algorithm is usually better behaved than one that must compute sensor alignments simultaneously with the spacecraft attitude, for example by means of a Kalman filter. In particular, such a methodology is less sensitive to data dropouts of long duration, and the derived measurement used in the attitude-independent algorithm usually makes data checking and editing of outliers much simpler than would be the case in the filter.
Ruusuvuori, Pekka; Aijö, Tarmo; Chowdhury, Sharif; Garmendia-Torres, Cecilia; Selinummi, Jyrki; Birbaumer, Mirko; Dudley, Aimée M; Pelkmans, Lucas; Yli-Harja, Olli
2010-05-13
Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data. But despite the potential of using extensive comparisons between algorithms to provide useful information to guide method selection and thus more accurate results, relatively few studies have been performed. To better understand algorithm performance under different conditions, we have carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within the image set. We also used simulated microscope images in order to compare the methods and validate them against a ground truth reference result. Our study finds major differences in the performance of the different algorithms, in terms of both object counts and segmentation accuracies. These results suggest that the selection of detection algorithms for image-based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.
ERIC Educational Resources Information Center
Roehrig, Gillian; Garrow, Shauna
2007-01-01
Evidence of a gap in student understanding has been well documented in chemistry: the typical student holds an abundance of misconceptions. The current expectation is that educational reform will foster greater student achievement via inquiry teaching within classrooms. Using assessments involving both conceptual and algorithmic knowledge of gas…
Read-across remains a popular data gap filling technique within category and analogue approaches for regulatory purposes. Acceptance of read-across is an ongoing challenge with several efforts underway for identifying and addressing uncertainties. Here we demonstrate an algorithm...
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming
2018-05-01
Magnetic Barkhausen noise (MBN) is measured in low-carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal has been investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves. The gap between the two peaks (ΔG) of the fitted Gaussian curves shows a stronger linear relationship with the carbon content of the samples than other extracted parameters. The result has been validated against simulation by the Monte Carlo method. To ensure the sensitivity of the measurement, the advanced multi-objective optimization algorithm non-dominated sorting genetic algorithm III (NSGA-III) has been used to optimize the magnetic core of the sensor.
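The two-Gaussian fit and the peak-gap parameter ΔG can be sketched with scipy's curve_fit; the MBN envelope below is synthetic, standing in for a measured profile:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-(t - m1)**2 / (2 * s1**2))
            + a2 * np.exp(-(t - m2)**2 / (2 * s2**2)))

# Synthetic MBN envelope standing in for a measured profile.
t = np.linspace(0, 10, 500)
noisy = two_gaussians(t, 1.0, 3.5, 0.8, 0.7, 6.5, 1.0)
noisy += np.random.default_rng(4).normal(0, 0.02, t.size)

p0 = [1, 3, 1, 1, 7, 1]                      # rough initial guess
popt, _ = curve_fit(two_gaussians, t, noisy, p0=p0)
delta_g = abs(popt[4] - popt[1])             # gap between the two peaks
print(f"fitted peak gap ΔG = {delta_g:.3f}")
```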
Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John
2016-01-01
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
Fast linear feature detection using multiple directional non-maximum suppression.
Sun, C; Vallotton, P
2009-05-01
The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
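Directional non-maximum suppression keeps a pixel only if its response is a local maximum along a chosen direction. A minimal single-direction numpy sketch (the paper uses multiple directions plus the symmetry checking and gap linking omitted here):

```python
import numpy as np

def directional_nms(resp, axis):
    """Keep pixels that beat both neighbours along `axis` (0 = vertical,
    1 = horizontal); linear features survive suppression across them.
    Wrap-around at the borders is acceptable for this sketch."""
    fwd = np.roll(resp, -1, axis=axis)
    bwd = np.roll(resp, 1, axis=axis)
    return np.where((resp >= fwd) & (resp >= bwd), resp, 0.0)

rng = np.random.default_rng(5)
resp = rng.random((8, 8))
resp[:, 4] += 2.0                      # a bright vertical line
nms = directional_nms(resp, axis=1)    # suppress across the line
print((nms[:, 4] > 0).all())           # the line survives: True
```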
NASA Astrophysics Data System (ADS)
Goldsworthy, Brett
2017-08-01
Ship exhaust emissions need to be allocated accurately in both space and time in order to examine many of the associated impacts, including on air quality and health. Data on ship activity from the Automatic Identification System (AIS) allow ship exhaust emissions to be calculated with fine spatial and temporal resolution. However, there are spatial gaps in the coverage afforded by the coastal network of ground stations that are used to collect the AIS data. This paper focuses on the problem of allocating emissions to the coastal gap regions. Allocating emissions to these regions involves generating interpolated ship tracks that both span the gaps and avoid coming too close to land. In most cases, a simple shortest path or straight line interpolation produces tracks that do not overlap or come too close to land. Where the simple method does not produce acceptable results, vessel tracks are steered around land on shortest available paths using a combination of visibility graphs and Dijkstra's algorithm. A geographical cluster analysis is first used to identify the boundary regions of the data gaps. The properties of the data gaps are summarised in terms of the length, duration and speed of the spanning tracks. The interpolation methods are used to improve the spatial distribution of emissions. It is also shown that emissions in the gap regions can contribute substantially to the total ship exhaust emissions in close proximity to highly populated areas.
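Steering a gap-spanning track around land on a shortest available path reduces to Dijkstra's algorithm over a visibility graph. A toy networkx sketch, with made-up vertices and over-water distances:

```python
import networkx as nx

# Toy visibility graph: AIS gap endpoints A and B plus two headland
# vertices; edge weights are over-water distances in km (made up).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "headland1", 12.0),
    ("headland1", "headland2", 8.0),
    ("headland2", "B", 10.0),
    ("A", "B", 40.0),          # direct edge exists but is longer
])

track = nx.dijkstra_path(G, "A", "B", weight="weight")
print(track)                   # ['A', 'headland1', 'headland2', 'B']
```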
NASA Astrophysics Data System (ADS)
Liu, Laqun; Wang, Huihui; Guo, Fan; Zou, Wenkang; Liu, Dagang
2017-04-01
Based on the 3-dimensional Particle-In-Cell (PIC) code CHIPIC3D, with a new circuit boundary algorithm we developed, a conical magnetically insulated transmission line (MITL) with a 1.0-MV linear transformer driver (LTD) is explored numerically. The values of the LTD switch jitter time are critical parameters for the system that are difficult to measure experimentally. In this paper, these values are obtained by comparing the PIC results with experimental data for a large-diode-gap MITL. By decreasing the diode gap, we find that all PIC results agree well with experimental data as long as the MITL operates in self-limited flow, no matter how large the diode gap is. However, when the diode gap decreases to a threshold, the self-limited flow transitions to a load-limited flow. In this situation, the PIC results no longer agree with the experimental data, due to anode plasma expansion in the diode load. This disagreement is used to estimate the plasma expansion speed.
NASA Technical Reports Server (NTRS)
Hanold, Gregg T.; Hanold, David T.
2010-01-01
This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament 2004 (UT2004) game engine to provide the simulation environment in which the differences between the routes taken by the human player and those of a Synthetic Agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm can be compared. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, player, and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm allows the BOT to more accurately mimic the routes taken by a human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
The Development of Layered Photonic Band Gap Structures Using a Micro-Transfer Molding Technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutherland, Kevin Jerome
Photonic band gap (PBG) crystals are periodic dielectric structures that manipulate electromagnetic radiation in a manner similar to semiconductor devices manipulating electrons. Whereas a semiconductor material exhibits an electronic band gap in which electrons cannot exist, a photonic crystal containing a photonic band gap similarly does not allow the propagation of specific frequencies of electromagnetic radiation. This phenomenon results from the destructive Bragg diffraction interference that a wave propagating at a specific frequency experiences because of the periodic change in dielectric permittivity. This gives rise to a variety of optical applications for improving the efficiency and effectiveness of opto-electronic devices. These applications are reviewed later. Several methods currently used to fabricate photonic crystals are also discussed in detail. This research involves a layer-by-layer micro-transfer molding (µTM) and stacking method to create three-dimensional FCC structures of epoxy or titania. The structures, once reduced significantly in size, can be infiltrated with an organic gain medium and stacked on a semiconductor to improve the efficiency of an electronically pumped light-emitting diode. Photonic band gap structures have been proven to effectively create a band gap for certain frequencies of electromagnetic radiation in the microwave and near-infrared ranges. The objective of this research project was originally two-fold: to fabricate a three-dimensional (3-D) structure of a size scaled to prohibit electromagnetic propagation within the visible wavelength range, and then to characterize that structure using laser dye emission spectra. As a master mold has not yet been developed for the micro-transfer molding technique in the visible range, the research was limited to scaling down the length scale as much as possible with the currently available technology and characterizing these structures with other methods.
Structural plasticity mediates distinct GAP-dependent GTP hydrolysis mechanisms in Rab33 and Rab5.
Majumdar, Soneya; Acharya, Abhishek; Prakash, Balaji
2017-12-01
The classical GTP hydrolysis mechanism, as seen in Ras, employs a catalytic glutamine provided in cis by the GTPase and an arginine supplied in trans by a GTPase activating protein (GAP). The key idea emergent from a large body of research on small GTPases is that GTPases employ a variety of different hydrolysis mechanisms; evidently, these variations permit diverse rates of GTPase inactivation, crucial for temporal regulation of different biological processes. Recently, we unified these variations and argued that a steric clash between active site residues (corresponding to positions 12 and 61 of Ras) governs whether a GTPase utilizes the cis-Gln or the trans-Gln (from the GAP) for catalysis. As the cis-Gln encounters a steric clash, the Rab GTPases employ the so-called dual finger mechanism where the interacting GAP supplies a trans-Gln for catalysis. Using experimental and computational methods, we demonstrate how the cis-Gln of Rab33 overcomes the steric clash when it is stabilized by a residue in the vicinity. In effect, this demonstrates how both cis-Gln- and trans-Gln-mediated mechanisms could operate in the same GTPase in different contexts, i.e. depending on the GAP that regulates its action. Interestingly, in the case of Rab5, which possesses a higher intrinsic GTP hydrolysis rate, a similar stabilization of the cis-Gln appears to overcome the steric clash. Taken together with the mechanisms seen for Rab1, it is evident that the observed variations in Rab and their GAP partners allow structural plasticity, or in other words, the choice of different catalytic mechanisms. © 2017 Federation of European Biochemical Societies.
Le, Aurora B; Hoboy, Selin; Germain, Anne; Miller, Hal; Thompson, Richard; Herstein, Jocelyn J; Jelden, Katelyn C; Beam, Elizabeth L; Gibbs, Shawn G; Lowe, John J
2018-02-01
The recent Ebola outbreak led to the development of Ebola virus disease (EVD) best practices in clinical settings. However, after the care of EVD patients, proper medical waste management and disposal was identified as a crucial component of containing the virus. Category A waste (contaminated with EVD and other highly infectious pathogens) is strictly regulated by governmental agencies, which left only a few facilities willing to accept the waste. A pilot survey was administered to determine whether U.S. medical waste facilities are prepared to handle or transport category A waste, and to determine waste workers' current extent of training to handle highly infectious waste. Sixty-eight percent of survey respondents indicated they had not determined if their facility would accept category A waste. Of those that had acquired a special permit, 67% had yet to modify their permit since the EVD outbreak. This pilot survey underscores gaps in the medical waste industry's capacity to handle and respond to category A waste. Furthermore, this study affirms reports that a limited number of processing facilities are capable of or willing to accept category A waste. Developing proper management of infectious disease materials is essential to close the gaps identified, so that states and governmental entities can act accordingly based on the regulations and guidance developed, and to ensure public safety. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Blast-wave density measurements
NASA Astrophysics Data System (ADS)
Ritzel, D. V.
Applications of a densitometer to obtain time-resolved data on the total density in blast-wave flows are described. A beta-source (promethium-147) is separated by a gap from a scintillator and a photomultiplier tube (PMT). Attenuation of the radiation beam by the passing blast wave is due to the total density in the gap volume during the wave passage. Signal conditioning and filtering methods permit the system to output linearized data. Results are provided from use of the system to monitor blast waves emitted by detonation of a 10.7 m diameter fiberglass sphere containing 609 tons of ammonium nitrate/fuel oil at a 50.6 m height. Blast wave density data are provided for peak overpressure levels of 245, 172 and 70 kPa and distances of 183, 201 and 314 m from ground zero. Data resolution was of high enough quality to encourage efforts to discriminate dust and gasdynamic phenomena within passing blast waves.
NASA Astrophysics Data System (ADS)
Rinott, Shahar; Ribak, Amit; Chashka, Khanan; Randeria, Mohit; Kanigel, Amit
The crossover from Bardeen-Cooper-Schrieffer (BCS) superconductivity to Bose-Einstein condensation (BEC) has never been realized in quantum materials. It is difficult to realize because, unlike in ultracold atoms, one cannot tune the pairing interaction. We realize the BCS-BEC crossover in a nearly compensated semimetal, Fe1+ySexTe1-x, by tuning the Fermi energy ɛF via chemical doping, which permits us to systematically change Δ/ɛF from 0.16 to 0.50, where Δ is the superconducting (SC) gap. We use angle-resolved photoemission spectroscopy to measure the Fermi energy, the SC gap, and characteristic changes in the SC-state electronic dispersion as the system evolves from the BCS to the BEC regime. Our results raise important questions about the crossover in multi-band superconductors that go beyond those addressed in the context of cold atoms.
The section TlInSe2-TlSbSe2 of the system Tl-In-Sb-Se
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guseinov, G.D.; Chapanova, L.M.; Mal'sagov, A.U.
1985-09-01
The ternary compounds A(I)B(III)C(VI)2 (A(I) is univalent Tl; B(III) is Ga or In; and C(VI) is S, Se, or Te) form a class of semiconductors with a large number of different gap widths. The compounds crystallize in the chalcopyrite structure. Solid solutions based on these compounds, which permit smooth variation of the gap width and other physical parameters over wide limits, are of great interest. The authors synthesized the compounds TlInSe2 and TlSbSe2 from the starting materials Tl-000, In-000, Sb-000, and Se-OSCh-17-4 by direct fusion of the components, taken in a stoichiometric ratio, in quartz ampules evacuated to 1.3 × 10^-3 Pa and sealed.
NASA Astrophysics Data System (ADS)
Angot, E.; Huang, B.; Levelut, C.; Le Parc, R.; Hermet, P.; Pereira, A. S.; Aquilanti, G.; Frapper, G.; Cambon, O.; Haines, J.
2017-08-01
α-Quartz-type gallium phosphate and representative compositions in the AlPO4-GaPO4 solid solution were studied by x-ray powder diffraction and absorption spectroscopy, Raman scattering, and first-principles calculations up to pressures of close to 30 GPa. A phase transition to a metastable orthorhombic high-pressure phase, along with some of the stable orthorhombic Cmcm CrVO4-type material, is found to begin at 9 GPa at 320 °C in GaPO4. In the case of the AlPO4-GaPO4 solid solution at room temperature, only the metastable orthorhombic phase was obtained above 10 GPa. The possible crystal structures of the high-pressure forms of GaPO4 were predicted from first-principles calculations and the evolutionary algorithm USPEX. A predicted orthorhombic structure with space group Pmn21, with gallium in sixfold and phosphorus in fourfold coordination, was found to be in the best agreement with the combined experimental data from x-ray diffraction and absorption and Raman spectroscopy. The method proves very powerful for better understanding the competition between different phase-transition pathways at high pressure.
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced: the problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed, and an algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison with results from entropy theory shows that the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size; the state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
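The state space whose size the thesis estimates is the PN's reachability set, which can be enumerated (when finite) by breadth-first search over markings. A toy sketch with a hypothetical three-place net:

```python
from collections import deque

# Toy Petri net: places p0..p2; each transition maps consumed tokens
# to produced tokens (a hypothetical net, just to show the search).
transitions = [
    ({0: 1}, {1: 1}),          # t0: take 1 token from p0, put 1 in p1
    ({1: 1}, {2: 1}),          # t1: p1 -> p2
    ({2: 1}, {0: 1}),          # t2: p2 -> p0
]

def reachable(initial):
    """Enumerate the reachability set (state space) by BFS over markings."""
    seen, queue = {initial}, deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions:
            if all(m[p] >= n for p, n in pre.items()):   # transition enabled?
                m2 = list(m)
                for p, n in pre.items():
                    m2[p] -= n                           # consume tokens
                for p, n in post.items():
                    m2[p] += n                           # produce tokens
                m2 = tuple(m2)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

print(len(reachable((1, 0, 0))))   # 3 markings for this simple cycle
```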
Automated Cervical Screening and Triage, Based on HPV Testing and Computer-Interpreted Cytology.
Yu, Kai; Hyun, Noorie; Fetterman, Barbara; Lorey, Thomas; Raine-Bennett, Tina R; Zhang, Han; Stamps, Robin E; Poitras, Nancy E; Wheeler, William; Befano, Brian; Gage, Julia C; Castle, Philip E; Wentzensen, Nicolas; Schiffman, Mark
2018-04-11
State-of-the-art cervical cancer prevention includes human papillomavirus (HPV) vaccination among adolescents and screening/treatment of cervical precancer (CIN3/AIS and, less strictly, CIN2) among adults. HPV testing provides sensitive detection of precancer but, to reduce overtreatment, secondary "triage" is needed to predict women at highest risk. Those with the highest-risk HPV types or abnormal cytology are commonly referred to colposcopy; however, expert cytology services are critically lacking in many regions. To permit completely automatable cervical screening/triage, we designed and validated a novel triage method, a cytologic risk score algorithm based on computer-scanned liquid-based slide features (FocalPoint, BD, Burlington, NC). We compared it with abnormal cytology in predicting precancer among 1839 women testing HPV positive (HC2, Qiagen, Germantown, MD) in 2010 at Kaiser Permanente Northern California (KPNC). Precancer outcomes were ascertained by record linkage. As additional validation, we compared the algorithm prospectively with cytology results among 243 807 women screened at KPNC (2016-2017). All statistical tests were two-sided. Among HPV-positive women, the algorithm matched the triage performance of abnormal cytology. Combined with HPV16/18/45 typing (Onclarity, BD, Sparks, MD), the automatable strategy referred 91.7% of HPV-positive CIN3/AIS cases to immediate colposcopy while deferring 38.4% of all HPV-positive women to one-year retesting (compared with 89.1% and 37.4%, respectively, for typing and cytology triage). In the 2016-2017 validation, the predicted risk scores strongly correlated with cytology (P < .001). High-quality cervical screening and triage performance is achievable using this completely automated approach. Automated technology could permit extension of high-quality cervical screening/triage coverage to currently underserved regions.
Cloud and Aerosol Retrieval for the 2001 GLAS Satellite Lidar Mission
NASA Technical Reports Server (NTRS)
Hart, William D.; Palm, Stephen P.; Spinhirne, James D.
2000-01-01
The Geoscience Laser Altimeter System (GLAS) is scheduled for launch in July of 2001 aboard the Ice, Cloud and Land Elevation Satellite (ICESAT). In addition to being a precision altimeter for mapping the height of the Earth's ice sheets, GLAS will be an atmospheric lidar, sensitive enough to detect gaseous, aerosol, and cloud backscatter signals, at horizontal and vertical resolutions of 175 and 75 m, respectively. GLAS will be the first lidar to produce temporally continuous atmospheric backscatter profiles with nearly global coverage (94-degree orbital inclination). With a projected operational lifetime of five years, GLAS will collect approximately six billion lidar return profiles. The large volume of data dictates that operational analysis algorithms, which need to keep pace with the data yield of the instrument, must be efficient. We therefore need to evaluate the ability of operational algorithms to detect atmospheric constituents that affect global climate, and to quantify, in a statistical manner, the accuracy and precision of GLAS cloud and aerosol observations. Our poster presentation will show the results of modeling studies that are designed to reveal the effectiveness and sensitivity of GLAS in detecting various atmospheric cloud and aerosol features. The studies consist of analyzing simulated lidar returns. Simulation cases are constructed either from idealized renditions of atmospheric cloud and aerosol layers or from data obtained by the NASA ER-2 Cloud Lidar System (CLS). The fabricated renditions permit quantitative evaluation of the operational algorithms' retrieval of cloud and aerosol parameters, while the use of observational data permits evaluation of performance under actual atmospheric conditions. The intended outcome of the presentation is that the climatology community will be able to use the results of these studies to evaluate and quantify the impact of GLAS data upon atmospheric modeling efforts.
A unifying framework for rigid multibody dynamics and serial and parallel computational issues
NASA Technical Reports Server (NTRS)
Fijany, Amir; Jain, Abhinandan
1989-01-01
A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed, and their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz. O(n), O(n^2), and O(n^3), for the solution of the dynamics problem are investigated. The researchers begin with the derivation of the O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed, and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC, with time and processor bounds of O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. The researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.
1989-11-01
considerable promise is a variation of the familiar Lempel-Ziv adaptive data compression scheme that permits a straightforward mapping to hardware ... types of data. The UNIX "compress" implementation is based upon Terry Welch's 1984 variation of the Lempel-Ziv method (LZW). One flaw lies in the fact ... or more; it must effectively compress all types of data (i.e., the algorithm must be universal); the implementation must be contained within a small
NASA Technical Reports Server (NTRS)
Bailey, David H.; Borwein, Jonathan M.; Borwein, Peter B.; Plouffe, Simon
1996-01-01
This article gives a brief history of the analysis and computation of the mathematical constant Pi=3.14159 ..., including a number of the formulas that have been used to compute Pi through the ages. Recent developments in this area are then discussed in some detail, including the recent computation of Pi to over six billion decimal digits using high-order convergent algorithms, and a newly discovered scheme that permits arbitrary individual hexadecimal digits of Pi to be computed.
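For concreteness, the digit-extraction scheme alluded to, now known as the Bailey-Borwein-Plouffe (BBP) formula, fits in a few lines: it yields an individual hexadecimal digit of Pi without computing the preceding digits, using modular exponentiation to keep the finite sum's terms small. The sketch below is a minimal illustration, not an optimized implementation.

```python
def bbp_hex_digit(n):
    """Hex digit of pi at position n after the point (pi = 3.243F6A88...)."""
    def series(j):
        # Fractional part of sum_{k<=n} 16^(n-k)/(8k+j), via modular powers.
        s = 0.0
        for k in range(n + 1):
            denom = 8 * k + j
            s = (s + pow(16, n - k, denom) / denom) % 1.0
        # Infinite tail: terms shrink by 1/16 per step, so a few suffice.
        t = sum(16.0 ** (n - k) / (8 * k + j) for k in range(n + 1, n + 16))
        return (s + t) % 1.0

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return "0123456789ABCDEF"[int(16 * x)]

assert bbp_hex_digit(0) == "2" and bbp_hex_digit(1) == "4"
```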
Stereovision Imaging in Smart Mobile Phone Using Add on Prisms
NASA Astrophysics Data System (ADS)
Bar-Magen Numhauser, Jonathan; Zalevsky, Zeev
2014-03-01
In this work we present the use of a prism-based add-on component installed on top of a smartphone to achieve stereovision capabilities under the iPhone mobile operating system. Through these components, combined with an appropriate application programming interface and mathematical algorithms, the obtained results permit analysis of possible enhancements and new uses for such a system in a variety of areas, including medicine and communications.
Estimation of mating system parameters in plant populations using marker loci with null alleles.
Ross, H A
1986-06-01
An Expectation-Maximization (EM) procedure is presented that extends the method of Cheliak et al. (1983) for maximum-likelihood estimation of mating system parameters in mixed mating system models. The extension permits estimation of the rate of self-fertilization (s) and of allele frequencies (p_i) in outcrossing pollen at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by electrophoretic analysis of cohorts of progeny. The genotypes of maternal plants must be known. Explicit equations are given for cases in which the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example using actual data from Pinus banksiana illustrates the application of this EM algorithm to the estimation of mating system parameters using marker loci having both codominant and recessive alleles.
NASA Astrophysics Data System (ADS)
Anderson, D. V.; Koniges, A. E.; Shumaker, D. E.
1988-11-01
Many physical problems require the solution of coupled partial differential equations on three-dimensional domains. When the time scales of interest dictate an implicit discretization of the equations, a rather complicated global matrix system needs solution. The exact form of the matrix depends on the choice of spatial grids and on the finite element or finite difference approximations employed. CPDES3 allows each spatial operator to have 7-, 15-, 19-, or 27-point stencils, allows for general couplings between all of the component PDEs, and automatically generates the matrix structures needed to perform the algorithm. The resulting sparse matrix equation is solved by either the preconditioned conjugate gradient (CG) method or the preconditioned biconjugate gradient (BCG) algorithm. An arbitrary number of component equations is permitted, limited only by available memory. In the sub-band representation used, we generate an algorithm that is written compactly in terms of indirect indices and is vectorizable on some of the newer scientific computers.
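As a reminder of the workhorse solver named here, a minimal preconditioned conjugate gradient iteration is sketched below with dense NumPy arrays standing in for CPDES3's sub-band storage; the diagonal (Jacobi) preconditioner is an illustrative choice, not the code's.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned CG for symmetric positive-definite A; M_inv ~ A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv @ r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # new search direction
        rz = rz_new
    return x

# Jacobi (diagonal) preconditioning of a small SPD test system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = pcg(A, np.array([1.0, 2.0]), np.diag(1.0 / np.diag(A)))
```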
NASA Astrophysics Data System (ADS)
Anderson, D. V.; Koniges, A. E.; Shumaker, D. E.
1988-11-01
Many physical problems require the solution of coupled partial differential equations on two-dimensional domains. When the time scales of interest dictate an implicit discretization of the equations, a rather complicated global matrix system needs solution. The exact form of the matrix depends on the choice of spatial grids and on the finite element or finite difference approximations employed. CPDES2 allows each spatial operator to have 5- or 9-point stencils, allows for general couplings between all of the component PDEs, and automatically generates the matrix structures needed to perform the algorithm. The resulting sparse matrix equation is solved by either the preconditioned conjugate gradient (CG) method or the preconditioned biconjugate gradient (BCG) algorithm. An arbitrary number of component equations is permitted, limited only by available memory. In the sub-band representation used, we generate an algorithm that is written compactly in terms of indirect indices and is vectorizable on some of the newer scientific computers.
Kodiak: An Implementation Framework for Branch and Bound Algorithms
NASA Technical Reports Server (NTRS)
Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas
2015-01-01
Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
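A toy version of the underlying idea, branch and bound with sound interval enclosures, is sketched below. The interval class and the `solve` driver are invented for illustration and are not Kodiak's API; the key property they mimic is that an enclosure which excludes zero lets a box be pruned safely.

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi
    def width(self):
        return self.hi - self.lo

def solve(f, box, tol=1e-6):
    """Return small boxes that may contain a zero of f; boxes whose
    enclosure provably excludes zero are discarded (soundness)."""
    stack, hits = [box], []
    while stack:
        x = stack.pop()
        if not f(x).contains_zero():
            continue                      # enclosure excludes zero: prune
        if x.width() < tol:
            hits.append((x.lo, x.hi))
        else:                             # branch: bisect the box
            mid = 0.5 * (x.lo + x.hi)
            stack += [Interval(x.lo, mid), Interval(mid, x.hi)]
    return hits

# Example: enclose the roots of x^2 - 2 on [-3, 3].
roots = solve(lambda x: x * x - Interval(2.0, 2.0), Interval(-3.0, 3.0))
```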
García Arroyo, Jose Luis; García Zapirain, Begoña
2014-01-01
By means of this study, a detection algorithm for the "pigment network" in dermoscopic images is presented, one of the most relevant indicators in the diagnosis of melanoma. The design of the algorithm consists of two blocks. In the first one, a machine learning process is carried out, allowing the generation of a set of rules which, when applied over the image, permit the construction of a mask with the pixels candidates to be part of the pigment network. In the second block, an analysis of the structures over this mask is carried out, searching for those corresponding to the pigment network and making the diagnosis, whether it has pigment network or not, and also generating the mask corresponding to this pattern, if any. The method was tested against a database of 220 images, obtaining 86% sensitivity and 81.67% specificity, which proves the reliability of the algorithm. © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Duggin, M. J. (Principal Investigator); Piwinski, D.
1982-01-01
The use of NOAA AVHRR data to map and monitor vegetation types and conditions in near real time can be enhanced by using a portion of each GAC image that is larger than the central 25% now considered. Enlargement of the cloud-free image data set can permit development of a series of algorithms for correcting imagery for ground reflectance and for atmospheric scattering anisotropy within certain accuracy limits. Empirical correction algorithms used to normalize digital radiance or VIN data must contain factors for growth stage and for instrument spectral response. While it is not possible to correct for random fluctuations in target radiance, it is possible to estimate the radiance difference needed between targets in order to provide target discrimination and quantification within predetermined limits of accuracy. A major difficulty lies in the lack of documentation of the preprocessing algorithms used on AVHRR digital data.
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called the "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis guarantees that the upper and lower iterative value functions converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, without requiring existence criteria for the saddle-point equilibrium. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, and are proved not to be equivalent. Finally, simulation results and comparisons illustrate the performance of the present method.
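To convey the upper/lower iteration idea on the simplest possible stand-in, the sketch below runs value iteration for both orderings of min and max on a hypothetical finite zero-sum game. The paper's setting is nonlinear systems with function approximation; the dimensions, cost, and transition arrays here are invented for illustration.

```python
import numpy as np

# Hypothetical finite zero-sum Markov game: S states, U control actions,
# W disturbance actions; deterministic transitions T[s,u,w], cost C[s,u,w].
rng = np.random.default_rng(0)
S, U, W, gamma = 6, 3, 3, 0.9
T = rng.integers(0, S, size=(S, U, W))
C = rng.random((S, U, W))

def iterate(V, upper=True):
    Q = C + gamma * V[T]               # Q[s, u, w]
    if upper:                          # upper value: min over u of max over w
        return Q.max(axis=2).min(axis=1)
    return Q.min(axis=1).max(axis=1)   # lower value: max over w of min over u

V_up, V_lo = np.zeros(S), np.zeros(S)
for _ in range(200):
    V_up = iterate(V_up, upper=True)
    V_lo = iterate(V_lo, upper=False)
# V_lo <= V_up elementwise; the two coincide when a saddle point exists.
```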
Ladapo, Joseph A; Coles, Adrian; Dolor, Rowena J; Mark, Daniel B; Cooper, Lawton; Lee, Kerry L; Goldberg, Jonathan; Shapiro, Michael D; Hoffmann, Udo; Douglas, Pamela S
2017-09-29
To evaluate potential gaps in preventive medical therapy and healthy lifestyle practices among symptomatic patients with suspected coronary artery disease (CAD) seeing primary care physicians and cardiologists, and how gaps vary by sociodemographic characteristics and baseline cardiovascular risk. Cross-sectional study assessing potential preventive gaps. 10 003 symptomatic outpatients evaluated by primary care physicians, cardiologists or other specialists for suspected CAD in the PROspective Multicenter Imaging Study for Evaluation of chest pain (PROMISE) from 2010 to 2014. Primary measures were absence of an antihypertensive, statin or angiotensin-converting enzyme inhibitor/angiotensin receptor blocker for renal protection in patients with hypertension, dyslipidaemia or diabetes, respectively, and being sedentary, smoking or being obese. Preventive treatment gaps affected 14% of patients with hypertension, 36% of patients with dyslipidaemia and 32% of patients with diabetes. Overall, 49% of patients were sedentary, 18% currently smoked and 48% were obese. Women were significantly more likely to not take a statin for dyslipidaemia and to be sedentary. Patients with lower socioeconomic status were also significantly more likely to not take a statin. Compared with Whites, Blacks were significantly more likely to be obese, while Asians were less likely to smoke or be obese. High-risk patients sometimes experienced larger preventive care gaps than low-risk patients. For patients with dyslipidaemia, the presence of a treatment gap was associated with a higher risk of an adverse event (HR 1.35, 95% CI 1.02 to 1.82). Among contemporary, symptomatic patients with suspected CAD, significant gaps exist in preventive care and lifestyle practices, and high-risk patients sometimes had larger gaps. Differences by sex, age, race/ethnicity, socioeconomic status and geography are modest but contribute to disparities and have implications for improving population health. For patients with dyslipidaemia, the presence of a treatment gap was associated with a higher risk of an adverse event. ClinicalTrials.gov identifier NCT01174550. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Henkel, C.; Klimchitskaya, G. L.; Mostepanenko, V. M.
2018-03-01
We present a formalism based on first principles of quantum electrodynamics at nonzero temperature which permits us to calculate the Casimir-Polder interaction between an atom and a graphene sheet with arbitrary mass gap and chemical potential, including graphene-coated substrates. The free energy and force of the Casimir-Polder interaction are expressed via the polarization tensor of graphene in (2+1)-dimensional space-time in the framework of the Dirac model. The obtained expressions are used to investigate the influence of the chemical potential of graphene on the Casimir-Polder interaction. Computations are performed for an atom of metastable helium interacting with either a freestanding graphene sheet or a graphene-coated substrate made of amorphous silica. It is shown that the impacts of the nonzero chemical potential and the mass gap on the Casimir-Polder interaction are in opposite directions, by increasing and decreasing the magnitudes of the free energy and force, respectively. It turns out, however, that the temperature-dependent part of the Casimir-Polder interaction is decreased by a nonzero chemical potential, whereas the mass gap increases it compared to the case of undoped, gapless graphene. The physical explanation for these effects is provided. Numerical computations of the Casimir-Polder interaction are performed at various temperatures and atom-graphene separations.
A Data System for a Rapid Evaluation Class of Subscale Aerial Vehicle
NASA Technical Reports Server (NTRS)
Hogge, Edward F.; Quach, Cuong C.; Vazquez, Sixto L.; Hill, Boyd L.
2011-01-01
A low-cost, rapid evaluation, test aircraft is used to develop and test airframe damage diagnosis algorithms at Langley Research Center as part of NASA's Aviation Safety Program. The remotely operated subscale aircraft is instrumented with sensors to monitor structural response during flight. Data are collected for good and compromised airframe configurations to develop data-driven models for diagnosing airframe state. This paper describes the data acquisition system (DAS) of the rapid evaluation test aircraft. A PC/104 form factor DAS was developed to allow use of Matlab/Simulink simulation code in Langley's existing subscale aircraft flight test infrastructure. The small scale of the test aircraft permitted laboratory testing of the actual flight article under controlled conditions. The low cost and modularity of the DAS permitted adaptation to various flight experiment requirements.
Acceleration of the Smith-Waterman algorithm using single and multiple graphics processors
NASA Astrophysics Data System (ADS)
Khajeh-Saeed, Ali; Poole, Stephen; Blair Perot, J.
2010-06-01
Finding regions of similarity between two very long data streams is a computationally intensive problem referred to as sequence alignment. Alignment algorithms must allow for imperfect sequence matching with different starting locations and some gaps and errors between the two data sequences. Perhaps the most well known application of sequence matching is the testing of DNA or protein sequences against genome databases. The Smith-Waterman algorithm is a method for precisely characterizing how well two sequences can be aligned and for determining the optimal alignment of those two sequences. Like many applications in computational science, the Smith-Waterman algorithm is constrained by the memory access speed and can be accelerated significantly by using graphics processors (GPUs) as the compute engine. In this work we show that effective use of the GPU requires a novel reformulation of the Smith-Waterman algorithm. The performance of this new version of the algorithm is demonstrated using the SSCA#1 (Bioinformatics) benchmark running on one GPU and on up to four GPUs executing in parallel. The results indicate that for large problems a single GPU is up to 45 times faster than a CPU for this application, and the parallel implementation shows linear speed up on up to 4 GPUs.
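For reference, the recurrence being accelerated is compact; a plain CPU sketch of the Smith-Waterman local alignment score with a linear gap penalty (not the paper's GPU reformulation, and with illustrative scoring parameters) is:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i-1] == b[j-1] else mismatch
            H[i][j] = max(0,
                          H[i-1][j-1] + s,   # align a[i-1] with b[j-1]
                          H[i-1][j] + gap,   # gap in b
                          H[i][j-1] + gap)   # gap in a
            best = max(best, H[i][j])
    return best

assert smith_waterman("ACACACTA", "AGCACACA") > 0
```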
Vasilopoulou, Maria; Douvas, Antonios M; Georgiadou, Dimitra G; Palilis, Leonidas C; Kennou, Stella; Sygellou, Labrini; Soultati, Anastasia; Kostis, Ioannis; Papadimitropoulos, Giorgos; Davazoglou, Dimitris; Argitis, Panagiotis
2012-10-03
Molybdenum oxide is used as a low-resistance anode interfacial layer in applications such as organic light emitting diodes and organic photovoltaics. However, little is known about the correlation between its stoichiometry and electronic properties, such as work function and occupied gap states. In addition, despite the fact that the knowledge of the exact oxide stoichiometry is of paramount importance, few studies have appeared in the literature discussing how this stoichiometry can be controlled to permit the desirable modification of the oxide's electronic structure. This work aims to investigate the beneficial role of hydrogenation (the incorporation of hydrogen within the oxide lattice) versus oxygen vacancy formation in tuning the electronic structure of molybdenum oxides while maintaining their high work function. A large improvement in the operational characteristics of both polymer light emitting devices and bulk heterojunction solar cells incorporating hydrogenated Mo oxides as hole injection/extraction layers was achieved as a result of favorable energy level alignment at the metal oxide/organic interface and enhanced charge transport through the formation of a large density of gap states near the Fermi level.
Algorithm Development for a Real-Time Military Noise Monitor
2006-03-24
...Duration; ESLM: Enhanced Sound Level Meter; ERDC-CERL: Engineer Research and Development Center/Construction Engineering Research Laboratory; FFT: Fast Fourier Transform; FTIG: Fort Indiantown Gap; Kurt: Kurtosis; LD: Larson Davis; Leq: Equivalent Sound Level; L8eq: 8-hr Equivalent Sound Level; Lpk: Peak Sound Level; m: Spectral Slope; MCBCL: Marine Corps Base Camp Lejeune; Neg: Number of negative samples; NI: National...
ERIC Educational Resources Information Center
Grandell, Linda
2005-01-01
Computer science is becoming increasingly important in our society. Meta skills, such as problem solving and logical and algorithmic thinking, are emphasized in every field, not only in the natural sciences. Still, largely due to gaps in tuition, common misunderstandings exist about the true nature of computer science. These are especially…
2D photonic crystal complete band gap search using a cyclic cellular automaton refinement
NASA Astrophysics Data System (ADS)
González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.
2014-11-01
We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE), used to perform an ordered search for full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is posed as a combinatorial optimization over the elements of a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in the array. A block-iterative frequency-domain method was used to compute the FPBGs of a PC, when present. DE has proved useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of the algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information about suboptimal configurations, and we made a statistical study of how the result is affected by disorder at the borders of the structure, compared with a previous work that used a genetic algorithm.
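A minimal continuous-domain DE/rand/1/bin loop conveys the evolutionary search used here; the binary photonic-crystal encoding would require a discretized variant, and all parameter values below are illustrative.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimize f over box `bounds` with the DE/rand/1/bin strategy."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    X = rng.uniform(lo, hi, size=(pop, d))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Mutate: combine three distinct candidates other than i.
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one mutated gene.
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:               # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

# Example: minimize the sphere function on [-5, 5]^4.
best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)),
                                        [(-5, 5)] * 4)
```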
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the most widely used forming process for producing seamless rings, which are applied in miscellaneous industries such as the energy sector, aerospace technology, and the automotive industry. Because the ring is formed simultaneously in two opposite rolling gaps and ring rolling is a mass forming process, different errors can occur during the rolling process. Ring climbing is one of the most frequently occurring process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. Therefore, a common strategy is to roll a slightly bigger ring, so that randomly occurring process errors can be reduced afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of their ring rolling machine to enable the recognition and measurement of climbing rings and thereby reduce the additional material. This paper presents the algorithm that enables the image processing system to detect a climbing ring and that ensures comparably reliable results for the measurement of the climbing height of the rings.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
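A minimal sketch of the iteration studied, functional gradient descent in an RKHS with early stopping, is below. It assumes a Welsch-type loss, one member of the family defined by a windowing function G and scale σ, with derivative l'(r) = r exp(-r^2/σ^2); the step size and data are illustrative.

```python
import numpy as np

def robust_kernel_gd(K, y, sigma=1.0, steps=200):
    """Functional gradient descent on the Gram matrix K; the number of
    steps (early stopping) plays the role of the regularizer."""
    n = K.shape[0]
    eta = n / np.linalg.eigvalsh(K).max()        # safe step size
    alpha = np.zeros(n)
    for _ in range(steps):
        r = K @ alpha - y                        # residuals f(x_i) - y_i
        grad = r * np.exp(-(r / sigma) ** 2)     # l'(r) for the Welsch loss
        alpha -= (eta / n) * grad
    return alpha                                 # f(x) = sum_i alpha_i k(x, x_i)

# Tiny usage: RBF Gram matrix on 1-D inputs with one gross outlier.
X = np.linspace(0, 1, 30)[:, None]
K = np.exp(-((X - X.T) ** 2) / 0.1)
y = np.sin(4 * X[:, 0])
y[5] += 5.0                                      # outlier the robust loss downweights
alpha = robust_kernel_gd(K, y, sigma=1.0)
```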
Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F
2011-03-03
The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
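For contrast with its quantum generalization, the classical Metropolis rule the authors refer to fits in a few lines; the sketch below samples a periodic 1-D Ising chain, with the model and parameter values chosen purely for illustration.

```python
import numpy as np

def metropolis_ising(n=64, beta=0.7, sweeps=1000, seed=0):
    """Single-spin-flip Metropolis sampling of a periodic 1-D Ising chain."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    for _ in range(sweeps * n):
        i = rng.integers(n)
        # Energy cost of flipping spin i (E = -sum_i s_i * s_{i+1}).
        dE = 2 * s[i] * (s[(i - 1) % n] + s[(i + 1) % n])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):   # acceptance rule
            s[i] = -s[i]
    return s
```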
Fast Transformation of Temporal Plans for Efficient Execution
NASA Technical Reports Server (NTRS)
Tsamardinos, Ioannis; Muscettola, Nicola; Morris, Paul
1998-01-01
Temporal plans permit significant flexibility in specifying the occurrence time of events. Plan execution can make good use of that flexibility. However, the advantage of execution flexibility is counterbalanced by the cost during execution of propagating the time of occurrence of events throughout the flexible plan. To minimize execution latency, this propagation needs to be very efficient. Previous work showed that every temporal plan can be reformulated as a dispatchable plan, i.e., one for which propagation to immediate neighbors is sufficient. A simple algorithm was given that finds a dispatchable plan with a minimum number of edges in cubic time and quadratic space. In this paper, we focus on the efficiency of the reformulation process and improve on that result. A new algorithm is presented that uses linear space and has time complexity equivalent to Johnson's algorithm for all-pairs shortest-path problems. Experimental evidence confirms the practical effectiveness of the new algorithm. For example, on a large commercial application, the performance is improved by at least two orders of magnitude. We further show that the dispatchable plan, already minimal in the total number of edges, can also be made minimal in the maximum number of edges incoming or outgoing at any node.
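The local-propagation property that dispatchability buys can be sketched directly. In the toy code below, `dist[u][v] = w` encodes the binary constraint T_v - T_u <= w, and executing an event updates only its immediate neighbors' time windows; the data structures are invented for illustration.

```python
def execute(event, t, dist, windows):
    """Execute `event` at time t and propagate to immediate neighbors only."""
    lo, hi = windows[event]
    assert lo <= t <= hi, "t is outside the event's current time window"
    windows[event] = (t, t)
    for v, w in dist.get(event, {}).items():      # outgoing: T_v <= t + w
        vlo, vhi = windows[v]
        windows[v] = (vlo, min(vhi, t + w))
    for u, row in dist.items():                   # incoming: T_u >= t - w
        if u != event and event in row:
            ulo, uhi = windows[u]
            windows[u] = (max(ulo, t - row[event]), uhi)

# Toy plan: B must follow A by between 5 and 10 time units.
windows = {"A": (0, 0), "B": (0, float("inf"))}
dist = {"A": {"B": 10}, "B": {"A": -5}}           # B - A <= 10, A - B <= -5
execute("A", 0, dist, windows)                    # windows["B"] becomes (5, 10)
```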
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; when such initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With this ranking, the genetic algorithm encourages solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
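The non-dominated sorting step admits a compact stand-in; the sketch below ranks candidate (flight time, propellant mass) pairs by how many other candidates dominate them, with lower ranks treated as fitter. This is an illustrative Pareto ranking, not the document's exact fitness assignment.

```python
def pareto_ranks(points):
    """points: list of (flight_time, propellant_mass); both minimized."""
    def dominates(p, q):
        # p dominates q if it is no worse in every objective and strictly
        # better in at least one.
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [sum(dominates(q, p) for q in points) for p in points]

ranks = pareto_ranks([(10, 5), (12, 4), (11, 6), (9, 7)])
# (11, 6) is dominated by (10, 5); the others are mutually non-dominated.
```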
Effects of window size and shape on accuracy of subpixel centroid estimation of target images
NASA Technical Reports Server (NTRS)
Welch, Sharon S.
1993-01-01
A new algorithm is presented for increasing the accuracy of subpixel centroid estimation of (nearly) point target images in cases where the signal-to-noise ratio is low and the signal amplitude and shape vary from frame to frame. In the algorithm, the centroid is calculated over a data window that is matched in width to the image distribution. Fourier analysis is used to explain the dependency of the centroid estimate on the size of the data window, and simulation and experimental results are presented which demonstrate the effects of window size for two different noise models. The effects of window shape were also investigated for uniform and Gaussian-shaped windows. The new algorithm was developed to improve the dynamic range of a close-range photogrammetric tracking system that provides feedback for control of a large gap magnetic suspension system (LGMSS).
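A one-dimensional sketch of the matched-window idea: compute the centroid only over a window whose width matches the image distribution. The crude background-removal step is an assumption added for illustration, not part of the paper's algorithm.

```python
import numpy as np

def windowed_centroid(signal, center, half_width):
    """1-D centroid over a window about an integer guess `center`.
    Assumes the spot dominates the window."""
    lo = max(0, center - half_width)
    hi = min(len(signal), center + half_width + 1)
    idx = np.arange(lo, hi)
    w = signal[lo:hi].astype(float)
    w = w - w.min()                 # simple background subtraction (assumption)
    return float((idx * w).sum() / w.sum())

# Example: a noiseless spot spanning bins 8..12.
spot = np.zeros(20)
spot[8:13] = [0.2, 0.8, 1.0, 0.9, 0.3]
est = windowed_centroid(spot, 10, 3)
```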
Real-time image annotation by manifold-based biased Fisher discriminant analysis
NASA Astrophysics Data System (ADS)
Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming
2008-01-01
Automatic linguistic annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: (1) the Small Sample Size (3S) problem in keyword classifier/model learning; and (2) most annotation algorithms cannot be extended to real-time online use due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, co-training based manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from existing annotation methods, MBFDA views image annotation from a novel eigen semantic feature (corresponding to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN expansion; 2. one-to-all SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.
Batool, Nazre; Chellappa, Rama
2014-09-01
Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. Detection of wrinkles/imperfections allows these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with a texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. A Markov random field model is then used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely rather than being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
Automatic detection of zebra crossings from mobile LiDAR data
NASA Astrophysics Data System (ADS)
Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.
2015-07-01
An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for application to road management purposes. The algorithm consists of several subsequent processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to remove noisy points, and mathematical morphology to fill the gaps between the pixels at the border of white marks. Once a road marking is detected, its position is calculated. This information is valuable for inventory purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. The test showed a completeness of 83%. Non-detected marks mainly result from paint deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.
An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm
Lu, Guangquan; Xiong, Ying; Wang, Yunpeng
2016-01-01
The schedule of urban road network recovery after rainstorms, snow and other bad weather, traffic incidents, and other daily events is essential. However, limited studies have investigated this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources, based on the greedy algorithm. Critical links are given priority in repair, according to the basic concept of the greedy algorithm. In this study, the critical link for the current network state is defined as the link whose restoration minimizes the ratio of the system-wide travel time of the current network to that of the worst-case network. We re-evaluate the importance of damaged links after each repair is completed; that is, the critical link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state, based on the greedy algorithm, to obtain the optimal schedule. The algorithm can quickly obtain a schedule even when the road network is large, because the greedy approach reduces computational complexity. We prove that, in theory, the greedy algorithm obtains the optimal solution to this problem. The algorithm is also demonstrated on the Sioux Falls network. The problem discussed in this paper is highly significant for dealing with urban road network restoration. PMID:27768732
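The greedy rule can be sketched independently of the traffic model. Below, `travel_time` is an assumed user-supplied evaluation (e.g., from traffic assignment) of system-wide travel time for a given set of restored links; the toy delays are invented for illustration.

```python
def greedy_schedule(damaged, travel_time):
    """Repeatedly restore the damaged link whose repair yields the lowest
    system-wide travel time for the current network state."""
    repaired, order = set(), []
    while len(repaired) < len(damaged):
        best = min((l for l in damaged if l not in repaired),
                   key=lambda l: travel_time(repaired | {l}))
        repaired.add(best)
        order.append(best)
    return order

# Toy usage: three damaged links with additive (illustrative) delays.
delays = {"a": 5.0, "b": 2.0, "c": 9.0}
tt = lambda repaired: 100.0 - sum(delays[l] for l in repaired)
print(greedy_schedule(["a", "b", "c"], tt))   # repairs "c" first
```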
Faster Parameterized Algorithms for Minor Containment
NASA Astrophysics Data System (ADS)
Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.
The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. The theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory was mainly of theoretical importance; however, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. To implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^(k^2) · (h+k-1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [JCTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^((2k+1)·log k) · h^(2k) · 2^(2h^2) · m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing; namely, it runs in time 2^(O(k)) · h^(2k) · 2^(O(h)) · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.
Far-infrared spectrophotometer for astronomical observations
NASA Technical Reports Server (NTRS)
Moseley, H.; Silverberg, R. F.
1981-01-01
A liquid-helium-cooled far-infrared spectrophotometer was built and used to make low-resolution observations of the continua of several kinds of astronomical objects using the Kuiper Airborne Observatory. This instrument fills a gap, in both sensitivity to continuum sources and spectral resolution, between broadband photometers with λ/Δλ ≈ 1 and spectrometers with λ/Δλ > 50. While designed primarily to study planetary nebulae, the instrument permits study of the shape of the continua of many weak sources which cannot easily be observed with high-resolution systems.
Classical Statistics and Statistical Learning in Imaging Neuroscience
Bzdok, Danilo
2017-01-01
Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using t-tests and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications to rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated in different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Cooper, J. E.; Wright, J. R.
1987-01-01
A modification to the Eigensystem Realization Algorithm (ERA) for modal parameter identification is presented in this paper. The ERA minimum order realization approach using singular value decomposition is combined with the philosophy of the Correlation Fit method in state space form such that response data correlations rather than actual response values are used for modal parameter identification. This new method, the ERA using data correlations (ERA/DC), reduces bias errors due to noise corruption significantly without the need for model overspecification. This method is tested using simulated five-degree-of-freedom system responses corrupted by measurement noise. It is found for this case that, when model overspecification is permitted and a minimum order solution obtained via singular value truncation, the results from the two methods are of similar quality.
Systems aspects of COBE science data compression
NASA Technical Reports Server (NTRS)
Freedman, I.; Boggess, E.; Seiler, E.
1993-01-01
A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE has exceeded its planned storage by a large and growing factor, and data retrieval significantly affects the processing, delaying the availability of data for scientific use and software testing. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.
Full cycle rapid scan EPR deconvolution algorithm.
Tseytlin, Mark
2017-08-01
Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit, at which the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method; for this reason, only a factor of two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. It is important for practical use that faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan. Copyright © 2017 Elsevier Inc. All rights reserved.
Informatics methods to enable sharing of quantitative imaging research data.
Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L
2012-11-01
The National Cancer Institute Quantitative Research Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable sharing data and to promote reuse of quantitative imaging data in the community. We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.
He, Xiao-Ou; D'Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan
2015-03-12
Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. We examined how two different SIAs may influence decision making among primary-care physicians. Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a 'normal' interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently.
Indirect Identification of Linear Stochastic Systems with Known Feedback Dynamics
NASA Technical Reports Server (NTRS)
Huang, Jen-Kuang; Hsiao, Min-Hung; Cox, David E.
1996-01-01
An algorithm is presented for identifying a state-space model of linear stochastic systems operating under a known feedback controller. In this algorithm, only the reference input and output of closed-loop data are required; no feedback signal needs to be recorded. The overall closed-loop system dynamics is first identified. Then a recursive formulation is derived to compute the open-loop plant dynamics from the identified closed-loop system dynamics and the known feedback controller dynamics. The controller can be a dynamic or constant-gain full-state feedback controller. Numerical simulations and test data of a highly unstable large-gap magnetic suspension system are presented to demonstrate the feasibility of this indirect identification method.
A Comparative Study of Interferometric Regridding Algorithms
NASA Technical Reports Server (NTRS)
Hensley, Scott; Safaeinili, Ali
1999-01-01
The paper discusses regridding options: (1) The problem of interpolating data that are not sampled on a uniform grid, that are noisy, and that contain gaps is a difficult one. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor: fast and easy but shows some artifacts in shaded-relief images. (b) Simplicial interpolator: uses the plane through the three points containing the point where interpolation is required; reasonably fast and accurate. (c) Convolutional: uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting: uses the height data centered in a box about a given point and does a weighted least-squares surface fit.
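Option (b), the simplicial interpolator, has a short closed form via barycentric coordinates; a sketch, assuming a non-degenerate triangle, is below.

```python
import numpy as np

def plane_interp(tri_xy, tri_z, p):
    """Evaluate at p the plane through three vertices (tri_xy, tri_z)."""
    T = np.column_stack([tri_xy[1] - tri_xy[0], tri_xy[2] - tri_xy[0]])
    l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - tri_xy[0])
    l0 = 1.0 - l1 - l2                      # barycentric weights sum to 1
    return l0 * tri_z[0] + l1 * tri_z[1] + l2 * tri_z[2]

# Example: plane z = x + 2y through the unit right triangle.
tri_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = plane_interp(tri_xy, np.array([0.0, 1.0, 2.0]), (0.25, 0.25))   # 0.75
```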
Autonomous Flight Safety System - Phase III
NASA Technical Reports Server (NTRS)
2008-01-01
The Autonomous Flight Safety System (AFSS) is a joint KSC and Wallops Flight Facility project that uses tracking and attitude data from onboard Global Positioning System (GPS) and inertial measurement unit (IMU) sensors and configurable rule-based algorithms to make flight termination decisions. AFSS objectives are to increase launch capabilities by permitting launches from locations without range safety infrastructure, reduce costs by eliminating some downrange tracking and communication assets, and reduce the reaction time for flight termination decisions.
[The etiological differentiation of neuromuscular produced dysphagia by x-ray cinematography].
Brühlmann, W
1991-12-01
850 patients with dysphagia were examined by x-ray cinematography. On the basis of these examinations the normal events of swallowing are compared with the abnormalities observed. The technique is described. An algorithm has been developed depending on the presence of symmetry or asymmetry of the abnormalities and on muscle tone, which permits classification of the various aetiological groups. In addition, specific features of individual diseases often make it possible to arrive at a definite diagnosis.
Powered Descent Trajectory Guidance and Some Considerations for Human Lunar Landing
NASA Technical Reports Server (NTRS)
Sostaric, Ronald R.
2007-01-01
The Autonomous Precision Landing and Hazard Detection and Avoidance Technology development (ALHAT) will enable an accurate (better than 100m) landing on the lunar surface. This technology will also permit autonomous (independent from ground) avoidance of hazards detected in real time. A preliminary trajectory guidance algorithm capable of supporting these tasks has been developed and demonstrated in simulations. Early results suggest that with expected improvements in sensor technology and lunar mapping, mission objectives are achievable.
A cloud masking algorithm for EARLINET lidar systems
NASA Astrophysics Data System (ADS)
Binietoglou, Ioannis; Baars, Holger; D'Amico, Giuseppe; Nicolae, Doina
2015-04-01
Cloud masking is an important first step in any aerosol lidar processing chain, as most data processing algorithms can only be applied to cloud-free observations. Up to now, the selection of a cloud-free time interval for data processing has typically been performed manually, and this is one of the outstanding problems for automatic processing of lidar data in networks such as EARLINET. In this contribution we present initial developments of a cloud masking algorithm that permits selection of the appropriate time intervals for lidar data processing based on uncalibrated lidar signals. The algorithm is based on a signal normalization procedure using the range of observed values of lidar returns, designed to work with different lidar systems with minimal user input. This normalization procedure can be applied to measurement periods of only a few hours, even if no suitable cloud-free interval exists, and thus can be used even when only a short period of lidar measurements is available. Clouds are detected based on a combination of criteria, including the magnitude of the normalized lidar signal and time-space edge detection performed using the Sobel operator. In this way the algorithm avoids misclassifying strong aerosol layers as clouds. Cloud detection is performed using the highest available time and vertical resolution of the lidar signals, allowing effective detection of low-level clouds (e.g. cumulus humilis). Special attention is given to suppressing false cloud detections due to signal noise, which can affect the algorithm's performance, especially during daytime. In this contribution we present the details of the algorithm, the effect of lidar characteristics (space-time resolution, available wavelengths, signal-to-noise ratio) on detection performance, and the current strengths and limitations of the algorithm, using lidar scenes from different lidar systems at different locations across Europe.
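The edge-detection ingredient can be sketched with standard tools: apply the Sobel operator along the time and range axes of a normalized time-height array and threshold the gradient magnitude. The threshold value is illustrative, and this is only one of the combined criteria the algorithm uses.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_mask(quicklook, thresh=0.5):
    """quicklook: 2-D array (time x range), normalized to [0, 1]."""
    gt = sobel(quicklook, axis=0)        # gradient along time
    gr = sobel(quicklook, axis=1)        # gradient along range
    return np.hypot(gt, gr) > thresh     # strong edges: cloud candidates
```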
Fry, Jillian P; Laestadius, Linnea I; Grechis, Clare; Nachman, Keeve E; Neff, Roni A
2014-01-01
Industrial food animal production (IFAP) operations adversely impact environmental public health through air, water, and soil contamination. We sought to determine how state permitting and agriculture agencies respond to these public health concerns. We conducted semi-structured qualitative interviews with staff at 12 state agencies in seven states, which were chosen based on high numbers or rapid increase of IFAP operations. The interviews served to gather information regarding agency involvement in regulating IFAP operations, the frequency and type of contacts received about public health concerns, how the agency responds to such contacts, and barriers to additional involvement. Permitting and agriculture agencies' responses to health-based IFAP concerns are constrained by significant barriers including narrow regulations, a lack of public health expertise within the agencies, and limited resources. State agencies with jurisdiction over IFAP operations are unable to adequately address relevant public health concerns due to multiple factors. Combining these results with previously published findings on barriers facing local and state health departments in the same states reveals significant gaps between these agencies regarding public health and IFAP. There is a clear need for regulations to protect public health and for public health professionals to provide complementary expertise to agencies responsible for regulating IFAP operations.
Han, Zhaoying; Thornton-Wells, Tricia A.; Dykens, Elisabeth M.; Gore, John C.; Dawant, Benoit M.
2014-01-01
Deformation Based Morphometry (DBM) is a widely used method for characterizing anatomical differences across groups. DBM is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a DBM atlas. Although several studies have compared non-rigid registration algorithms for segmentation tasks, few studies have compared the effect of the registration algorithms on group differences that may be uncovered through DBM. In this study, we compared group atlas creation and DBM results obtained with five well-established non-rigid registration algorithms using thirteen subjects with Williams Syndrome (WS) and thirteen Normal Control (NC) subjects. The five non-rigid registration algorithms include: (1) The Adaptive Bases Algorithm (ABA); (2) The Image Registration Toolkit (IRTK); (3) The FSL Nonlinear Image Registration Tool (FSL); (4) The Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. Results indicate that the choice of algorithm has little effect on the creation of group atlases. However, regions of differences between groups detected with DBM vary from algorithm to algorithm both qualitatively and quantitatively. The unique nature of the data set used in this study also permits comparison of visible anatomical differences between the groups and regions of difference detected by each algorithm. Results show that the interpretation of DBM results is difficult. Four out of the five algorithms we have evaluated detect bilateral differences between the two groups in the insular cortex, the basal ganglia, orbitofrontal cortex, as well as in the cerebellum. These correspond to differences that have been reported in the literature and that are visible in our samples. But our results also show that some algorithms detect regions that are not detected by the others and that the extent of the detected regions varies from algorithm to algorithm. These results suggest that using more than one algorithm when performing DBM studies would increase confidence in the results. Properties of the algorithms such as the similarity measure they maximize and the regularity of the deformation fields, as well as the location of differences detected with DBM, also need to be taken into account in the interpretation process. PMID:22459439
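As background on the mechanics of DBM, a common summary of a deformation field is the voxelwise log-Jacobian determinant, which measures local volume expansion or contraction. The sketch below assumes a dense displacement field on a regular voxel grid; it illustrates the general technique, not the pipeline of any of the five algorithms compared.

```python
# Hedged illustration: log |J| of the mapping x -> x + u(x), assuming a
# well-behaved (positive-determinant) displacement field `disp` of shape
# (X, Y, Z, 3) in voxel units.
import numpy as np

def log_jacobian_det(disp):
    grads = [np.gradient(disp[..., i]) for i in range(3)]  # du_i/dx_j
    J = np.empty(disp.shape[:3] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j]
    J += np.eye(3)               # Jacobian of identity plus displacement
    return np.log(np.linalg.det(J))
```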
BrainIACS: a system for web-based medical image processing
NASA Astrophysics Data System (ADS)
Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.
2009-02-01
We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is completely implemented using freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements such as check boxes, drop down lists, input boxes, radio buttons, tab panels and group boxes are supported. Some elements, such as the input box, support input validation. Changes to the user interface such as addition and deletion of elements are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed using Java (and finally JavaScript after compilation of the front-end code), it is platform independent, with the only requirements being that a Servlet implementation be available and that the processing algorithms can execute on the server platform.
Adaptive Gaussian mixture models for pre-screening in GPR data
NASA Astrophysics Data System (ADS)
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of data during real-time operation. As a result, most GPR based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain high probability of detection, but can permit a false alarm rate which might be higher than the total system requirements. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well-suited to application for pre-screening in GPR data due to its computational efficiency, non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of an adaptive GMM-based approach for anomaly detection from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
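The following is a rough sketch of the kind of online adaptive GMM (in the Stauffer-Grimson spirit) the abstract refers to, applied per down-track sample of a single GPR channel; all parameter values and the interface are illustrative assumptions, not the Nemesis system's settings.

```python
# Hedged sketch: each sample either matches an existing background mode
# (and updates it with an online, k-means-like rule) or is flagged as an
# anomaly, replacing the weakest mode.
import numpy as np

class AdaptiveGMM:
    def __init__(self, k=3, alpha=0.02, match_sigma=2.5):
        self.alpha, self.match_sigma = alpha, match_sigma
        self.mu = np.zeros(k)
        self.var = np.ones(k)
        self.w = np.full(k, 1.0 / k)

    def update(self, x):
        """Return True if x matches no background mode (anomaly)."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        matched = d < self.match_sigma
        if not matched.any():
            i = int(np.argmin(self.w))   # replace the weakest mode
            self.mu[i], self.var[i], self.w[i] = x, 10.0, 0.05
            self.w /= self.w.sum()
            return True
        i = int(np.argmin(np.where(matched, d, np.inf)))
        self.mu[i] += self.alpha * (x - self.mu[i])
        self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
        self.w *= 1.0 - self.alpha
        self.w[i] += self.alpha
        self.w /= self.w.sum()
        return False
```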
Ciesielski, Krzysztof Chris; Udupa, Jayaram K.
2011-01-01
In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm A should have a well defined continuous counterpart MA, referred to as its model, which constitutes an asymptotic of A when image resolution goes to infinity; (2) the equality of two such models MA and MA′ establishes a theoretical (asymptotic) equivalence of their digital counterparts A and A′. Such a comparison is of full theoretical value only when, for each involved algorithm A, its model MA is proved to be an asymptotic of A. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing the digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare the segmentation algorithms in this general theoretical framework. As a first step towards this general goal, we prove here that the gradient-based thresholding model M∇ is the asymptotic for the fuzzy connectedness segmentation algorithm of Udupa and Samarasekera used with the gradient-based affinity A∇. We also argue that, in a sense, M∇ is the asymptotic for the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms. Experimental evidence of this last equivalence is also provided. PMID:21442014
Adiabatic quantum computation along quasienergies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, Atushi; Nemoto, Kae; National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda ku, Tokyo 101-8430
2010-02-15
The parametric deformations of quasienergies and eigenvectors of unitary operators are applied to the design of quantum adiabatic algorithms. The conventional, standard adiabatic quantum computation proceeds along eigenenergies of parameter-dependent Hamiltonians. By contrast, discrete adiabatic computation utilizes adiabatic passage along the quasienergies of parameter-dependent unitary operators. For example, such computation can be realized by a concatenation of parameterized quantum circuits, with an adiabatic though inevitably discrete change of the parameter. A design principle of adiabatic passage along quasienergy was recently proposed: Cheon's quasienergy and eigenspace anholonomies on unitary operators are available to realize anholonomic adiabatic algorithms [A. Tanaka and M. Miyamoto, Phys. Rev. Lett. 98, 160407 (2007)], which compose a nontrivial family of discrete adiabatic algorithms. It is straightforward to port a standard adiabatic algorithm to an anholonomic adiabatic one, except for the introduction of a parameter |v>, which is available to adjust the gaps of the quasienergies to control the running time steps. In Grover's database search problem, the costs to prepare |v> for the qualitatively different (i.e., power or exponential) running time steps are shown to be qualitatively different.
Improving the Held and Karp Approach with Constraint Programming
NASA Astrophysics Data System (ADS)
Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan
Held and Karp have proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explored two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
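For context, the sketch below computes the Held-Karp 1-tree lower bound that such filtering builds on: a minimum spanning tree over all cities except one, plus the two cheapest edges at the excluded city, under Lagrangian node penalties. It is a plain illustration, not the authors' CP implementation.

```python
# Hedged sketch of the Held-Karp 1-tree bound: MST over nodes {1..n-1}
# (Prim's algorithm) plus the two cheapest edges at node 0, with node
# penalties pi added to the edge costs.
import numpy as np

def one_tree_bound(dist, pi):
    """dist: symmetric (n, n) numpy cost matrix; pi: length-n penalties."""
    n = len(dist)
    cost = dist + pi[:, None] + pi[None, :]         # penalized edge costs
    in_tree, total = {1}, 0.0
    best = {v: cost[1, v] for v in range(2, n)}     # Prim on nodes 1..n-1
    while len(in_tree) < n - 1:
        v = min(best, key=best.get)
        total += best.pop(v)
        in_tree.add(v)
        for u in best:
            best[u] = min(best[u], cost[v, u])
    total += np.sort(cost[0, 1:])[:2].sum()         # two cheapest at node 0
    return total - 2 * pi.sum()

d = np.array([[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]], float)
print(one_tree_bound(d, np.zeros(4)))               # 18.0, a tour lower bound
```

Iteratively adjusting the penalties (subgradient ascent on pi) tightens this bound, which is what makes it useful inside branch-and-bound and, here, inside CP filtering.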
Certification Considerations for Adaptive Systems
NASA Technical Reports Server (NTRS)
Bhattacharyya, Siddhartha; Cofer, Darren; Musliner, David J.; Mueller, Joseph; Engstrom, Eric
2015-01-01
Advanced capabilities planned for the next generation of aircraft, including those that will operate within the Next Generation Air Transportation System (NextGen), will necessarily include complex new algorithms and non-traditional software elements. These aircraft will likely incorporate adaptive control algorithms that will provide enhanced safety, autonomy, and robustness during adverse conditions. Unmanned aircraft will operate alongside manned aircraft in the National Airspace (NAS), with intelligent software performing the high-level decision-making functions normally performed by human pilots. Even human-piloted aircraft will necessarily include more autonomy. However, there are serious barriers to the deployment of new capabilities, especially for those based upon software including adaptive control (AC) and artificial intelligence (AI) algorithms. Current civil aviation certification processes are based on the idea that the correct behavior of a system must be completely specified and verified prior to operation. This report by Rockwell Collins and SIFT documents our comprehensive study of the state of the art in intelligent and adaptive algorithms for the civil aviation domain, categorizing the approaches used and identifying gaps and challenges associated with certification of each approach.
Comparison of various contact algorithms for poroelastic tissues.
Galbusera, Fabio; Bashkuev, Maxim; Wilke, Hans-Joachim; Shirazi-Adl, Aboulfazl; Schmidt, Hendrik
2014-01-01
Capabilities of the commercial finite element package ABAQUS in simulating frictionless contact between two saturated porous structures were evaluated and compared with those of an open source code, FEBio. In ABAQUS, both the default contact implementation and another algorithm based on an iterative approach requiring script programming were considered. Test simulations included a patch test of two cylindrical slabs in a gapless contact and confined compression conditions; a confined compression test of a porous cylindrical slab with a spherical porous indenter; and finally two unconfined compression tests of soft tissues mimicking diarthrodial joints. The patch test showed almost identical results for all algorithms. On the contrary, the confined and unconfined compression tests demonstrated large differences related to distinct physical and boundary conditions considered in each of the three contact algorithms investigated in this study. In general, contact with non-uniform gaps between fluid-filled porous structures could be effectively simulated with either ABAQUS or FEBio. The user should be aware of the parameter definitions, assumptions and limitations in each case, and take into consideration the physics and boundary conditions of the problem of interest when searching for the most appropriate model.
Overview of the current status of genetically modified plants in Europe as compared to the USA.
Brandt, Peter
2003-07-01
Genetically modified crops have been tested in 1,726 experimental releases in the EU member states and in 7,815 experimental releases in the USA. The global commercial cultivation area of genetically modified crops is likely to reach 50 million hectares in 2001; however, the commercial production of genetically modified crops in the EU amounts to only a few thousand hectares and accounts for only some 0.03% of the world production. A significant gap exists between the more than fifty genetically modified crop species already permitted to be cultivated and placed on the market in the USA, Canada and other countries, and the five genetically modified crop species permitted for the same use in the EU member states, which are still pending inclusion in the Common Catalogue of agricultural plant species. The further development of "green gene technology" in the EU will be a matter of public acceptance and administrative legislation.
Satellite Snow-Cover Mapping: A Brief Review
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.
1995-01-01
Satellite snow mapping has been accomplished since 1966, initially using data from the reflective part of the electromagnetic spectrum, and now also employing data from the microwave part of the spectrum. Visible and near-infrared sensors can provide excellent spatial resolution from space enabling detailed snow mapping. When digital elevation models are also used, snow mapping can provide realistic measurements of snow extent even in mountainous areas. Passive-microwave satellite data permit global snow cover to be mapped on a near-daily basis and estimates of snow depth to be made, but with relatively poor spatial resolution (approximately 25 km). Dense forest cover limits both techniques and optical remote sensing is limited further by cloudcover conditions. Satellite remote sensing of snow cover with imaging radars is still in the early stages of research, but shows promise at least for mapping wet or melting snow using C-band (5.3 GHz) synthetic aperture radar (SAR) data. Algorithms are being developed to map global snow and ice cover using Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) data beginning with the launch of the first EOS platform in 1998. Digital maps will be produced that will provide daily, and maximum weekly global snow, sea ice and lake ice cover at 1-km spatial resolution. Statistics will be generated on the extent and persistence of snow or ice cover in each pixel for each weekly map, cloudcover permitting. It will also be possible to generate snow- and ice-cover maps using MODIS data at 250- and 500-m resolution, and to study and map snow and ice characteristics such as albedo. Algorithms to map global snow cover using passive-microwave data have also been under development. Passive-microwave data offer the potential for determining not only snow cover, but snow water equivalent, depth and wetness under all sky conditions. A number of algorithms have been developed to utilize passive-microwave brightness temperatures to provide information on snow cover and water equivalent. The variability of vegetative cover and of snow grain size, globally, limits the utility of a single algorithm to map global snow cover.
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, undoubtedly the most critical parameters of a hydrocarbon reservoir, can be obtained by processing nuclear magnetic resonance (NMR) logs. Unlike conventional well logs (CWLs), NMR logging is very expensive and time-consuming. Therefore, the idea of synthesizing the NMR log from CWLs holds great appeal for reservoir engineers. For this purpose, three optimization strategies are followed. Firstly, an artificial neural network (ANN) is optimized by virtue of a hybrid genetic algorithm-pattern search (GA-PS) technique; then fuzzy logic (FL) is optimized by means of GA-PS; and eventually an alternating conditional expectation (ACE) model is constructed using the concept of a committee machine to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicated that optimization of the traditional ANN and FL models using the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results compared with a singular model performing alone.
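As a simple illustration of the committee idea (the paper's combiner is ACE-based and more general), the sketch below fits a least-squares weighting of several base predictors' outputs; the linear combiner and all names are assumptions for illustration only.

```python
# Hedged sketch of a committee machine: combine the outputs of base models
# (e.g., optimized and non-optimized ANN and FL predictors) with a simple
# least-squares weighting fitted on training data.
import numpy as np

def fit_committee(member_preds, target):
    """member_preds: (n_samples, n_members) base-model outputs."""
    X = np.column_stack([member_preds, np.ones(len(member_preds))])  # bias
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w

def committee_predict(member_preds, w):
    X = np.column_stack([member_preds, np.ones(len(member_preds))])
    return X @ w
```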
Bickhart, Derek M; Rosen, Benjamin D; Koren, Sergey; Sayre, Brian L; Hastie, Alex R; Chan, Saki; Lee, Joyce; Lam, Ernest T; Liachko, Ivan; Sullivan, Shawn T; Burton, Joshua N; Huson, Heather J; Nystrom, John C; Kelley, Christy M; Hutchison, Jana L; Zhou, Yang; Sun, Jiajie; Crisà, Alessandra; Ponce de León, F Abel; Schwartz, John C; Hammond, John A; Waldbieser, Geoffrey C; Schroeder, Steven G; Liu, George E; Dunham, Maitreya J; Shendure, Jay; Sonstegard, Tad S; Phillippy, Adam M; Van Tassell, Curtis P; Smith, Timothy P L
2017-04-01
The decrease in sequencing cost and the increased sophistication of assembly algorithms for short-read platforms have resulted in a sharp increase in the number of species with genome assemblies. However, these assemblies are highly fragmented, with many gaps, ambiguities, and errors, impeding downstream applications. We demonstrate the current state of the art for de novo assembly using the domestic goat (Capra hircus), based on long reads for contig formation, short reads for consensus validation, and scaffolding by optical and chromatin interaction mapping. These combined technologies produced what is, to our knowledge, the most continuous de novo mammalian assembly to date, with chromosome-length scaffolds and only 649 gaps. Our assembly represents a ∼400-fold improvement in continuity due to properly assembled gaps, compared to the previously published C. hircus assembly, and better resolves repetitive structures longer than 1 kb, representing the largest repeat family and immune gene complex yet produced for an individual of a ruminant species.
Orbit design and optimization based on global telecommunication performance metrics
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.
2006-01-01
The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
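The three gap-time metrics can be illustrated with a small sketch; the inputs (per-site contact intervals and site areas) and function names are assumptions for the illustration, not TOAST's API.

```python
# Hedged sketch: compute coverage gaps per ground site over one planning
# horizon, then reduce them to the three metrics listed above.
import numpy as np

def gap_times(contacts, horizon):
    """contacts: (start, end) visibility intervals; returns list of gaps."""
    gaps, t = [], 0.0
    for s, e in sorted(contacts):
        if s > t:
            gaps.append(s - t)
        t = max(t, e)
    if t < horizon:
        gaps.append(horizon - t)
    return gaps

def coverage_metrics(site_contacts, site_areas, horizon):
    per_site = [gap_times(c, horizon) for c in site_contacts]
    avg = np.array([np.mean(g) if g else 0.0 for g in per_site])
    gmax = np.array([max(g) if g else 0.0 for g in per_site])
    gmin = np.array([min(g) if g else 0.0 for g in per_site])
    w = np.asarray(site_areas, float)
    w = w / w.sum()
    return {"area_weighted_avg_gap": float(w @ avg),
            "max_of_local_max_gap": float(gmax.max()),
            "max_of_local_min_gap": float(gmin.max())}
```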
High pressure structural stability of the Na-Te system
NASA Astrophysics Data System (ADS)
Wang, Youchun; Tian, Fubo; Li, Da; Duan, Defang; Xie, Hui; Liu, Bingbing; Zhou, Qiang; Cui, Tian
2018-03-01
The ab initio evolutionary algorithm is used to search for all thermodynamically stable Na-Te compounds at extreme pressure. In our calculations, several new structures are discovered at high pressure, namely, Imma Na2Te, Pmmm NaTe, Imma Na8Te2 and P4/mmm NaTe3. Like the known structures of Na2Te (Fm-3m, Pnma and P63/mmc), the Pmmm NaTe, Imma Na8Te2 and P4/mmm NaTe3 structures also show semiconducting properties, with band gaps that decrease as pressure increases. However, we find that the band gap of the Imma Na2Te structure increases with pressure. We presume that this result may be caused by the increased splitting between the Te p states and the Na s, Na p and Te d states. Furthermore, we think that the strong hybridization between the Na p state and the Te d state results in the band gap increasing with pressure.
Physics Mining of Multi-Source Data Sets
NASA Technical Reports Server (NTRS)
Helly, John; Karimabadi, Homa; Sipes, Tamara
2012-01-01
Powerful new parallel data mining algorithms can produce diagnostic and prognostic numerical models and analyses from observational data. These techniques yield higher-resolution measures than ever before of environmental parameters by fusing synoptic imagery and time-series measurements. These techniques are general and relevant to observational data, including raster, vector, and scalar, and can be applied in all Earth- and environmental science domains. Because they can be highly automated and are parallel, they scale to large spatial domains and are well suited to change and gap detection. This makes it possible to analyze spatial and temporal gaps in information, and facilitates within-mission replanning to optimize the allocation of observational resources. The basis of the innovation is the extension of a recently developed set of algorithms packaged into MineTool to multi-variate time-series data. MineTool is unique in that it automates the various steps of the data mining process, thus making it amenable to autonomous analysis of large data sets. Unlike techniques such as artificial neural nets, which yield a black-box solution, MineTool's outcome is always an analytical model in parametric form that expresses the output in terms of the input variables. This has the advantage that the derived equation can then be used to gain insight into the physical relevance and relative importance of the parameters and coefficients in the model. This is referred to as physics-mining of data. The capabilities of MineTool are extended to include both supervised and unsupervised algorithms, to handle multi-type data sets, and to run in parallel.
Thakar, Manjusha; Howard, Jason D.; Kagohara, Luciane T.; Krigsfeld, Gabriel; Ranaweera, Ruchira S.; Hughes, Robert M.; Perez, Jimena; Jones, Siân; Favorov, Alexander V.; Carey, Jacob; Stein-O'Brien, Genevieve; Gaykalova, Daria A.; Ochs, Michael F.; Chung, Christine H.
2016-01-01
Patients with oncogene driven tumors are treated with targeted therapeutics including EGFR inhibitors. Genomic data from The Cancer Genome Atlas (TCGA) demonstrates molecular alterations to EGFR, MAPK, and PI3K pathways in previously untreated tumors. Therefore, this study uses bioinformatics algorithms to delineate interactions resulting from EGFR inhibitor use in cancer cells with these genetic alterations. We modify the HaCaT keratinocyte cell line model to simulate cancer cells with constitutive activation of EGFR, HRAS, and PI3K in a controlled genetic background. We then measure gene expression after treating modified HaCaT cells with gefitinib, afatinib, and cetuximab. The CoGAPS algorithm distinguishes a gene expression signature associated with the anticipated silencing of the EGFR network. It also infers a feedback signature with EGFR gene expression itself increasing in cells that are responsive to EGFR inhibitors. This feedback signature has increased expression of several growth factor receptors regulated by the AP-2 family of transcription factors. The gene expression signatures for AP-2alpha are further correlated with sensitivity to cetuximab treatment in HNSCC cell lines and changes in EGFR expression in HNSCC tumors with low CDKN2A gene expression. In addition, the AP-2alpha gene expression signatures are also associated with inhibition of MEK, PI3K, and mTOR pathways in the Library of Integrated Network-Based Cellular Signatures (LINCS) data. These results suggest that AP-2 transcription factors are activated as feedback from EGFR network inhibition and may mediate EGFR inhibitor resistance. PMID:27650546
Genetic algorithm prediction of two-dimensional group-IV dioxides for dielectrics
NASA Astrophysics Data System (ADS)
Singh, Arunima K.; Revard, Benjamin C.; Ramanathan, Rohit; Ashton, Michael; Tavazza, Francesca; Hennig, Richard G.
2017-04-01
Two-dimensional (2D) materials present a new class of materials whose structures and properties can differ from their bulk counterparts. We perform a genetic algorithm structure search using density-functional theory to identify low-energy structures of 2D group-IV dioxides AO2 (A = Si, Ge, Sn, Pb). We find that 2D SiO2 is most stable in the experimentally determined bi-tetrahedral structure, while 2D SnO2 and PbO2 are most stable in the 1T structure. For 2D GeO2, the genetic algorithm finds a new low-energy 2D structure with monoclinic symmetry. Each system exhibits 2D structures with formation energies ranging from 26 to 151 meV/atom, below those of certain already synthesized 2D materials. The phonon spectra confirm their dynamic stability. Using the HSE06 hybrid functional, we determine that the 2D dioxides are insulators or semiconductors, with a direct band gap of 7.2 eV at Γ for 2D SiO2, and indirect band gaps of 4.8-2.7 eV for the other dioxides. To guide future applications of these 2D materials in nanoelectronic devices, we determine their band-edge alignment with graphene, phosphorene, and single-layer BN and MoS2. An assessment of the dielectric properties and electrochemical stability of the 2D group-IV dioxides shows that 2D GeO2 and SnO2 are particularly promising candidates for gate oxides, and 2D SnO2 also as a protective layer in heterostructure nanoelectronic devices.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
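A minimal sketch of the multiple-chains strategy is given below: independent Metropolis-Hastings chains run in parallel processes and their post-burn-in draws are pooled. The toy target (a standard normal) and all tuning values are illustrative assumptions.

```python
# Hedged sketch of parallel MCMC via multiple independent chains.
import numpy as np
from multiprocessing import Pool

def log_target(x):
    return -0.5 * x * x          # unnormalized standard normal

def run_chain(args):
    seed, n_iter, burn_in, step = args
    rng = np.random.default_rng(seed)
    x, draws = 0.0, []
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop                      # accept the proposal
        if i >= burn_in:
            draws.append(x)
    return np.array(draws)

if __name__ == "__main__":
    jobs = [(seed, 20000, 5000, 1.0) for seed in range(4)]
    with Pool(4) as pool:
        samples = np.concatenate(pool.map(run_chain, jobs))
    print(samples.mean(), samples.std())  # pooled posterior summaries
```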
Analysis of Automated Aircraft Conflict Resolution and Weather Avoidance
NASA Technical Reports Server (NTRS)
Love, John F.; Chan, William N.; Lee, Chu Han
2009-01-01
This paper describes an analysis of using trajectory-based automation to resolve both aircraft and weather constraints for near-term air traffic management decision making. The auto-resolution algorithm developed and tested at NASA Ames to resolve aircraft-to-aircraft conflicts has been modified to mitigate convective weather constraints. Modifications include adding information about the size of a gap between weather constraints to the routing solution. Routes that traverse gaps that are smaller than a specific size are not used. An evaluation of the performance of the modified autoresolver in resolving both aircraft and weather conflicts was performed. Integration with the Center-TRACON Traffic Management System was completed to evaluate the effect of weather routing on schedule delays.
NASA Astrophysics Data System (ADS)
Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel J.; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Sarunic, Marinko V.; Verhaegen, Michel; Jian, Yifan
2017-02-01
Optical Coherence Tomography (OCT) has revolutionized modern ophthalmology, providing depth resolved images of the retinal layers in a system that is suited to a clinical environment. A limitation of the performance and utilization of the OCT systems has been the lateral resolution. Through the combination of wavefront sensorless adaptive optics with dual variable optical elements, we present a compact lens based OCT system that is capable of imaging the photoreceptor mosaic. We utilized a commercially available variable focal length lens to correct for a wide range of defocus commonly found in patient eyes, and a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators for aberration correction to obtain near diffraction limited imaging at the retina. A parallel processing computational platform permitted real-time image acquisition and display. The Data-based Online Nonlinear Extremum seeker (DONE) algorithm was used for real time optimization of the wavefront sensorless adaptive optics OCT, and the performance was compared with a coordinate search algorithm. Cross sectional images of the retinal layers and en face images of the cone photoreceptor mosaic acquired in vivo from research volunteers before and after WSAO optimization are presented. Applying the DONE algorithm in vivo for wavefront sensorless AO-OCT demonstrates that the DONE algorithm succeeds in drastically improving the signal while achieving a computational time of 1 ms per iteration, making it applicable for high speed real time applications.
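For orientation, the coordinate-search baseline against which DONE is compared can be sketched as follows; `apply_and_measure` is a hypothetical callback that sets the adaptive-element coefficients and returns the image-quality metric, and the step schedule is an illustrative assumption.

```python
# Hedged sketch of wavefront sensorless optimization by coordinate search:
# tune one mode coefficient at a time to maximize an image metric.
import numpy as np

def coordinate_search(apply_and_measure, n_modes, steps=(0.5, 0.25, 0.1)):
    coeffs = np.zeros(n_modes)
    best = apply_and_measure(coeffs)
    for step in steps:                    # progressively finer passes
        for m in range(n_modes):
            for delta in (+step, -step):
                trial = coeffs.copy()
                trial[m] += delta
                score = apply_and_measure(trial)
                if score > best:          # keep any improving move
                    best, coeffs = score, trial
    return coeffs, best
```

Each trial here costs one measurement, which is why a model-building optimizer like DONE, needing far fewer evaluations per improvement, is attractive for real-time imaging.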
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Inverting Monotonic Nonlinearities by Entropy Maximization.
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results.
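As a point of reference for the Gaussianization idea that MaxEnt generalizes, the sketch below compensates a monotonic distortion by mapping the observation through its empirical CDF followed by the inverse Gaussian CDF; the toy mixture is an illustrative assumption, not the paper's experiments.

```python
# Hedged sketch of the Gaussianization baseline: a monotonic transform
# that makes the distorted observation approximately Gaussian, restoring
# the near-Gaussianity of the underlying sum of random variables.
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(y):
    u = rankdata(y) / (len(y) + 1.0)     # empirical CDF values in (0, 1)
    return norm.ppf(u)

# Toy example: a linear mixture passed through a cubic (monotonic) map.
rng = np.random.default_rng(0)
s = rng.laplace(size=(5000, 2))
mixed = s @ np.array([[1.0, 0.6], [0.4, 1.0]])
observed = mixed[:, 0] ** 3              # monotonic nonlinearity
compensated = gaussianize(observed)      # feed this to any linear separator
```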
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition, real-time image processing, etc. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for achieving flexible logic control of the input channel and a RS-485 transceiver for local network communication. Because a high performance-price-ratio DSP, the TMS320C6712, and a large FLASH are employed in the design, this system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially in the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is an excellent platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The result reveals that this hardware is easy to interface with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollias, Pavlos
This is a multi-institutional, collaborative project using a three-tier modeling approach to bridge field observations and global cloud-permitting models, with emphases on cloud population structural evolution through various large-scale environments. Our contribution was in data analysis for the generation of high-value cloud and precipitation products and the derivation of cloud statistics for model validation. We contributed in two areas of data analysis: the development of a synergistic cloud and precipitation classification that identifies different cloud types (e.g. shallow cumulus, cirrus) and precipitation types (shallow, deep, convective, stratiform) using profiling ARM observations, and the development of a quantitative precipitation rate retrieval algorithm using profiling ARM observations. Similar efforts have been developed in the past for precipitation (weather radars), but not for the millimeter-wavelength (cloud) radar deployed at the ARM sites.
Automatic identification of cochlear implant electrode arrays for post-operative assessment
NASA Astrophysics Data System (ADS)
Noble, Jack H.; Schuman, Theodore A.; Wright, Charles G.; Labadie, Robert F.; Dawant, Benoit M.
2011-03-01
Cochlear implantation is a procedure performed to treat profound hearing loss. Accurately determining the postoperative position of the implant in vivo would permit studying the correlations between implant position and hearing restoration. To solve this problem, we present an approach based on parametric Gradient Vector Flow snakes to segment the electrode array in post-operative CT. By combining this with existing methods for localizing intra-cochlear anatomy, we have developed a system that permits accurate assessment of the implant position in vivo. The system is validated using a set of seven temporal bone specimens. The algorithms were run on pre- and post-operative CTs of the specimens, and the results were compared to histological images. It was found that the position of the arrays observed in the histological images is in excellent agreement with the position of their automatically generated 3D reconstructions in the CT scans.
Partitioning problems in parallel, pipelined and distributed computing
NASA Technical Reports Server (NTRS)
Bokhari, S.
1985-01-01
The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
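To give the flavor of such partitioning problems, the sketch below solves a simplified variant by dynamic programming: partition a chain of modules into contiguous blocks, one per processor in a chain, minimizing the bottleneck load. It omits the communication costs that the paper's Sum-Bottleneck path algorithm handles.

```python
# Hedged sketch: bottleneck-optimal contiguous partition of a module chain.
def chain_partition(costs, p):
    m = len(costs)
    prefix = [0]
    for c in costs:
        prefix.append(prefix[-1] + c)
    def block(i, j):                      # load of modules i..j-1
        return prefix[j] - prefix[i]
    INF = float("inf")
    # dp[k][j]: best bottleneck placing the first j modules on k processors
    dp = [[INF] * (m + 1) for _ in range(p + 1)]
    dp[0][0] = 0
    for k in range(1, p + 1):
        for j in range(1, m + 1):
            dp[k][j] = min(max(dp[k - 1][i], block(i, j)) for i in range(j))
    return dp[p][m]

print(chain_partition([4, 1, 3, 2, 6, 2, 1], 3))   # 8: [4,1,3 | 2,6 | 2,1]
```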
The NASA computer aided design and test system
NASA Technical Reports Server (NTRS)
Gould, J. M.; Juergensen, K.
1973-01-01
A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described stressing the interaction of programs rather than detail of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.
Karen Schleeweis; Samuel N. Goward; Chengquan Huang; John L. Dwyer; Jennifer L. Dungan; Mary A. Lindsey; Andrew Michaelis; Khaldoun Rishmawi; Jeffery G. Masek
2016-01-01
Using the NASA Earth Exchange platform, the North American Forest Dynamics (NAFD) project mapped forest history wall-to-wall, annually for the contiguous US (1986-2010) using the Vegetation Change Tracker algorithm. As with any effort to identify real changes in remotely sensed time-series, data gaps, shifts in seasonality, misregistration, inconsistent radiometry and...
ERIC Educational Resources Information Center
Underwood, Sonia Miller
2011-01-01
The heart of learning chemistry is the ability to connect a compound's structure to its function; Lewis structures provide an essential link in this process. In many cases, their construction is taught using an algorithmic approach, containing a set of step-by-step rules. We believe that this approach is in direct conflict with the precepts of…
Mapping ionospheric observations using combined techniques for Europe region
NASA Astrophysics Data System (ADS)
Tomasik, Lukasz; Gulyaeva, Tamara; Stanislawska, Iwona; Swiatek, Anna; Pozoga, Mariusz; Dziak-Jankowska, Beata
A k-nearest-neighbours (KNN) algorithm for filling gaps in the missing F2-layer critical frequency is proposed and applied. This method uses TEC data calculated from the EGNOS Vertical Delay Estimate (VDE ≈ 0.78 TECU) and several GNSS stations, together with its spatial correlation with data from selected ionosondes. For mapping purposes, a two-dimensional similarity function is used in the KNN method.
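A minimal sketch of KNN gap filling in this spirit is given below; the station series, the similarity weights, and the parameter choices are illustrative assumptions, not the operational method.

```python
# Hedged sketch: fill gaps in a target foF2 series from the k most similar
# neighbouring series, weighted by a (precomputed) spatial similarity.
import numpy as np

def knn_fill(target, neighbors, similarity, k=3):
    """target: 1-D series with np.nan gaps; neighbors: (n_sta, T) series;
    similarity: length-n_sta weights from a 2-D similarity function."""
    filled = target.copy()
    for t in np.flatnonzero(np.isnan(target)):
        avail = np.flatnonzero(~np.isnan(neighbors[:, t]))
        if avail.size == 0:
            continue                      # leave the gap if nothing is seen
        top = avail[np.argsort(similarity[avail])[::-1][:k]]
        filled[t] = np.average(neighbors[top, t], weights=similarity[top])
    return filled
```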
NASA Technical Reports Server (NTRS)
Sandage, Allan
1988-01-01
The galactic disk is a dissipative structure and must, therefore, be younger than the halo if galaxy formation generally proceeds by collapse. Just how much younger the oldest stars in the galactic disk are than the oldest halo stars remains an open question. A fast collapse (on a time scale no longer than the rotation period of the extended protogalaxy) permits an age gap of the order of approximately 10 to the 9th power years. A slow collapse, governed by the cooling rate of the partially pressure supported falling gas that formed into what is now the thick stellar disk, permits a longer age gap, claimed by some to be as long as 6 Gyr. Early methods of age dating the oldest components of the disk contain implicit assumptions concerning the details of the age-metallicity relation for stars in the solar neighborhood. The discovery that this relation for open clusters outside the solar circle is different from that in the solar neighborhood (Geisler 1987) complicates the earlier arguments. The oldest stars in the galactic disk are at least as old as NGC 188. The new data by Janes on NGC 6791, shown first at this conference, suggest a disk age of at least 12.5 Gyr, as do data near the main sequence termination point of metal rich, high proper motion stars of low orbital eccentricity. Hence, a case can still be made that the oldest part of the galactic thick disk is similar in age to the halo globular clusters, if their ages are the same as 47 Tuc.
Repair of Chronic Tibialis Anterior Tendon Rupture With a Major Defect Using Gracilis Allograft.
Burton, Alex; Aydogan, Umur
2016-08-01
Tibialis anterior tendon (TAT) rupture is an uncommon injury; however, it can cause substantial deficits. Diagnosis is often delayed due to a lack of initial symptoms, yet loss of function over time typically causes the patient to present for treatment. This delay usually results in major defects, creating a great technical challenge for the operating surgeon. We present a novel technique and operative algorithm for the management of chronic TAT ruptures with a major gap after a delayed diagnosis, not otherwise correctable with currently described techniques in the literature. This technique has been performed in 4 cases without any complications, with fairly successful functional outcomes. For the reconstruction of chronic TAT ruptures with an average delay of nine weeks after initial injury and a gap of greater than 10 cm, a thorough operative algorithm was implemented in 4 patients using a double bundle gracilis allograft. Patients were then kept nonweightbearing for 6 weeks, followed by weightbearing as tolerated. They began physical therapy with a focus on ankle exercises and gradual return to normal activity at 8 weeks, with resistance training exercises allowed at 12 weeks. At a mean follow-up time of 24.5 months, all patients reported significant pain relief with a normal gait pattern. There were no reported intra- or postoperative complications. The average Foot and Ankle Ability Measure score increased to 90 from 27.5 in the postoperative period. All patients were able to return to their previous activity levels. Gracilis allograft reconstruction as used in this study is a viable and reproducible alternative to primary repair, with favorable postoperative results, without using complex tendon transfer techniques or autografts necessitating the functional sacrifice of a transferred or excised tendon. To the best of our knowledge, this is the first study demonstrating a successful technique and operative algorithm for gracilis allograft reconstruction of the TAT with a substantial defect of greater than 10 cm with favorable results. Level IV: Operative algorithm with case series.
Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D
NASA Astrophysics Data System (ADS)
Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas
2017-11-01
One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^O(log n) algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is Õ(nM(n)), where M(n) is the time required to multiply two n × n matrices.
Near optimum digital phase locked loops.
NASA Technical Reports Server (NTRS)
Polk, D. R.; Gupta, S. C.
1972-01-01
Near optimum digital phase locked loops are derived utilizing nonlinear estimation theory. Nonlinear approximations are employed to yield realizable loop structures. Baseband equivalent loop gains are derived which under high signal to noise ratio conditions may be calculated off-line. Additional simplifications are made which permit the application of the Kalman filter algorithms to determine the optimum loop filter. Performance is evaluated by a theoretical analysis and by simulation. Theoretical and simulated results are discussed and a comparison to analog results is made.
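A toy simulation of a second-order digital PLL with fixed, Kalman-like loop gains illustrates the loop structure the abstract refers to; the signal model and the gain values are illustrative assumptions, not the near-optimum gains derived in the paper.

```python
# Hedged sketch: second-order DPLL (proportional + integral loop filter)
# tracking a noisy sinusoid with an unknown frequency offset.
import numpy as np

rng = np.random.default_rng(1)
n, amp = 4000, 4.0
f0, df = 0.01, 0.001                     # nominal freq and unknown offset
theta = 2 * np.pi * (f0 + df) * np.arange(n)
y = amp * np.sin(theta) + rng.standard_normal(n)

phi, df_est = 0.0, 0.0
k1, k2 = 0.1, 0.005                      # fixed loop gains (illustrative)
for t in range(n):
    err = y[t] * np.cos(phi) / amp       # ~0.5*sin(theta - phi) + 2f term
    df_est += k2 * err                   # integrator tracks frequency offset
    phi += 2 * np.pi * (f0 + df_est) + k1 * err   # NCO phase update
print("estimated frequency offset:", df_est, "true:", df)
```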
Retention time alignment of LC/MS data by a divide-and-conquer algorithm.
Zhang, Zhongqi
2012-04-01
Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
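The divide-and-conquer recursion can be sketched as follows on lists of feature retention times; the matching tolerance, the grid of candidate shifts, and the stopping rule are illustrative simplifications of the published algorithm.

```python
# Hedged sketch: find the best single shift for the whole chromatogram,
# apply it, split at the midpoint, and recurse on each half with a
# tighter shift range until segments are short enough.
import numpy as np

def best_shift(sample, reference, shifts, tol=0.2):
    def score(s):
        d = np.abs((sample[:, None] + s) - reference[None, :])
        return np.sum(d.min(axis=1) < tol)      # features matched within tol
    return max(shifts, key=score)

def align(sample, reference, max_shift=2.0, min_span=1.0):
    sample = np.sort(np.asarray(sample, dtype=float))
    if sample.size == 0:
        return sample
    shifts = np.linspace(-max_shift, max_shift, 81)
    shifted = sample + best_shift(sample, reference, shifts)
    span = shifted[-1] - shifted[0]
    if sample.size < 2 or span < min_span:
        return shifted
    mid = shifted[0] + span / 2                 # divide the chromatogram
    left, right = shifted[shifted <= mid], shifted[shifted > mid]
    return np.concatenate([align(left, reference, max_shift / 2, min_span),
                           align(right, reference, max_shift / 2, min_span)])
```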
A novel community detection method in bipartite networks
NASA Astrophysics Data System (ADS)
Zhou, Cangqi; Feng, Liang; Zhao, Qianchuan
2018-02-01
Community structure is a common and important feature in many complex networks, including bipartite networks, which are used as a standard model for many empirical networks comprised of two types of nodes. In this paper, we propose a two-stage method for detecting community structure in bipartite networks. Firstly, we extend the widely used Louvain algorithm to bipartite networks. The effectiveness and efficiency of the Louvain algorithm have been proved by many applications; however, a Louvain-like algorithm specially modified for bipartite networks has been lacking. Based on bipartite modularity, a measure that extends unipartite modularity and quantifies the strength of partitions in bipartite networks, we fill the gap by developing the Bi-Louvain algorithm, which iteratively groups the nodes in each part by turns. In bipartite networks this algorithm often produces a balanced network structure with equal numbers of the two types of nodes. Secondly, for the balanced network yielded by the first algorithm, we use an agglomerative clustering method to further cluster the network. We demonstrate that the calculation of the gain of modularity of each aggregation, and the operation of joining two communities, can be compactly expressed as matrix operations over all pairs of communities simultaneously. Finally, a complete hierarchical community structure is unfolded. We apply our method to two benchmark data sets and a large-scale data set from an e-commerce company, showing that it effectively identifies community structure in bipartite networks.
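For reference, a standard bipartite modularity of the kind the Bi-Louvain stage optimizes (Barber's Q_B is one common choice) can be computed directly from the biadjacency matrix; the tiny example and all names below are illustrative.

```python
# Hedged sketch of bipartite modularity:
# Q_B = (1/m) * sum over red i, blue j in the same community of
#       (A_ij - k_i * d_j / m), with m the total number of edges.
import numpy as np

def bipartite_modularity(A, red_comm, blue_comm):
    """A: (n_red, n_blue) biadjacency matrix; *_comm: community labels."""
    m = A.sum()
    k = A.sum(axis=1)                    # red (row) degrees
    d = A.sum(axis=0)                    # blue (column) degrees
    same = red_comm[:, None] == blue_comm[None, :]
    return float(((A - np.outer(k, d) / m) * same).sum() / m)

# Tiny example: two obvious communities in a 4x4 biadjacency matrix.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
labels = np.array([0, 0, 1, 1])
print(bipartite_modularity(A, labels, labels))   # 0.5 for this partition
```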
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.