A Centered Projective Algorithm for Linear Programming
1988-02-01
Karmarkar's algorithm iterates this procedure. An alternative method, the so-called affine variant, was first proposed by Dikin [6] in 1967. "...trajectories, II: Legendre transform coordinates and central trajectories," manuscripts, to appear in Transactions of the American Mathematical Society. [6] I.I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967), 674-675. [7] I.I. Dikin, "On the speed of an..."
Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)
NASA Technical Reports Server (NTRS)
Niewoehner, Kevin R.; Carter, John (Technical Monitor)
2001-01-01
The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.
Plasmid mapping computer program.
Nolan, G P; Maina, C V; Szalay, A A
1984-01-01
Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA which has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus Microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adducts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
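For readers who want the flavor of the mapping problem, the following is a minimal, brute-force Python sketch of the permutation idea described above: it enumerates circular orderings of each enzyme's fragments and keeps those whose implied double-digest fragment multiset matches. The fragment sizes in the example are hypothetical, and the paper's Rule-Oriented algorithm prunes this search rather than enumerating exhaustively.

```python
from itertools import permutations
from collections import Counter

def cut_sites(order, offset, length):
    """Cut positions on a circle of the given length implied by laying the
    fragments of one enzyme end-to-end, starting at `offset`."""
    sites, pos = [], offset
    for frag in order:
        sites.append(pos % length)
        pos += frag
    return sites

def double_digest(sites_a, sites_b, length):
    """Fragment-length multiset produced when both enzymes cut one circle."""
    sites = sorted(set(sites_a) | set(sites_b))
    return Counter((b - a) % length or length
                   for a, b in zip(sites, sites[1:] + sites[:1]))

def circular_maps(frags_a, frags_b, frags_ab):
    """Brute-force search over circular orderings consistent with the
    double digest (the Rule-Oriented algorithm prunes this search)."""
    length = sum(frags_a)
    assert length == sum(frags_b) == sum(frags_ab)
    target = Counter(frags_ab)
    first, rest = frags_a[0], list(frags_a[1:])
    solutions = []
    for order_a in set(permutations(rest)):       # fixing frags_a[0] first
        sites_a = cut_sites((first,) + order_a, 0, length)  # removes rotations
        for order_b in set(permutations(frags_b)):
            for offset in range(length):          # integer cut positions
                sites_b = cut_sites(order_b, offset, length)
                if double_digest(sites_a, sites_b, length) == target:
                    solutions.append(((first,) + order_a, order_b, offset))
    return solutions

# Toy example with hypothetical fragment sizes (arbitrary units):
print(circular_maps([6, 4], [7, 3], [4, 3, 2, 1])[:2])
```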
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, C W; Lenderman, J S; Gansemer, J D
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes, and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
Overview of implementation of DARPA GPU program in SAIC
NASA Astrophysics Data System (ADS)
Braunreiter, Dennis; Furtek, Jeremy; Chen, Hai-Wen; Healy, Dennis
2008-04-01
This paper reviews the implementation of the DARPA MTO STAP-BOY program for both Phase I and Phase II, conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems. The first part of our presentation on the DARPA STAP-BOY program focuses on GPU implementation and algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm is implemented on the GPU using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA) and traditional graphics APIs. The algorithm includes fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.
Algorithmic support for graphic images rotation in avionics
NASA Astrophysics Data System (ADS)
Kniga, E. V.; Gurjanov, A. V.; Shukalov, A. V.; Zharinov, I. O.
2018-05-01
The design of avionics devices poses the practical problem of developing and evaluating algorithms to rotate the images shown on the on-board display. Image rotation algorithms are part of the software of avionics devices, which in turn are parts of the on-board computers of airplanes and helicopters. The images to be rotated contain flight location map fragments. Image rotation in the display system can be implemented either in software or in hardware; the software option is slower than the hardware one. A comparison of several rotation algorithms on test images is shown, each realized in hardware with the Altera Quartus II design environment.
Nonlinear 0-1 Programming: II. Dominance Relations and Algorithms. Revision.
1983-02-01
Image compression evaluation for digital cinema: the case of Star Wars: Episode II
NASA Astrophysics Data System (ADS)
Schnuelle, David L.
2003-05-01
A program of evaluation of compression algorithms proposed for use in a digital cinema application is described and the results presented in general form. The work was intended to aid in the selection of a compression system to be used for the digital cinema release of Star Wars: Episode II in May 2002. An additional goal was to provide feedback to the algorithm proponents on what parameters and performance levels the feature film industry is looking for in digital cinema compression. The primary conclusion of the test program is that any of the systems offered by the current digital cinema compression proponents will work for digital cinema distribution to today's theaters.
Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.
Chami, Malik; Robilliard, Denis
2002-10-20
A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordre Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function that makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than for traditional techniques such as band ratio algorithms. The application of GP to real satellite data [a Sea-viewing Wide Field-of-view Sensor (SeaWiFS)] was also carried out for case I waters as a validation. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the retrieval error of GP is less than 33%, which remains satisfactory, at the present time, for remote-sensing purposes.
Experiences in using the CYBER 203 for three-dimensional transonic flow calculations
NASA Technical Reports Server (NTRS)
Melson, N. D.; Keller, J. D.
1982-01-01
In this paper, the authors report on some of their experiences modifying two three-dimensional transonic flow programs (FLO22 and FLO27) for use on the NASA Langley Research Center CYBER 203. Both of the programs discussed were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine, including: (1) leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, (2) vectorizing parts of the existing algorithm in the program, and (3) incorporating a new vectorizable algorithm (ZEBRA I or ZEBRA II) in the program.
Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations
NASA Technical Reports Server (NTRS)
Melson, N. D.; Keller, J. D.
1983-01-01
Experiences are discussed for modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, J T
1975-06-01
A summary of work during the past several years on SETL, a new programming language drawing its dictions and basic concepts from the mathematical theory of sets, is presented. The work was started with the idea that a programming language modeled after an appropriate version of the formal language of mathematics might allow a programming style with some of the succinctness of mathematics, and that this might ultimately enable one to express and experiment with more complex algorithms than are now within reach. Part I discusses the general approach followed in the work. Part II focuses directly on the details of the SETL language as it is now defined. It describes the facilities of SETL, includes short libraries of miscellaneous and of code optimization algorithms illustrating the use of SETL, and gives a detailed description of the manner in which the set-theoretic primitives provided by SETL are currently implemented.
Multi-objective optimal design of sandwich panels using a genetic algorithm
NASA Astrophysics Data System (ADS)
Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow
2017-10-01
In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.
NASA Astrophysics Data System (ADS)
Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan
2017-05-01
In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods-support vector regression (SVR) and Kriging (KRG)-were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.
A spline-based approach for computing spatial impulse responses.
Ellis, Michael A; Guenther, Drake; Walker, William F
2007-05-01
Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.
NAVSIM 2: A computer program for simulating aided-inertial navigation for aircraft
NASA Technical Reports Server (NTRS)
Bjorkman, William S.
1987-01-01
NAVSIM II, a computer program for analytical simulation of aided-inertial navigation for aircraft, is described. The description is supported by a discussion of the program's application to the design and analysis of aided-inertial navigation systems as well as instructions for utilizing the program and for modifying it to accommodate new models, constraints, algorithms and scenarios. NAVSIM II simulates an airborne inertial navigation system built around a strapped-down inertial measurement unit and aided in its function by GPS, Doppler radar, altimeter, airspeed, and position-fix measurements. The measurements are incorporated into the navigation estimate via a UD-form Kalman filter. The simulation was designed and implemented using structured programming techniques and with particular attention to user-friendly operation.
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; (vi) presenting an initial result on an onboard table-lookup method to obtain almost fuel-optimal solutions in real time.
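As an illustration of the SOCP formulation described above, here is a minimal CVXPY sketch of a discretized minimum-fuel descent with a convex (second-order cone) thrust bound. All numbers (time step, mass, thrust limit, boundary states) are made-up placeholders, the mass is held constant, and the no-subsurface constraint is enforced only at the sample times; the flight algorithm additionally handles mass depletion, the nonconvex lower thrust bound via a change of variables, and the inter-sample constraints of item (iv).

```python
import numpy as np
import cvxpy as cp

N, dt, m = 60, 1.0, 1900.0                  # steps, s, kg (assumed values)
g = np.array([0.0, 0.0, -3.71])             # Mars gravity, m/s^2
Tmax = 13000.0                              # upper thrust bound, N (assumed)

r = cp.Variable((N + 1, 3))                 # position, m
v = cp.Variable((N + 1, 3))                 # velocity, m/s
T = cp.Variable((N, 3))                     # thrust vector, N
s = cp.Variable(N)                          # slack bounding ||T_k||

cons = [r[0] == np.array([1500.0, 500.0, 2000.0]),
        v[0] == np.array([-30.0, 10.0, -75.0]),
        r[N] == np.zeros(3), v[N] == np.zeros(3)]
for k in range(N):
    a = T[k] / m + g                        # constant-mass dynamics (simplified)
    cons += [v[k + 1] == v[k] + dt * a,
             r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * a,
             cp.norm(T[k]) <= s[k],         # second-order cone constraint
             s[k] <= Tmax,
             r[k, 2] >= 0]                  # altitude, at sample times only

prob = cp.Problem(cp.Minimize(dt * cp.sum(s)), cons)  # fuel ~ integral of ||T||
prob.solve()
print(prob.status, prob.value)
```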
40 CFR 86.1809-12 - Prohibition of defeat devices.
Code of Federal Regulations, 2014 CFR
2014-07-01
... manufacturer must provide an explanation containing detailed information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies..., with the Part II certification application, an engineering evaluation demonstrating to the satisfaction...
A microcomputer program for analysis of nucleic acid hybridization data
Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.
1982-01-01
The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the 'Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
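A minimal Python sketch of a 'Patternsearch'-style direct search is given below, fitting the ideal second-order hybridization curve f = kC0t/(1 + kC0t) to made-up data by least squares. This is a simplified coordinate-probing variant, not necessarily the exact algorithm of the BASIC program.

```python
import numpy as np

def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=2000):
    """Direct search: probe each coordinate in both directions; shrink the
    step when no probe improves the objective."""
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# Hypothetical hybridization data: fraction hybridized vs. C0t, fitted to
# f = k*C0t / (1 + k*C0t) (ideal second-order reassociation kinetics).
c0t = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
obs = np.array([0.02, 0.15, 0.62, 0.91, 0.99])
sse = lambda p: np.sum((obs - c0t * p[0] / (1 + c0t * p[0]))**2)
k, err = pattern_search(sse, [1.0])
print(k, err)
```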
Kudo, Kohsuke; Uwano, Ikuko; Hirai, Toshinori; Murakami, Ryuji; Nakamura, Hideo; Fujima, Noriyuki; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Sasaki, Makoto
2017-04-10
The purpose of the present study was to compare different software algorithms for processing DSC perfusion images of cerebral tumors with respect to (i) the relative CBV (rCBV) calculated, (ii) the cutoff value for discriminating low- and high-grade gliomas, and (iii) the diagnostic performance for differentiating these tumors. Following institutional review board approval, informed consent was obtained from all patients. Thirty-five patients with primary glioma (grade II, 9; grade III, 8; and grade IV, 18 patients) were included. DSC perfusion imaging was performed with a 3-Tesla MRI scanner. CBV maps were generated by using 11 different algorithms from four commercially available software packages and one academic program. The rCBV of each tumor compared to normal white matter was calculated by ROI measurements. Differences in rCBV value were compared between algorithms for each tumor grade. Receiver operating characteristic (ROC) analysis was conducted to evaluate the diagnostic performance of the different algorithms for differentiating between grades. Several algorithms showed significant differences in rCBV, especially for grade IV tumors. When differentiating between low- (II) and high-grade (III/IV) tumors, the area under the ROC curve (Az) was similar (range 0.85-0.87), and there were no significant differences in Az between any pair of algorithms. In contrast, the optimal cutoff values varied between algorithms (range 4.18-6.53). rCBV values of tumors and cutoff values for discriminating low- and high-grade gliomas differed between software packages, suggesting that optimal software-specific cutoff values should be used for diagnosis of high-grade gliomas.
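The cutoff selection described above can be made concrete with a small Python sketch: it computes the empirical area under the ROC curve (Az) via the rank statistic and picks the cutoff maximizing Youden's J. The rCBV values below are hypothetical, and the paper does not state that Youden's J was the criterion actually used.

```python
import numpy as np

def roc_auc_and_cutoff(values, is_high_grade):
    """Empirical ROC: Az via the rank (Mann-Whitney) statistic and the
    cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    v = np.asarray(values, float)
    y = np.asarray(is_high_grade, bool)
    pos, neg = v[y], v[~y]
    # Az = P(random high-grade rCBV > random low-grade rCBV), ties count half.
    az = np.mean([p > n for p in pos for n in neg]) + \
         0.5 * np.mean([p == n for p in pos for n in neg])
    best_j, best_cut = -1.0, None
    for cut in np.unique(v):
        sens = np.mean(pos >= cut)          # high-grade called positive
        spec = np.mean(neg < cut)           # low-grade called negative
        if sens + spec - 1 > best_j:
            best_j, best_cut = sens + spec - 1, cut
    return az, best_cut

# Hypothetical rCBV values: low-grade (II) vs high-grade (III/IV) tumors.
rcbv = [2.1, 3.0, 3.8, 4.4, 5.0, 5.9, 6.5, 7.2, 8.8]
high = [0,   0,   0,   1,   0,   1,   1,   1,   1]
print(roc_auc_and_cutoff(rcbv, high))
```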
40 CFR 86.1809-10 - Prohibition of defeat devices.
Code of Federal Regulations, 2012 CFR
2012-07-01
... detailed information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies incorporated for operation both during and... HLDT/MDPVs the manufacturer must submit, with the Part II certification application, an engineering...
40 CFR 86.1809-12 - Prohibition of defeat devices.
Code of Federal Regulations, 2012 CFR
2012-07-01
... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and... manufacturer must submit, with the Part II certification application, an engineering evaluation demonstrating... vehicles, the engineering evaluation must also include particulate emissions. [75 FR 25685, May 7, 2010] ...
40 CFR 86.1809-10 - Prohibition of defeat devices.
Code of Federal Regulations, 2014 CFR
2014-07-01
... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and..., with the Part II certification application, an engineering evaluation demonstrating to the satisfaction... not occur in the temperature range of 20 to 86 °F. For diesel vehicles, the engineering evaluation...
1988-12-01
V. Conclusion and Discussion. Appendix A: NPGS Router User Guide. Appendix B: C Program. ...illustrates the problem and shows some of the terminology, previously mentioned, that is peculiar to VLSI routing.
NASA Astrophysics Data System (ADS)
Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel
2005-12-01
SKiPPER is a SKeleton-based Parallel Programming EnviRonment developed since 1996 at the LASMEA Laboratory, Blaise-Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, in which we have pointed out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities for the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is a 3D face-tracking algorithm from appearance.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms while MOEA/D and DE perform unexpectedly poorly.
An efficient non-dominated sorting method for evolutionary algorithms.
Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F
2008-01-01
We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN^2) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of the total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
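For reference, a short Python implementation of the O(MN^2) fast non-dominated sort of NSGA-II (the baseline the paper improves on) is sketched below; the paper's dominance-tree and divide-and-conquer method removes the redundant comparisons this version makes. The toy population is hypothetical.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(pop):
    """Deb's O(M N^2) front construction used in NSGA-II."""
    n_dom = [0] * len(pop)               # how many solutions dominate i
    dominated = [[] for _ in pop]        # solutions that i dominates
    fronts = [[]]
    for i, a in enumerate(pop):
        for j, b in enumerate(pop):
            if dominates(a, b):
                dominated[i].append(j)
            elif dominates(b, a):
                n_dom[i] += 1
        if n_dom[i] == 0:
            fronts[0].append(i)          # non-dominated: first front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:             # peel the current front off
            for j in dominated[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Two-objective toy population (both objectives minimized):
pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(fast_non_dominated_sort(pop))      # [[0, 1, 2], [3], [4]]
```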
An efficient hybrid approach for multiobjective optimization of water distribution systems
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2014-05-01
An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
Can Linear Superiorization Be Useful for Linear Optimization Problems?
Censor, Yair
2017-01-01
Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
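A minimal sketch of the linear superiorization scheme, under simplifying assumptions: feasibility-seeking is done by cyclic orthogonal projections onto the half-spaces of Ax <= b, and the superiorization steps move along -c with summable step sizes beta0*gamma^k. The toy LP and all parameters are illustrative, not the paper's experimental setup.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def linsup(A, b, c, x0, sweeps=200, beta0=1.0, gamma=0.95):
    """Between feasibility-seeking sweeps of half-space projections, take
    perturbation-resilient steps along -c with summable step sizes."""
    x = np.asarray(x0, float)
    for k in range(sweeps):
        x = x - beta0 * gamma**k * c / np.linalg.norm(c)   # superiorize
        for a_i, b_i in zip(A, b):                          # feasibility sweep
            x = project_halfspace(x, a_i, b_i)
    return x

# Toy LP: minimize x + y subject to x + y >= 1, x >= 0, y >= 0,
# written as Ax <= b.
A = np.array([[-1.0, -1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([-1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
x = linsup(A, b, c, x0=[2.0, 2.0])
print(x, c @ x)   # feasible point near the face x + y = 1; value near 1
```

As in the abstract, the result is a feasible point with a reduced (not necessarily minimal) target value; the projections come last in each sweep, so feasibility-seeking is never abandoned.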
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
Multi-Objective Constraint Satisfaction for Mobile Robot Area Defense
2010-03-01
...to alert the other agents and ensure trust in the system. This research presents an algorithm that tasks robots to meet the two specific goals of... The problem is defined as a constraint satisfaction problem solved using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Both goals of...
Implementing a Multiple Criteria Model Base in Co-Op with a Graphical User Interface Generator
1993-09-23
PROMETHEE. A. The Algorithms. 1. Basic Algorithm of PROMETHEE I and PROMETHEE II. a. Use of the Algorithm in PROMETHEE I. b. Use of the Algorithm in PROMETHEE II. 2. Algorithm of PROMETHEE V. B. Screen Designs of PROMETHEE. 1. PROMETHEE I and PROMETHEE II. a...
Implementing the Continued Fraction Algorithm on the Illiac IV.
1980-01-01
...Illinois University in 1975. Originally, the program was only capable of factoring numbers up to 30 decimal digits in length, but a number of improvements... Author: Wunderlich, Mathematical Sciences Dept., Northern Illinois University, DeKalb, IL 60115; contract F49620-79-C-0199; program element 61102F, project/task 2304/A6.
West, Caroline; Ploth, David; Fonner, Virginia; Mbwambo, Jessie; Fredrick, Francis; Sweat, Michael
2016-04-01
Noncommunicable diseases are on pace to outnumber infectious diseases as the leading cause of death in sub-Saharan Africa, yet many questions remain unanswered concerning effective methods of screening for type II diabetes mellitus (DM) in this resource-limited setting. We aim to design a screening algorithm for type II DM that optimizes the sensitivity and specificity of identifying individuals with undiagnosed DM, as well as affordability to health systems and individuals. Baseline demographic and clinical data, including hemoglobin A1c (HbA1c), were collected from 713 participants using probability sampling of the general population. We used these data, along with model parameters obtained from the literature, to mathematically model 8 proposed DM screening algorithms, while optimizing the sensitivity and specificity using Monte Carlo and Latin hypercube simulation. An algorithm that combines risk assessment and measurement of fasting blood glucose was found to be superior for the most resource-limited settings (sensitivity 68%, specificity 99%, and cost per patient having DM identified of $2.94). Incorporating HbA1c testing improves the sensitivity to 75.62% but raises the cost per DM case identified to $6.04. The preferred algorithms are heavily biased to diagnose those with more severe cases of DM. Using basic risk assessment tools and fasting blood sugar testing in lieu of HbA1c testing in resource-limited settings could allow for significantly more feasible DM screening programs with reasonable sensitivity and specificity.
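The kind of algorithm evaluation reported above can be sketched by simulation. The Python snippet below models a two-stage screen (a risk questionnaire gating a fasting blood glucose test) and reports program sensitivity, specificity, and cost per case identified; all prevalence, test-accuracy, and cost numbers are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100_000, prev=0.09, risk_cost=0.10, fbg_cost=1.00,
             risk_sens=0.85, risk_spec=0.70, fbg_sens=0.80, fbg_spec=0.99):
    """Two-stage screen: a cheap risk questionnaire gates a fasting blood
    glucose (FBG) test.  Returns program sensitivity, specificity, and
    cost per true DM case identified.  All rates/costs are illustrative."""
    dm = rng.random(n) < prev
    # Stage 1: risk assessment flags individuals for follow-up testing.
    flag = np.where(dm, rng.random(n) < risk_sens, rng.random(n) > risk_spec)
    # Stage 2: FBG is run only for flagged individuals.
    pos = flag & np.where(dm, rng.random(n) < fbg_sens, rng.random(n) > fbg_spec)
    cost = n * risk_cost + flag.sum() * fbg_cost
    tp, fp = (pos & dm).sum(), (pos & ~dm).sum()
    sens = tp / dm.sum()
    spec = 1 - fp / (~dm).sum()
    return sens, spec, cost / tp

print(simulate())   # e.g. program sensitivity near 0.85 * 0.80 = 0.68
```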
Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad
2018-05-01
The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures by incorporating a single layer structure of neural networks optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of networks by defining an error-based cost function in mean square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
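To make the cost-function construction concrete, the following Python sketch builds the mean-square residual of Painlevé II, y'' = 2y^3 + xy + alpha, for a single-hidden-layer tanh network whose second derivative is available in closed form. The domain, the initial conditions, and the plain random search (standing in for the paper's GA plus SQP/active-set hybrid) are illustrative assumptions.

```python
import numpy as np

alpha, m = 1.0, 10                         # Painleve II parameter; hidden units
xs = np.linspace(0.0, 1.0, 20)             # collocation points (assumed domain)
rng = np.random.default_rng(1)

def unpack(w):
    return w[:m], w[m:2 * m], w[2 * m:]    # input weights a, biases b, output c

def net(w, x):
    a, b, c = unpack(w)
    return np.tanh(np.outer(x, a) + b) @ c

def net_dd(w, x):
    """Analytic second derivative: (d2/dx2) tanh(ax+b) = -2 a^2 t (1 - t^2)."""
    a, b, c = unpack(w)
    t = np.tanh(np.outer(x, a) + b)
    return (-2.0 * a**2 * t * (1.0 - t**2)) @ c

def cost(w, y0=1.0, dy0=0.0):
    """MSE residual of y'' = 2 y^3 + x y + alpha, plus penalties tying the
    network to assumed initial conditions y(0) = y0, y'(0) = dy0."""
    a, b, c = unpack(w)
    y = net(w, xs)
    res = net_dd(w, xs) - 2.0 * y**3 - xs * y - alpha
    y_at0 = c @ np.tanh(b)
    dy_at0 = ((1.0 - np.tanh(b)**2) * a) @ c
    return np.mean(res**2) + (y_at0 - y0)**2 + (dy_at0 - dy0)**2

# Plain random search stands in for the paper's GA + SQP/active-set hybrid.
best = rng.normal(size=3 * m)
best_cost = cost(best)
for _ in range(5000):
    trial = best + 0.1 * rng.normal(size=3 * m)
    trial_cost = cost(trial)
    if trial_cost < best_cost:
        best, best_cost = trial, trial_cost
print(best_cost)
```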
2017-02-19
...software systems: the students design and build robotics software towards real-world applications, without being distracted by hardware issues; (ii) it... Robotics programs for high school students require the students to focus on building and integrating the hardware that makes up the robot, at the expense of designing and... robotics programs focus on the mechanics; as a result, they do not have room for students to design and implement relatively complex software systems, as...
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision maker's preferences are incorporated within fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by conflicting goals of energy production, water quality and conservation releases. Results provide insight into compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
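A minimal sketch of the fuzzy-compromise idea: linear membership functions grade each objective between a worst and a best level, and the max-min (Bellman-Zadeh) rule picks the policy with the largest minimum satisfaction. The policies, bounds, and objective values below are hypothetical, and the paper's Reardon-type memberships differ in detail.

```python
def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst acceptable level, 1 at the
    best; works for minimized or maximized objectives via the bound order."""
    if best == worst:
        return 1.0
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

# Candidate operating policies: (energy MWh, quality-violation index, spill)
# -- hypothetical numbers for three conflicting objectives.
policies = {
    "A": (920.0, 0.30, 14.0),
    "B": (870.0, 0.10, 10.0),
    "C": (980.0, 0.55, 22.0),
}
bounds = [(800.0, 1000.0),   # energy: worst 800, best 1000 (maximize)
          (0.6, 0.0),        # water-quality violation: worst 0.6, best 0
          (25.0, 5.0)]       # conservation-release deficit: worst 25, best 5

def satisfaction(obj):
    mus = [membership(v, w, b) for v, (w, b) in zip(obj, bounds)]
    return min(mus)          # max-min (Bellman-Zadeh) aggregation

best = max(policies, key=lambda k: satisfaction(policies[k]))
print({k: round(satisfaction(v), 3) for k, v in policies.items()}, "->", best)
```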
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Downie, J. Stephen
2005-09-01
The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform JAVA-based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-JAVA-based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]
Java-based Graphical User Interface for MAVERIC-II
NASA Technical Reports Server (NTRS)
Seo, Suk Jai
2005-01-01
A computer program entitled "Marshall Aerospace Vehicle Representation in C II" (MAVERIC-II) is a vehicle flight simulation program written primarily in the C programming language. It was written by James W. McCarter at NASA/Marshall Space Flight Center. The goal of the MAVERIC-II development effort is to provide a simulation tool that facilitates the rapid development of high-fidelity flight simulations for launch, orbital, and reentry vehicles of any user-defined configuration for all phases of flight. MAVERIC-II has been found invaluable in performing flight simulations for various Space Transportation Systems. The flexibility provided by MAVERIC-II has allowed several different launch vehicles, including the Saturn V, a Space Launch Initiative Two-Stage-to-Orbit concept and a Shuttle-derived launch vehicle, to be simulated during ascent and portions of on-orbit flight in an extremely efficient manner. It was found that MAVERIC-II provided the high-fidelity vehicle and flight environment models as well as the program modularity to allow efficient integration, modification and testing of advanced guidance and control algorithms. In addition to serving as an analysis tool for technology development, many researchers have found MAVERIC-II to be an efficient, powerful analysis tool that evaluates guidance, navigation, and control designs, vehicle robustness, and requirements. MAVERIC-II is currently designed to execute in a UNIX environment. The input to the program is composed of three segments: (1) the vehicle models, such as propulsion, aerodynamics, and guidance, navigation, and control; (2) the environment models, such as atmosphere and gravity; and (3) a simulation framework which is responsible for executing the vehicle and environment models, propagating the vehicle's states forward in time, and handling user input/output. MAVERIC users prepare data files for the above models and run the simulation program. They can see the output on screen and/or store it in files and examine the output data later. Users can also view the output stored in output files by calling a plotting program such as gnuplot. A typical scenario of the use of MAVERIC consists of three steps: editing existing input data files, running MAVERIC, and plotting output results.
ASR-9 processor augmentation card (9-PAC) phase II scan-scan correlator algorithms
DOT National Transportation Integrated Search
2001-04-26
The report documents the scan-scan correlator (tracker) algorithm developed for Phase II of the ASR-9 Processor Augmentation Card (9-PAC) project. The improved correlation and tracking algorithms in 9-PAC Phase II decrease the incidence of false-alar...
1982-11-01
...algorithm for turning-region boundary value problem. d. Program control parameters: ALPHA ((Qq)max, the maximum value of Qq in the present coding); BETA; BLOSS; ... parameters available for either system description or program control (these parameters are currently unused, so they are set equal to zero); IGUESS, a parameter that controls the initial choices of first-shoot values along y = 0. IGUESS = 1: discretized versions of P(r, 0), T(r, 0), and u(r, 0) must...
Hansoti, Bhakti; Jenson, Alexander; Kironji, Antony G; Katz, Joanne; Levin, Scott; Rothman, Richard; Kelen, Gabor D; Wallis, Lee A
2017-01-01
In low resource settings, an inadequate number of trained healthcare workers and high volumes of children presenting to Primary Healthcare Centers (PHC) result in prolonged waiting times and significant delays in identifying and evaluating critically ill children. The Sick Children Require Emergency Evaluation Now (SCREEN) program, a simple six-question screening algorithm administered by lay healthcare workers, was developed in 2014 to rapidly identify critically ill children and to expedite their care at the point of entry into a clinic. We sought to determine the impact of SCREEN on waiting times for critically ill children after real-world implementation in Cape Town, South Africa. This is a prospective, observational implementation-effectiveness hybrid study that sought to determine: (1) the impact of SCREEN implementation on waiting times as a primary outcome measure, and (2) the effectiveness of the SCREEN tool in accurately identifying critically ill children when utilised by the QM and adherence by the QM to the SCREEN algorithm as secondary outcome measures. The study was conducted in two phases, Phase I control (pre-SCREEN implementation, three months in 2014) and Phase II (post-SCREEN implementation, two distinct three-month periods in 2016). In Phase I, 1600 (92.38%) of 1732 children presenting to 4 clinics had sufficient data for analysis and comprised the control sample. In Phase II, all 3383 of the children presenting to the 26 clinics during the sampling time frame had sufficient data for analysis. The proportion of critically ill children who saw a professional nurse within 10 minutes increased tenfold from 6.4% to 64% (Phase I to Phase II), with the median time to seeing a professional nurse reduced from 100.3 minutes to 4.9 minutes (p < .001, respectively). Overall, layperson screening compared to Integrated Management of Childhood Illnesses (IMCI) designation by a nurse had a sensitivity of 94.2% and a specificity of 88.1%, despite large variance in adherence to the SCREEN algorithm across clinics. The SCREEN program, when implemented in a real-world setting, can significantly reduce waiting times for critically ill children in PHCs; however, further work is required to improve the implementation of this innovative program.
Installation of automatic control at experimental breeder reactor II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, H.A.; Booty, W.F.; Chick, D.R.
1985-08-01
The Experimental Breeder Reactor II (EBR-II) has been modified to permit automatic control capability. Necessary mechanical and electrical changes were made on a regular control rod position; the motor, gears, and controller were replaced. A digital computer system was installed that has the programming capability for varied power profiles. The modifications permit transient testing at EBR-II. Experiments were run that increased power linearly by as much as 4 MW/s (16% of the initial power of 25 MW(thermal) per second), held power constant, and decreased power at a rate no slower than the increase rate. Thus the performance of the automatic control algorithm, the mechanical and electrical control equipment, and the qualifications of the driver fuel for future power change experiments were all demonstrated.
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of drainage area for each node, which requires a large amount of communication if run in parallel. To overcome this difficulty, we developed two parallel algorithms for an LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
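The drainage-area computation that drives the communication cost can be sketched serially in a few lines of Python: accumulate cell areas downstream in topological (upstream-to-downstream) order over the flow-receiver graph. The parallel algorithms in the abstract distribute exactly this accumulation; the tiny network below is illustrative.

```python
from collections import deque

def drainage_area(receiver, cell_area=1.0):
    """Accumulate upstream drainage area on a stream net.  receiver[i] is
    the index of the node that node i drains to (or i itself for outlets).
    Nodes are processed in topological order (Kahn-style)."""
    n = len(receiver)
    indeg = [0] * n
    for i, r in enumerate(receiver):
        if r != i:
            indeg[r] += 1
    area = [cell_area] * n
    queue = deque(i for i in range(n) if indeg[i] == 0)   # ridge cells
    while queue:
        i = queue.popleft()
        r = receiver[i]
        if r != i:
            area[r] += area[i]          # pass accumulated area downstream
            indeg[r] -= 1
            if indeg[r] == 0:
                queue.append(r)
    return area

# Tiny net: nodes 0 and 1 drain into 2; 2 and 3 drain into 4 (the outlet).
print(drainage_area([2, 2, 4, 4, 4]))   # [1.0, 1.0, 3.0, 1.0, 5.0]
```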
Feigl, Guenther C; Hiergeist, Wolfgang; Fellner, Claudia; Schebesch, Karl-Michael M; Doenitz, Christian; Finkenzeller, Thomas; Brawanski, Alexander; Schlaier, Juergen
2014-01-01
Diffusion tensor imaging (DTI)-based tractography has become an integral part of preoperative diagnostic imaging in many neurosurgical centers, and other nonsurgical specialties depend increasingly on DTI tractography as a diagnostic tool. The aim of this study was to analyze the anatomic accuracy of visualized white matter fiber pathways using different, readily available DTI tractography software programs. Magnetic resonance imaging scans of the head of 20 healthy volunteers were acquired using a Siemens Symphony TIM 1.5T scanner and a 12-channel head array coil. The standard settings of the scans in this study were 12 diffusion directions and 5-mm slices. The fornices were chosen as an anatomic structure for the comparative fiber tracking. Identical data sets were loaded into nine different fiber tracking packages that used different algorithms. The nine software packages and algorithms used were NeuroQLab (modified tensor deflection [TEND] algorithm), the Sörensen DTI Task Card (modified streamline tracking technique algorithm), the Siemens DTI module (modified fourth-order Runge-Kutta algorithm), six different software packages from Trackvis (interpolated streamline algorithm, modified FACT algorithm, second-order Runge-Kutta algorithm, Q-ball [FACT algorithm], tensorline algorithm, Q-ball [second-order Runge-Kutta algorithm]), DTI Query (modified streamline tracking technique algorithm), Medinria (modified TEND algorithm), Brainvoyager (modified TEND algorithm), DTI Studio (modified FACT algorithm), and the BrainLab DTI module based on the modified Runge-Kutta algorithm. A neuroradiologist, a magnetic resonance imaging physicist, and a neurosurgeon served as examiners. They were double-blinded with respect to the test subject and the fiber tracking software used in the presented images. Each examiner evaluated 301 images. The examiners were instructed to evaluate screenshots from the different programs based on two main criteria: (i) anatomic accuracy of the course of the displayed fibers and (ii) the number of fibers displayed outside the anatomic boundaries. The mean overall grade for anatomic accuracy was 2.2 (range, 1.1-3.6) with a standard deviation (SD) of 0.9. The mean overall grade for incorrectly displayed fibers was 2.5 (range, 1.6-3.5) with a SD of 0.6. The mean grade of the overall program ranking was 2.3 with a SD of 0.6. The overall mean grade of the program ranked number one (NeuroQLab) was 1.7 (range, 1.5-2.8). The mean overall grade of the program ranked last (BrainLab iPlan Cranial 2.6 DTI Module) was 3.3 (range, 1.7-4). The difference between the mean grades of these two programs was statistically highly significant (P < 0.0001). There was no statistically significant difference between the programs ranked 1-3: NeuroQLab, the Sörensen DTI Task Card, and the Siemens DTI module. The results of this study show that there is a statistically significant difference in the anatomic accuracy of the tested DTI fiber tracking programs. Whereas incorrectly displayed fibers could lead to wrong conclusions in the neurosciences, which rely heavily on this noninvasive imaging technique, in neurosurgery they could lead to surgical decisions potentially harmful for the patient if used without intraoperative cortical stimulation. DTI fiber tracking presents a valuable noninvasive preoperative imaging tool, which requires further validation after important standardization of the acquisition and processing techniques currently available.
Theory of post-block 2 VLBI observable extraction
NASA Technical Reports Server (NTRS)
Lowe, Stephen T.
1992-01-01
The algorithms used in the post-Block II fringe-fitting software called 'Fit' are described. The steps needed to derive the very long baseline interferometry (VLBI) charged-particle corrected group delay, phase delay rate, and phase delay (the latter without resolving cycle ambiguities) are presented beginning with the set of complex fringe phasors as a function of observation frequency and time. The set of complex phasors is obtained from the JPL/CIT Block II correlator. The output of Fit is the set of charged-particle corrected observables (along with ancillary information) in a form amenable to the software program 'Modest.'
NASA Technical Reports Server (NTRS)
Chu, W. P.; Chiou, E. W.; Larsen, J. C.; Thomason, L. W.; Rind, D.; Buglia, J. J.; Oltmans, S.; Mccormick, M. P.; Mcmaster, L. M.
1993-01-01
The operational inversion algorithm used for the retrieval of the water-vapor vertical profiles from the Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation data is presented. Unlike the algorithm used for the retrieval of aerosol, O3, and NO2, the water-vapor retrieval algorithm accounts for the nonlinear relationship between the concentration versus the broad-band absorption characteristics of water vapor. Problems related to the accuracy of the computational scheme, the accuracy of the removal of other interfering species, and the expected uncertainty of the retrieved profile are examined. Results are presented on the error analysis of the SAGE II water vapor retrieval, indicating that the SAGE II instrument produced good quality water vapor data.
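The nonlinearity mentioned above can be illustrated with a toy band model: in the strong-line (square-root) regime, transmission varies as exp(-k*sqrt(u)) rather than linearly in the absorber amount u, so the column must be recovered by numerically inverting the transmission curve. The model form and constant below are illustrative only, not the SAGE II operational parameterization.

```python
import numpy as np

def transmission(u, k=0.15):
    """Illustrative strong-line (square-root) band model: unlike the linear
    Beer-Lambert relation usable for O3 and NO2, T is nonlinear in the
    absorber amount u, so the retrieval must invert T(u) numerically."""
    return np.exp(-k * np.sqrt(u))

def retrieve_column(t_obs, lo=0.0, hi=1e4, tol=1e-8):
    """Bisection inversion of the monotone transmission curve."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if transmission(mid) > t_obs:
            lo = mid          # too little absorber: need more
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

u_true = 40.0
print(retrieve_column(transmission(u_true)))   # recovers ~40.0
```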
A Mixed-Integer Linear Programming Problem which is Efficiently Solvable.
1987-10-01
Personal author(s): Leiserson, Charles, and Saxe, James B. ...integer programming versions of the problem is not achievable in general for sparse instances of Problem M. The remainder of this paper is... relaxes each edge (i, j) by computing x_j <- min(x_j, x_i + a_ij). A simple analysis ... [13] indicates why the Bellman-Ford algorithm works.
NASA Technical Reports Server (NTRS)
Davy, W. C.; Green, M. J.; Lombard, C. K.
1981-01-01
The factored-implicit, gas-dynamic algorithm has been adapted to the numerical simulation of equilibrium reactive flows. Changes required in the perfect gas version of the algorithm are developed, and the method of coupling gas-dynamic and chemistry variables is discussed. A flow-field solution that approximates a Jovian entry case was obtained by this method and compared with the same solution obtained by HYVIS, a computer program much used for the study of planetary entry. Comparison of surface pressure distribution and stagnation line shock-layer profiles indicates that the two solutions agree well.
Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua
2017-01-01
The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives to minimize the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, and each of them changes with the iteration times. Numerical simulations are carried out based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms. PMID:28458687
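At the heart of both stages is NSGA-II's fast non-dominated sorting, which ranks the population into Pareto fronts before selection. A compact sketch of that standard building block follows; the bee-guided initialization and the three-part updating mechanism of BEG-NSGA-II are specific to the paper and not reproduced here.

    def non_dominated_sort(objectives):
        """Fast non-dominated sorting (Deb et al.). objectives is a list of
        tuples, all minimized, e.g. (makespan, max_workload, total_workload).
        Returns a list of fronts, each front a list of population indices."""
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
        n = len(objectives)
        dominated_by = [[] for _ in range(n)]   # j in dominated_by[i]: i dominates j
        counts = [0] * n                        # how many solutions dominate i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if dominates(objectives[i], objectives[j]):
                    dominated_by[i].append(j)
                elif dominates(objectives[j], objectives[i]):
                    counts[i] += 1
            if counts[i] == 0:
                fronts[0].append(i)
        k = 0
        while fronts[k]:
            nxt = []
            for i in fronts[k]:
                for j in dominated_by[i]:
                    counts[j] -= 1
                    if counts[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
            k += 1
        return fronts[:-1]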
NASA Astrophysics Data System (ADS)
Lee, Kwon-Ho; Kim, Wonkook
2017-04-01
The geostationary ocean color imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial (250 m for local coverage and 1 km for the full disk) and spectral (13 bands) resolution than the current operational GOCI-I mission. GOCI-II will be launched in 2018. This study presents a currently developing algorithm for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances as proxy data from a parameterized radiative transfer code in the 13 GOCI-II bands. Based on the proxy data, the algorithm performs cloud masking, gas absorption correction, aerosol inversion, and computation of the aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS level 2 surface reflectance products (MOD09). For the initial test period, the algorithm gave errors within 0.05 compared to MOD09. Further work will fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) algorithm development environment. The atmospherically corrected surface reflectance will be a standard GOCI-II product after launch.
Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.; ...
2012-01-01
A template-based generic programming approach was presented in Part I of this series of papers [Sci. Program. 20 (2012), 197–219] that separates the development effort of programming a physical model from that of computing additional quantities, such as derivatives, needed for embedded analysis algorithms. In this paper, we describe the implementation details for using the template-based generic programming approach for simulation and analysis of partial differential equations (PDEs). We detail several of the hurdles that we have encountered, and some of the software infrastructure developed to overcome them. We end with a demonstration where we present shape optimization and uncertainty quantification results for a 3D PDE application.
Solar Occultation Retrieval Algorithm Development
NASA Technical Reports Server (NTRS)
Lumpe, Jerry D.
2004-01-01
This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. It includes initial development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of the forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.
ATHENA, ARTEMIS, HEPHAESTUS: data analysis for X-ray absorption spectroscopy using IFEFFIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravel, B.; Newville, M.
2010-07-20
A software package for the analysis of X-ray absorption spectroscopy (XAS) data is presented. This package is based on the IFEFFIT library of numerical and XAS algorithms and is written in the Perl programming language using the Perl/Tk graphics toolkit. The programs described here are: (i) ATHENA, a program for XAS data processing, (ii) ARTEMIS, a program for EXAFS data analysis using theoretical standards from FEFF and (iii) HEPHAESTUS, a collection of beamline utilities based on tables of atomic absorption data. These programs enable high-quality data analysis that is accessible to novices while still powerful enough to meet the demands of an expert practitioner. The programs run on all major computer platforms and are freely available under the terms of a free software license.
Path planning on cellular nonlinear network using active wave computing technique
NASA Astrophysics Data System (ADS)
Yeniçeri, Ramazan; Yalçın, Müstak E.
2009-05-01
This paper introduces a simple algorithm to solve the robot path finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN) consisting of relaxation oscillators has been used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, starting from an initial point. The proposed algorithm works by observing the motion of the wave-front of the active waves. The host program first loads the arena model onto the active wave generator network and commands it to start the generation; it then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm completes, a vectorial data image is generated: the path from any pixel on this image to the active-wave-generating pixel is traced by the vectors on this image. The robot arena may be a complicated labyrinth or may have a simple geometry, but the arena surface must always be flat. Our Autowave Generator CNN implementation, hosted on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16,384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor, and the network image is depicted on the monitor simultaneously.
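In software terms, following the wave-front outward from its source is equivalent to a breadth-first wavefront expansion over the arena grid; the vectors of the final image simply point from each cell to a neighbor one step closer to the source. A minimal sketch of that idea, as a sequential stand-in for the FPGA's parallel oscillator dynamics (names are illustrative):

    from collections import deque

    def wavefront(grid, goal):
        """Breadth-first 'active wave' expansion from the goal cell.
        grid: 2-D list with 0 = free cell and 1 = obstacle.
        Returns a distance map (-1 = unreached); descending the distances
        from any start cell traces a shortest path to the goal."""
        rows, cols = len(grid), len(grid[0])
        dist = [[-1] * cols for _ in range(rows)]
        dist[goal[0]][goal[1]] = 0
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and dist[nr][nc] == -1:
                    dist[nr][nc] = dist[r][c] + 1    # wave-front reaches the cell
                    queue.append((nr, nc))
        return dist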
Scalable Multiplexed Ion Trap (SMIT) Program
2010-12-08
an integrated micromirror. The symmetric cross and the mirror trap had a number of complex design features. Both traps shaped the electrodes in...genetic algorithm. 6. Integrated micromirror. The Gen II linear trap (as well as the linear sections of the mirror and the cross) had a number of new...conventional imaging system constructed from off-the-shelf optical components and a micromirror located very close to the ion. A large fraction of photons
Improving Strategies via SMT Solving
NASA Astrophysics Data System (ADS)
Gawlitza, Thomas Martin; Monniaux, David
We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of Gawlitza and Seidl [17]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and performs, on contrived examples, exponentially many strategy improvement steps in the worst case; this is unsurprising, since we show that the associated abstract reachability problem is Π2P-complete.
Expert-guided evolutionary algorithm for layout design of complex space stations
NASA Astrophysics Data System (ADS)
Qian, Zhiqin; Bi, Zhuming; Cao, Qun; Ju, Weiguo; Teng, Hongfei; Zheng, Yang; Zheng, Siyu
2017-08-01
The layout of a space station should be designed in such a way that different equipment and instruments are placed for the station as a whole to achieve the best overall performance. The station layout design is a typical nondeterministic polynomial problem. In particular, managing the design complexity to achieve an acceptable solution within a reasonable timeframe poses a great challenge. In this article, a new evolutionary algorithm is proposed to meet this challenge, called the expert-guided evolutionary algorithm with a tree-like structure decomposition (EGEA-TSD). Two innovations in EGEA-TSD are: (i) to deal with the design complexity, the entire design space is divided into subspaces with a tree-like structure, which reduces the computation and facilitates experts' involvement in the solving process; (ii) a human-intervention interface is developed to allow experts' involvement in avoiding local optimums and accelerating convergence. To validate the proposed algorithm, the layout design of a space station is formulated as a multi-disciplinary design problem, the developed algorithm is programmed and executed, and the results are compared with those from two other algorithms, illustrating the superior performance of the proposed EGEA-TSD.
NASA Astrophysics Data System (ADS)
Wang, Hongfeng; Fu, Yaping; Huang, Min; Wang, Junwei
2016-03-01
The operation process design is one of the key issues in the manufacturing and service sectors. As a typical operation process, scheduling with consideration of the deteriorating effect has been widely studied; however, the current literature has studied only a single function requirement and rarely considers the multiple function requirements that are critical for a real-world scheduling process. In this article, two function requirements are involved in the design of a scheduling process with consideration of the deteriorating effect and are formulated into two objectives of a mathematical programming model. A novel multiobjective evolutionary algorithm is proposed to solve this model with a combination of three strategies, i.e. a multiple population scheme, a rule-based local search method and an elitist preserve strategy. To validate the proposed model and algorithm, a series of randomly generated instances are tested, and the experimental results indicate that the model is effective and the proposed algorithm achieves satisfactory performance, outperforming other state-of-the-art multiobjective evolutionary algorithms, such as the nondominated sorting genetic algorithm II and the multiobjective evolutionary algorithm based on decomposition, on all the test instances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by obviating the need for global mesh refinement. Its implementation on distributed memory multicomputers using the traditional data-parallel model has proven very inefficient due to the excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimum overheads compared to the "best" sequential implementation of the BW algorithm.
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint on the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
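For reference, the unconstrained MAP recursion that the k-segment algorithms generalize is the standard log-space Viterbi algorithm; the sketch below shows that baseline only, since the segment-count conditioning itself is specific to the paper.

    import numpy as np

    def viterbi(log_init, log_trans, log_emit, obs):
        """MAP hidden-state sequence of a discrete HMM, in log space.
        log_init: (K,) initial log-probabilities; log_trans: (K, K);
        log_emit: (K, M) per-state symbol log-probabilities; obs: length-T symbols."""
        K, T = len(log_init), len(obs)
        delta = np.empty((T, K))             # best log-score ending in each state
        psi = np.zeros((T, K), dtype=int)    # argmax back-pointers
        delta[0] = log_init + log_emit[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_trans   # [i, j]: from i to j
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):        # trace back-pointers
            path.append(int(psi[t, path[-1]]))
        return path[::-1]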
NASA Astrophysics Data System (ADS)
Ghezavati, V. R.; Beigi, M.
2016-12-01
During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet, together with the design of a multi-echelon, capacitated reverse logistics network, is considered, a setting that arises in many real-life logistics management situations. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) formulation for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in the GAMS software for producing the Pareto-optimal solutions of a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, and for medium-to-large-sized problems, the proposed NSGA-II works better than the ɛ-constraint method.
Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad
2012-06-01
The HLAMatchmaker algorithm, which allows the identification of "safe" acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used, in part due to the difficulty of using it in its current Excel format. Automation of this algorithm may universalize its use to benefit the allocation of allografts. Recently, we developed new software called EpHLA, the first computer program to automate the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing its time efficiency and quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients awaiting kidney transplants, were analyzed either by the conventional HLAMatchmaker method or by the automated EpHLA method. Users testing these two methods were asked to record: (i) the time required for completion of the analysis (in minutes); (ii) the number of eplets obtained for class I and class II HLA molecules; (iii) the categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) the determination of AMMs based on eplet reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster than the conventional HLAMatchmaker method. In particular, the EpHLA software was faster and more reliable, but equally accurate, in defining AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs and thus may be a very useful tool in the decision-making process of organ allocation for highly sensitized patients, as well as in many other applications.
Development and Evaluation of a Casualty Evacuation Model for a European Conflict.
1987-08-18
"Applications and Computations," IIE Transactions, 16, 2, 127-134 (1984). 3. Ali, A. I., Helgason, R. V., Kennington, J. L., and ... "Part II," Mathematical Programming, 1, 6-25 (1971). 38. Held, M., Wolfe, P., and Crowder, H., "Validation of Subgradient Optimization," Mathematical Programming... University of California, Los Angeles, CA (1971). 66. Swoveland, C., "A Two-Stage Decomposition Algorithm for a Generalized Multicommodity Flow Problem," INFOR
Comparison of reversible methods for data compression
NASA Astrophysics Data System (ADS)
Heer, Volker K.; Reinfelder, Hans-Erich
1990-07-01
Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we briefly review various methods and discuss their relevant advantages and disadvantages. In detail we evaluate first-order DPCM, pyramid transformation and S transformation. As coding algorithms we compare both fixed and adaptive Huffman coding and Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors, we take into account the CPU time and main memory required, both for compression and for decompression. For a realistic comparison we have implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.
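Of the coding algorithms compared above, fixed (static) Huffman coding is the easiest to sketch: symbol frequencies are counted once, and a prefix-free code is built by repeatedly merging the two least frequent subtrees. A minimal illustration, not the authors' C implementation:

    import heapq
    from collections import Counter

    def huffman_code(data):
        """Build a static Huffman code table {symbol: bitstring} for an
        iterable of symbols (e.g. pixel values or bytes)."""
        counts = Counter(data)
        if len(counts) == 1:                       # degenerate one-symbol input
            return {next(iter(counts)): "0"}
        # Heap entries: (frequency, tie-breaker id, partial code table).
        heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(counts.items())]
        heapq.heapify(heap)
        uid = len(heap)
        while len(heap) > 1:
            n1, _, c1 = heapq.heappop(heap)        # two least frequent subtrees
            n2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (n1 + n2, uid, merged))
            uid += 1
        return heap[0][2]

Adaptive Huffman coding updates the code as symbols arrive, and Lempel-Ziv replaces the symbol model with a growing dictionary of previously seen strings; the trade-offs measured in the paper (compression factor versus CPU time and memory) follow from those differences.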
A Large-Scale Assessment of Nucleic Acids Binding Site Prediction Programs
Miao, Zhichao; Westhof, Eric
2015-01-01
Computational prediction of nucleic acid binding sites in proteins is necessary to disentangle functional mechanisms in most biological processes and to explore the binding mechanisms. Several strategies have been proposed, but the state-of-the-art approaches display a great diversity in i) the definition of nucleic acid binding sites; ii) the training and test datasets; iii) the algorithmic methods for the prediction strategies; iv) the performance measures and v) the distribution and availability of the prediction programs. Here we report a large-scale assessment of 19 web servers and 3 stand-alone programs on 41 datasets including more than 5000 proteins derived from 3D structures of protein-nucleic acid complexes. Well-defined binary assessment criteria (specificity, sensitivity, precision, accuracy…) are applied. We found that i) the tools have been greatly improved over the years; ii) some of the approaches suffer from theoretical defects and there is still room for sorting out the essential mechanisms of binding; iii) RNA binding and DNA binding appear to follow similar driving forces and iv) dataset bias may exist in some methods. PMID:26681179
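The binary assessment criteria used in such benchmarks all derive from the per-residue confusion matrix. A small sketch of their computation (variable names are illustrative):

    def binary_metrics(predicted, actual):
        """Confusion-matrix criteria for binary binding-site labels.
        predicted and actual are equal-length sequences of 0/1 per residue;
        assumes both classes occur, otherwise some ratios are undefined."""
        tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
        tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
        fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
        fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
        return {
            "sensitivity": tp / (tp + fn),   # fraction of true sites recovered
            "specificity": tn / (tn + fp),   # fraction of non-sites rejected
            "precision":   tp / (tp + fp),   # fraction of predictions that are real
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        }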
Yavuz, Sevtap Caglar; Sabanci, Nazmiye; Saripinar, Emin
2018-01-01
The EC-GA method was employed in this study as a 4D-QSAR method for the identification of the pharmacophore (Pha) of ruthenium(II) arene complex derivatives and the quantitative prediction of activity. The arrangement of the computed geometric and electronic parameters for the atoms and bonds of each compound in a matrix is known as the electron-conformational matrix of congruity (ECMC); it contains the data from HF/3-21G level calculations. Compounds were represented by a group of conformers for each compound rather than a single conformation, known as the fourth dimension, to generate the model. ECMCs were compared within a certain range of tolerance values using the EMRE program, and the pharmacophore group responsible for the activity of ruthenium(II) arene complex derivatives was found. For selecting the sub-parameters with the most effect on activity in the series and for calculating theoretical activity values, the non-linear least squares method and the genetic algorithm included in the EMRE program were used. In addition, compounds were classified into training and test sets, and the accuracy of the models was tested statistically by cross-validation. The model for the training and test sets attained by the optimum 10 parameters gave highly satisfactory results with R^2_training = 0.817, q^2 = 0.718, SE_training = 0.066, q^2_ext1 = 0.867, q^2_ext2 = 0.849, q^2_ext3 = 0.895, ccc_tr = 0.895, ccc_test = 0.930 and ccc_all = 0.905. Since there is no 4D-QSAR research on metal-based organic complexes in the literature, this study is original and provides a powerful tool for the design of novel and selective ruthenium(II) arene complexes. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Sambot II: A self-assembly modular swarm robot
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Wei, Hongxing; Yang, Bo; Jiang, Cancan
2018-04-01
The new-generation self-assembly modular swarm robot Sambot II, which is based on the original self-assembly modular swarm robot Sambot and adopts a laser and camera module for information collection, is introduced in this manuscript. The visual control algorithm of Sambot II is detailed, and the feasibility of the algorithm is verified by laser and camera experiments. At the end of this manuscript, autonomous docking experiments with two Sambot II robots are presented. The experimental results are shown and analyzed to verify the feasibility of the whole scheme of Sambot II.
NASA Astrophysics Data System (ADS)
Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.
2017-12-01
Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching with a weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
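The crowding distance is NSGA-II's diversity measure: for each objective, the solutions on a front are sorted, and each interior solution is credited with the normalized gap between its two neighbors. The sketch below folds a per-objective weight into that computation, which is one plausible reading of a "weighted crowding distance"; the precise WNSGA II definition is given in the paper.

    import numpy as np

    def weighted_crowding_distance(front, weights):
        """front: (n, m) array of objective values for one non-dominated front;
        weights: length-m per-objective weights (all ones recovers NSGA-II).
        Returns a length-n array; larger = less crowded, kept preferentially."""
        front = np.asarray(front, dtype=float)
        n, m = front.shape
        dist = np.zeros(n)
        for k in range(m):
            order = np.argsort(front[:, k])
            dist[order[0]] = dist[order[-1]] = np.inf    # keep boundary solutions
            span = front[order[-1], k] - front[order[0], k]
            if span == 0.0:
                continue
            gaps = (front[order[2:], k] - front[order[:-2], k]) / span
            dist[order[1:-1]] += weights[k] * gaps       # weighted neighbor gap
        return dist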
Model-Based Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Kumar, Aditya; Viassolo, Daniel
2008-01-01
The Model Based Fault Tolerant Control (MBFTC) task was conducted under the NASA Aviation Safety and Security Program. The goal of MBFTC is to develop and demonstrate real-time strategies to diagnose and accommodate anomalous aircraft engine events such as sensor faults, actuator faults, or turbine gas-path component damage that can lead to in-flight shutdowns, aborted takeoffs, asymmetric thrust/loss of thrust control, or engine surge/stall events. A suite of model-based fault detection algorithms was developed and evaluated. Based on the performance and maturity of the developed algorithms, two approaches were selected for further analysis: (i) multiple-hypothesis testing and (ii) neural networks; both used residuals from an Extended Kalman Filter to detect the occurrence of the selected faults. A simple fusion algorithm was implemented to combine the results from each algorithm to obtain an overall estimate of the identified fault type and magnitude. The identification of the fault type and magnitude enabled the use of an online fault accommodation strategy to correct for the adverse impact of these faults on engine operability, thereby enabling continued engine operation in their presence. The performance of the fault detection and accommodation algorithm was extensively tested in a simulation environment.
Nishida, Takahiro; Sonoda, Hiromichi; Oishi, Yasuhisa; Tanoue, Yoshihisa; Nakashima, Atsuhiro; Shiokawa, Yuichi; Tominaga, Ryuji
2014-04-01
The European System for Cardiac Operative Risk Evaluation (EuroSCORE) II was developed to improve the overestimation of surgical risk associated with the original (additive and logistic) EuroSCOREs. The purpose of this study was to evaluate the significance of the EuroSCORE II by comparing its performance with that of the original EuroSCOREs in Japanese patients undergoing surgery on the thoracic aorta. We calculated the predicted mortalities according to the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms in 461 patients who underwent surgery on the thoracic aorta during a period of 20 years (1993-2013). The actual in-hospital mortality rates in the low- (additive EuroSCORE of 3-6), moderate- (7-11) and high-risk (≥11) groups were 1.3, 6.2 and 14.4%, respectively (7.2% overall). Among the three risk groups, the expected mortality rates were 5.5 ± 0.6, 9.1 ± 0.7 and 13.5 ± 0.2% (9.5 ± 0.1% overall) by the additive EuroSCORE algorithm, 5.3 ± 0.1, 16 ± 0.4 and 42.4 ± 1.3% (19.9 ± 0.7% overall) by the logistic EuroSCORE algorithm and 1.6 ± 0.1, 5.2 ± 0.2 and 18.5 ± 1.3% (7.4 ± 0.4% overall) by the EuroSCORE II algorithm, indicating poor prediction (P < 0.0001) of the mortality in the high-risk group, especially by the logistic EuroSCORE. The areas under the receiver operating characteristic curves of the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms were 0.6937, 0.7169 and 0.7697, respectively. Thus, the mortality expected by the EuroSCORE II most closely matched the actual mortality in all three risk groups. In contrast, the mortality expected by the logistic EuroSCORE overestimated the risks in the moderate- (P = 0.0002) and high-risk (P < 0.0001) patient groups. Although all of the original EuroSCOREs and the EuroSCORE II appreciably predicted the surgical mortality for thoracic aortic surgery in Japanese patients, the EuroSCORE II best predicted the mortalities in all risk groups.
Guidance and Control Algorithms for the Mars Entry, Descent and Landing Systems Analysis
NASA Technical Reports Server (NTRS)
Davis, Jody L.; Dwyer Cianciolo, Alicia M.; Powell, Richard W.; Shidner, Jeremy D.; Garcia-Llama, Eduardo
2010-01-01
The purpose of the Mars Entry, Descent and Landing Systems Analysis (EDL-SA) study was to identify feasible technologies that will enable human exploration of Mars, specifically to deliver large payloads to the Martian surface. This paper focuses on the methods used to guide and control two of the contending technologies, a mid-lift-to-drag (L/D) rigid aeroshell and a hypersonic inflatable aerodynamic decelerator (HIAD), through the entry portion of the trajectory. The Program to Optimize Simulated Trajectories II (POST2) is used to simulate and analyze the trajectories of the contending technologies and guidance and control algorithms. Three guidance algorithms are discussed in this paper: EDL theoretical guidance, Numerical Predictor-Corrector (NPC) guidance and Analytical Predictor-Corrector (APC) guidance. EDL-SA also considered two forms of control: bank angle control, similar to that used by Apollo and the Space Shuttle, and a center-of-gravity (CG) offset control. This paper presents the performance comparison of these guidance algorithms and summarizes the results as they impact the technology recommendations for future study.
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh
2015-07-01
This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current-generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for post- and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in a parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM-series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
An improved NSGA-II algorithm for mixed model assembly line balancing
NASA Astrophysics Data System (ADS)
Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong
2018-05-01
Aiming at the problems of assembly line balancing and path optimization for material vehicles in a mixed model manufacturing system, a multi-objective mixed model assembly line (MMAL) model, based on optimization objectives, influencing factors and constraints, is established. According to the specific situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. An environment self-detecting operator, which is used to detect whether the environment changes, is adopted in the algorithm. Finally, the effectiveness of the proposed model and algorithm is verified by examples in a concrete mixing system.
NASA Astrophysics Data System (ADS)
Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.
2017-11-01
This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery and recycling centers. In this supply chain, centers have multiple levels; a price increase factor is considered for operational costs at the centers; inventory and shortage (including lost sales and backlog) are allowed at production centers; and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departures, are considered, in such a way that the sum of system costs and the sum of the maximum times at each level are minimized. The aforementioned problem is formulated as a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely the non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large sizes. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely the number of Pareto solutions, mean ideal distance, spacing metric and quality metric, are used to compare NSGA-II and MOPSO.
Optimising operational amplifiers by evolutionary algorithms and gm/Id method
NASA Astrophysics Data System (ADS)
Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.
2016-10-01
The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias-level conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful to accelerate the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees robustness of the feasible solutions to PVT variations.
NASA Technical Reports Server (NTRS)
Russell, Philip B.; Bauman, Jill J.
2000-01-01
This SAGE II Science Team task focuses on the development of a multi-wavelength, multi-sensor Look-Up-Table (LUT) algorithm for retrieving information about stratospheric aerosols from global satellite-based observations of particulate extinction. The LUT algorithm combines the 4-wavelength SAGE II extinction measurements (0.385 <= lambda <= 1.02 microns) with the 7.96 micron and 12.82 micron extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument, thus increasing the information content available from either sensor alone. The algorithm uses the SAGE II/CLAES composite spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R_eff, surface area S, volume V and size distribution width sigma_g.
DoE Phase II SBIR: Spectrally-Assisted Vehicle Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villeneuve, Pierre V.
2013-02-28
The goal of this Phase II SBIR is to develop a prototype software package to demonstrate spectrally-aided vehicle tracking performance. The primary application is to demonstrate improved target vehicle tracking performance in complex environments where traditional spatial tracker systems may show reduced performance. Example scenarios in Figure 1 include a) the target vehicle obscured by a large structure for an extended period of time, or b) the target engaging in extreme maneuvers amongst other civilian vehicles. The target information derived from spatial processing is unable to differentiate between the green and the red vehicle. Spectral signature exploitation enables comparison of new candidate targets with existing track signatures. The ambiguity in this confusing scenario is resolved by folding spectral analysis results into the target nomination and association processes. Figure 3 shows a number of example spectral signatures from a variety of natural and man-made materials. The work performed over the two-year effort was divided into three general areas: algorithm refinement, software prototype development, and prototype performance demonstration. The tasks performed under this Phase II to accomplish the program goals were as follows: 1. Acquire relevant vehicle target datasets to support the prototype. 2. Refine algorithms for target spectral feature exploitation. 3. Implement a prototype multi-hypothesis target tracking software package. 4. Demonstrate and quantify tracking performance using relevant data.
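A common way to score how well a candidate detection's spectrum matches an existing track signature is the spectral angle between the two spectra, which is insensitive to overall brightness changes. The report does not specify the metric used, so the following is an illustrative sketch only:

    import numpy as np

    def spectral_angle(candidate, track_signature):
        """Angle (radians) between two spectra; smaller = better match.
        Insensitive to a uniform scaling of either spectrum (illumination)."""
        a = np.asarray(candidate, dtype=float)
        b = np.asarray(track_signature, dtype=float)
        cos_ang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return float(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

In a multi-hypothesis tracker, such a score can be added to the spatial association cost so that, for example, the green and red vehicles above remain distinguishable through an occlusion.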
NASA Astrophysics Data System (ADS)
Braun, N.; Hauth, T.; Pulvermacher, C.; Ritter, M.
2017-10-01
Today’s analyses for high-energy physics (HEP) experiments involve processing a large amount of data with highly specialized algorithms. The contemporary workflow from recorded data to final results is based on the execution of small scripts - often written in Python or ROOT macros which call complex compiled algorithms in the background - to perform fitting procedures and generate plots. During recent years interactive programming environments, such as Jupyter, became popular. Jupyter allows developing Python-based applications, so-called notebooks, which bundle code, documentation and results, e.g. plots. Advantages over classical script-based approaches are the ability to recompute only parts of the analysis code, which allows for fast and iterative development, and a web-based user frontend, which can be hosted centrally and only requires a browser on the user side. In our novel approach, Python and Jupyter are tightly integrated into the Belle II Analysis Software Framework (basf2), currently being developed for the Belle II experiment in Japan. This allows developing code in Jupyter notebooks for every aspect of the event simulation, reconstruction and analysis chain. These interactive notebooks can be hosted as a centralized web service via jupyterhub with docker and used by all scientists of the Belle II Collaboration. Because of its generality and encapsulation, the setup can easily be scaled to large installations.
Advanced Avionics Verification and Validation Phase II (AAV&V-II)
1999-01-01
Algorithm; 2.7 The Weak Control Dependence Algorithm; 2.8 The Indirect Dependence Algorithms; 2.9 Improvements to the Pleiades Object...describes some modifications made to the Pleiades object management system to increase the speed of the analysis. 2.1 THE INTERPROCEDURAL CONTROL FLOW...slow as the edges in the graph increased. The time to insert edges was addressed by enhancements to the Pleiades object management system, which are
Yang, Yu; Fritzsching, Keith J; Hong, Mei
2013-11-01
A multi-objective genetic algorithm is introduced to predict the assignment of protein solid-state NMR (SSNMR) spectra with partial resonance overlap and missing peaks due to broad linewidths, molecular motion, and low sensitivity. This non-dominated sorting genetic algorithm II (NSGA-II) aims to identify all possible assignments that are consistent with the spectra and to compare the relative merit of these assignments. Our approach is modeled after the recently introduced Monte-Carlo simulated-annealing (MC/SA) protocol, with the key difference that NSGA-II simultaneously optimizes multiple assignment objectives instead of searching for possible assignments based on a single composite score. The multiple objectives include maximizing the number of consistently assigned peaks between multiple spectra ("good connections"), maximizing the number of used peaks, minimizing the number of inconsistently assigned peaks between spectra ("bad connections"), and minimizing the number of assigned peaks that have no matching peaks in the other spectra ("edges"). Using six SSNMR protein chemical shift datasets with varying levels of imperfection that was introduced by peak deletion, random chemical shift changes, and manual peak picking of spectra with moderately broad linewidths, we show that the NSGA-II algorithm produces a large number of valid and good assignments rapidly. For high-quality chemical shift peak lists, NSGA-II and MC/SA perform similarly well. However, when the peak lists contain many missing peaks that are uncorrelated between different spectra and have chemical shift deviations between spectra, the modified NSGA-II produces a larger number of valid solutions than MC/SA, and is more effective at distinguishing good from mediocre assignments by avoiding the hazard of suboptimal weighting factors for the various objectives. These two advantages, namely diversity and better evaluation, lead to a higher probability of predicting the correct assignment for a larger number of residues. On the other hand, when there are multiple equally good assignments that are significantly different from each other, the modified NSGA-II is less efficient than MC/SA in finding all the solutions. This problem is solved by a combined NSGA-II/MC algorithm, which appears to have the advantages of both NSGA-II and MC/SA. This combination algorithm is robust for the three most difficult chemical shift datasets examined here and is expected to give the highest-quality de novo assignment of challenging protein NMR spectra.
δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi
In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms in combinatorial optimization problems. This method eliminates similar individuals in objective space to fairly distribute selection among the different regions of the instantaneous Pareto front. We investigate four elimination methods, analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method with that of NSGA-II enhanced by controlled elitism.
NASA Astrophysics Data System (ADS)
Zhang, Yongjun; Lu, Zhixin
2017-10-01
Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used as localization algorithms in wireless sensor networks. However, the traditional convex programming algorithm suffers from too much overlap among wireless sensor nodes, which brings low positioning accuracy, so this paper proposes a new algorithm. It is mainly based on the traditional convex programming algorithm: the spectrum car dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the location area. Because the algorithm only adds the communication of the power value between the unknown node and the sensor nodes, the advantages of the convex programming algorithm, namely simplicity and real-time operation, are basically preserved. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.
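The underlying idea of convex-programming localization is to intersect the convex constraints implied by each anchor's measurement and take a point estimate from the remaining feasible region. A grid-based sketch of that base step follows; the UAV trajectory sampling and probability-density segmentation of the paper are not reproduced, and all names are illustrative.

    import numpy as np

    def feasible_region_centroid(anchors, max_ranges, xs, ys):
        """Intersect disk constraints |p - anchor_i| <= max_ranges[i] on a grid
        spanned by coordinate vectors xs, ys; return the centroid of the
        feasible region as the position estimate (None if it is empty)."""
        X, Y = np.meshgrid(xs, ys)
        feasible = np.ones_like(X, dtype=bool)
        for (ax, ay), r in zip(anchors, max_ranges):
            feasible &= np.hypot(X - ax, Y - ay) <= r    # shrink the region
        if not feasible.any():
            return None
        return float(X[feasible].mean()), float(Y[feasible].mean())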
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Then, outranking with PROMETHEE II helps the decision-maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on a multiobjective optimization problem is illustrated through two carefully referenced examples. PMID:19543537
Using machine learning algorithms to guide rehabilitation planning for home care clients.
Zhu, Mu; Zhang, Zhanyang; Hirdes, John P; Stolee, Paul
2007-12-20
Targeting older clients for rehabilitation is a clinical challenge and a research priority. We investigate the potential of machine learning algorithms - Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) - to guide rehabilitation planning for home care clients. This study is a secondary analysis of data on 24,724 longer-term clients from eight home care programs in Ontario. Data were collected with the RAI-HC assessment system, in which the Activities of Daily Living Clinical Assessment Protocol (ADLCAP) is used to identify clients with rehabilitation potential. For study purposes, a client is defined as having rehabilitation potential if there was: i) improvement in ADL functioning, or ii) discharge home. SVM and KNN results are compared with those obtained using the ADLCAP. For comparison, the machine learning algorithms use the same functional and health status indicators as the ADLCAP. The KNN and SVM algorithms achieved similar, substantially improved performance over the ADLCAP, although false positive and false negative rates were still fairly high (FP > .18, FN > .34 versus FP > .29, FN > .58 for ADLCAP). Results are used to suggest potential revisions to the ADLCAP. The machine learning algorithms achieved better predictions than the current protocol. Machine learning results are less readily interpretable, but can also be used to guide the development of improved clinical protocols.
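KNN is the simpler of the two classifiers: a client is labeled by majority vote among the k most similar training clients, with similarity measured over the same indicator vector the ADLCAP uses. A minimal sketch, illustrative only; the study's feature coding and choice of k are in the paper:

    import numpy as np

    def knn_has_rehab_potential(train_X, train_y, client, k=5):
        """Majority vote among the k nearest training clients.
        train_X: (n, d) indicator vectors; train_y: n labels, 1 = rehabilitation
        potential; client: length-d vector. Use an odd k to avoid ties."""
        distances = np.linalg.norm(np.asarray(train_X) - np.asarray(client), axis=1)
        nearest = np.argsort(distances)[:k]
        return int(np.asarray(train_y)[nearest].sum() * 2 > k)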
NASA Astrophysics Data System (ADS)
González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco
2013-12-01
This contribution focuses on the optimization of matching-based motion estimation algorithms widely used in video coding standards, using an Altera custom-instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is produced before the optimization, which locates code leaks; afterward, a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination between on-chip memory and SDRAM for the Nios II processor.
Chaves, Francisco A.; Lee, Alvin H.; Nayak, Jennifer; Richards, Katherine A.; Sant, Andrea J.
2012-01-01
The ability to track CD4 T cells elicited in response to pathogen infection or vaccination is critical because of the role these cells play in protective immunity. Coupled with advances in genome sequencing of pathogenic organisms, there is considerable appeal for implementation of computer-based algorithms to predict peptides that bind to the class II molecules, forming the complex recognized by CD4 T cells. Despite recent progress in this area, there is a paucity of data regarding their success in identifying actual pathogen-derived epitopes. In this study, we sought to rigorously evaluate the performance of multiple web-available algorithms by comparing their predictions and our results using purely empirical methods for epitope discovery in influenza that utilized overlapping peptides and cytokine Elispots, for three independent class II molecules. We analyzed the data in different ways, trying to anticipate how an investigator might use these computational tools for epitope discovery. We come to the conclusion that currently available algorithms can indeed facilitate epitope discovery, but all shared a high degree of false positive and false negative predictions. Therefore, efficiencies were low. We also found dramatic disparities among algorithms and between predicted IC50 values and true dissociation rates of peptide:MHC class II complexes. We suggest that improved success of predictive algorithms will depend less on changes in computational methods or increased data sets and more on changes in parameters used to “train” the algorithms that factor in elements of T cell repertoire and peptide acquisition by class II molecules. PMID:22467652
1980-01-01
is identified in the flow chart simply as "Compute VECT's (predictor solution)" and "Compute V's (corrector solution)." A significant portion of the... PRINCIPAL SUBROUTINES: WALLPOINT (ITER,DT); ITER - iteration index for the MacCormack Algorithm (ITER=1 for predictor...). WEILERSTEIN, RAY, MILLER. F33615-7-C-3016. GASL-TR-254-VOL-2; AFFDL-TR-79-3162-VOL-2.
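The predictor/corrector pair referenced in the flow chart is the defining structure of the MacCormack scheme: a forward-differenced predictor followed by a backward-differenced corrector whose result is averaged with the old solution. A minimal sketch for the linear advection equation u_t + c u_x = 0 (illustrative only; the report's solver treats a far more complex gas-dynamics system):

    import numpy as np

    def maccormack_advection(u, c, dx, dt, steps):
        """MacCormack predictor-corrector for u_t + c*u_x = 0 on a periodic
        1-D grid. Stable for |c*dt/dx| <= 1."""
        u = np.asarray(u, dtype=float).copy()
        s = c * dt / dx                          # Courant number
        for _ in range(steps):
            # Predictor (ITER=1): forward difference in space.
            u_pred = u - s * (np.roll(u, -1) - u)
            # Corrector (ITER=2): backward difference on the predicted values,
            # then average with the old solution.
            u = 0.5 * (u + u_pred - s * (u_pred - np.roll(u_pred, 1)))
        return u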
On adaptive learning rate that guarantees convergence in feedforward networks.
Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan
2006-09-01
This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. It is observed that such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima; this modification also helps in improving the convergence speed in some cases. Conditions for achieving a global minimum for this kind of algorithm have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) converge much faster than the other two algorithms to attain the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified.
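The general pattern such Lyapunov-based rules follow is to choose, at every step, a learning rate that provably decreases a Lyapunov candidate V (here the training loss) rather than keeping BP's fixed rate. The sketch below illustrates that pattern with one simple rate formula plus a decrease check; the actual LF I/II rate expressions come from the paper's convergence theorem and are not reproduced.

    import numpy as np

    def lyapunov_step(w, loss_fn, grad_fn, mu=1e-3):
        """One gradient step with an adaptively computed learning rate.
        The rate eta = V / (mu + ||g||^2) grows far from the minimum and
        shrinks near it (illustrative choice); halving enforces V decrease."""
        g = grad_fn(w)
        eta = loss_fn(w) / (mu + np.dot(g, g))
        w_new = w - eta * g
        while loss_fn(w_new) >= loss_fn(w) and eta > 1e-12:
            eta *= 0.5                      # back off until V(w_new) < V(w)
            w_new = w - eta * g
        return w_new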
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As a main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization model II. Here, the structure of the network contains three facility layers, including plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers. In the second stage, the decisions are the production amounts and the volume of transportation between plants and customers.
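LHS stratifies each uncertain dimension into as many equal-probability bins as there are scenarios and uses each bin exactly once, giving better coverage than plain Monte Carlo for the same scenario count. A minimal sketch of the sampling step (imposing the demand correlations, for example by Iman-Conover reordering, and the scenario reduction are separate steps not shown):

    import numpy as np

    def latin_hypercube(n_scenarios, n_vars, seed=None):
        """Latin Hypercube Sample in [0, 1)^n_vars: each variable's range is
        split into n_scenarios equal strata and every stratum is used once."""
        rng = np.random.default_rng(seed)
        strata = np.arange(n_scenarios)[:, None]          # 0, 1, ..., n-1
        u = (strata + rng.random((n_scenarios, n_vars))) / n_scenarios
        for j in range(n_vars):
            rng.shuffle(u[:, j])          # decouple the strata across variables
        return u                          # map through inverse CDFs for demands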
Lyu, Heng; Li, Xiaojun; Wang, Yannan; Jin, Qi; Cao, Kai; Wang, Qiao; Li, Yunmei
2015-10-15
Fourteen field campaigns were conducted in five inland lakes during different seasons between 2006 and 2013, and a total of 398 water samples with varying optical characteristics were collected. The characteristics were analyzed based on remote sensing reflectance, and an automatic two-step clustering method was applied for water classification. The inland waters could be clustered into three types, which we labeled water types I, II and III; from water types I to III, the effect of phytoplankton on the optical characteristics gradually decreased. Four chlorophyll-a retrieval algorithms for Case II water (two-band, three-band, four-band and SCI [Synthetic Chlorophyll Index]) were evaluated for the three water types based on the MERIS bands. Different MERIS bands were used for the three water types in each of the four algorithms. The four algorithms had different levels of retrieval accuracy for each water type, and no single algorithm could be successfully applied to all water types. For water types I and III, the three-band algorithm performed best, while the four-band algorithm had the highest retrieval accuracy for water type II. However, the three-band algorithm is preferable to the two-band algorithm for turbid eutrophic inland waters. The SCI algorithm is recommended for highly turbid water with a higher concentration of total suspended solids. Our research indicates that chlorophyll-a retrieval by remote sensing for optically contrasting inland waters requires a specific algorithm, based on the optical characteristics of the inland water bodies, to obtain higher estimation accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
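The band-ratio family of algorithms referenced here follows a common semi-analytical form; the three-band version combines a pigment-absorbing band, a nearby band to cancel non-pigment absorption, and a near-infrared band to normalize backscattering. A sketch with MERIS-like band centers (the coefficients are illustrative placeholders; in the study they are re-fitted per water type):

    def three_band_chla(R665, R709, R754, a=1.0, b=0.0):
        """Three-band chlorophyll-a index for turbid Case II waters:
        chla ~ a * ((1/R665 - 1/R709) * R754) + b, where the R values are
        remote sensing reflectances near 665, 709 and 754 nm (MERIS bands).
        a and b are calibration coefficients fitted to in-situ data."""
        return a * (1.0 / R665 - 1.0 / R709) * R754 + b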
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code ran faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms over finite groups are developed, and preliminary parallel implementations of group transforms for dihedral and symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques and were used in the investigation of various physical phenomena.
Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
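As a hedged illustration of the third-order polynomial scaling idea (the actual polynomial form and coefficient values used at Langley are not given here), the sketch below applies a cubic gain to one degree-of-freedom input; the coefficients are hypothetical tuning values.

```python
def cubic_gain(x, c1=1.0, c2=0.0, c3=-0.15):
    """Third-order polynomial gain for one degree-of-freedom input.

    y = c1*x + c2*x**2 + c3*x**3. The coefficients are hypothetical
    tuning values; in practice each degree of freedom gets its own
    coefficient set, tuned for the desired piloted performance, and
    the polynomial is only valid over the intended input range.
    """
    return c1 * x + c2 * x**2 + c3 * x**3

# Small inputs pass nearly unchanged; with c3 < 0 larger inputs are
# progressively attenuated (one plausible tuning goal).
for x in (0.1, 0.5, 1.0):
    print(x, cubic_gain(x))
```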
A motif detection and classification method for peptide sequences using genetic programming.
Tomita, Yasuyuki; Kato, Ryuji; Okochi, Mina; Honda, Hiroyuki
2008-08-01
An exploration of common rules (property motifs) in amino acid sequences is required for the design of novel sequences and the elucidation of interactions between molecules controlled by the structural or physical environment. In the present study, we developed a new method to search for property motifs that are common in peptide sequence data. Our method has two key characteristics: (i) automatic determination of the position and length of common property motifs by calculating the physicochemical similarity of amino acids, and (ii) quick and effective exploration of motif candidates that discriminate positives from negatives through the introduction of genetic programming (GP). Our method was evaluated on two types of model data sets. First, artificially derived peptide data containing intentionally buried property motifs were searched; the expected property motifs were correctly extracted by our algorithm. Second, peptide data that interact with MHC class II molecules were analyzed as a model of biologically active peptides with buried motifs of various lengths. Twice as many MHC class II binding peptides were identified with the rules found by our method as with the existing scoring-matrix method. In conclusion, our GP-based motif searching approach makes it possible to obtain knowledge of the functional aspects of peptides without any prior knowledge.
Particle swarm optimization: an alternative in marine propeller optimization?
NASA Astrophysics Data System (ADS)
Vesting, F.; Bensow, R. E.
2018-01-01
This article deals with improving and evaluating the performance of two evolutionary algorithm approaches for automated engineering design optimization, with marine propeller design under constraints on cavitation nuisance as the intended application. For this purpose, the particle swarm optimization (PSO) algorithm is adapted for multi-objective optimization and constraint handling for use in propeller design. Three PSO algorithms are developed and tested on the optimization of four commercial propeller designs for different ship types. The results are evaluated by examining the generation medians and the development of the Pareto front. The same propellers are also optimized using the well-established NSGA-II genetic algorithm to provide benchmark results. The authors' PSO algorithms deliver results comparable to NSGA-II, but converge earlier and improve the solutions in terms of constraint violation.
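As a minimal, single-objective illustration of the PSO update rule (not the authors' multi-objective, constraint-handling variants), the sketch below minimizes a toy function; the inertia and acceleration coefficients are common textbook defaults, assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # toy objective, not a propeller model
    return np.sum(x**2, axis=-1)

n, d, iters = 30, 5, 200             # particles, dimensions, iterations
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration (assumed)

x = rng.uniform(-5, 5, (n, d))       # positions
v = np.zeros((n, d))                 # velocities
pbest, pbest_f = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = sphere(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best value:", pbest_f.min())
```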
Birnhack, Liat; Nir, Oded; Telzhenski, Marina; Lahav, Ori
2015-01-01
Deliberate struvite (MgNH4PO4) precipitation from wastewater streams has been the topic of extensive research in the last two decades and is expected to gather worldwide momentum in the near future as a P-reuse technique. A wide range of operational alternatives has been reported for struvite precipitation, including the application of various Mg(II) sources, two pH elevation techniques and several Mg:P ratios and pH values. The choice of each operational parameter within the struvite precipitation process affects process efficiency, the overall cost and also the choice of other operational parameters. Thus, a comprehensive simulation program that takes all these parameters into account is essential for process design. This paper introduces a systematic decision-support tool that accepts a wide range of possible operational parameters, including unconventional Mg(II) sources (i.e. seawater and seawater nanofiltration brines). The study is supplied with a free-of-charge computerized tool (http://tx.technion.ac.il/~agrengn/agr/Struvite_Program.zip) which links two computer platforms (Python and PHREEQC) for executing thermodynamic calculations according to predefined kinetic considerations. The model can be used (inter alia) to optimize the operation of the struvite fluidized-bed reactor process with respect to P removal efficiency, struvite purity and the economic feasibility of the chosen alternative. The paper describes the algorithm and its underlying assumptions, and shows results (i.e. effluent water quality, cost breakdown and P removal efficiency) for several case studies consisting of typical wastewaters treated under various operational conditions.
The application of dynamic programming in production planning
NASA Astrophysics Data System (ADS)
Wu, Run
2017-05-01
Nowadays, with the popularity of computers, various industries and fields widely apply computer information technology, which creates huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits while generating the smallest overhead. As one of the common algorithmic techniques, dynamic programming is used to solve problems with optimal-substructure properties. When solving problems with a large number of overlapping sub-problems that require repetitive calculation, the naive recursive method consumes exponential time, whereas dynamic programming can reduce the time complexity to the polynomial level; dynamic programming is therefore highly efficient compared with other approaches, reducing computational complexity and enriching the computational results. In this paper, we expound the concept, basic elements, properties, core ideas, solution steps, and difficulties of dynamic programming, and establish a dynamic programming model of the production planning problem.
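A hedged sketch of a classic production-planning DP (a Wagner-Whitin-style lot-sizing recursion, assumed here as a representative model rather than the paper's exact formulation): f(t) is the minimum cost of meeting demand for periods 1..t, where each period's demand is produced in some earlier period, paying one setup cost plus holding costs.

```python
# Wagner-Whitin-style lot-sizing DP (illustrative model, hypothetical data).
# f[t] = min over j <= t of  f[j-1] + setup + holding cost of producing
# the demand of periods j..t in period j.

demand = [20, 50, 10, 40, 30]   # hypothetical demand per period
setup = 100.0                   # fixed cost per production run
hold = 1.0                      # cost to hold one unit for one period

T = len(demand)
INF = float("inf")
f = [0.0] + [INF] * T           # f[0] = 0: no periods, no cost

for t in range(1, T + 1):
    for j in range(1, t + 1):
        # Produce demand[j-1..t-1] in period j; a unit made in period j
        # and consumed in period k is held for (k - j) periods.
        holding = sum(hold * (k - j) * demand[k - 1] for k in range(j, t + 1))
        f[t] = min(f[t], f[j - 1] + setup + holding)

print("minimum total cost:", f[T])
```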
NASA Astrophysics Data System (ADS)
Karakostas, Spiros
2015-05-01
The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
AMOBH: Adaptive Multiobjective Black Hole Algorithm.
Wu, Chong; Wu, Tao; Fu, Kaiyuan; Zhu, Yuan; Li, Yongbo; He, Wangyong; Tang, Shengwen
2017-01-01
This paper proposes a new multiobjective evolutionary algorithm based on the black hole algorithm with a new individual density assessment (cell density), called the "adaptive multiobjective black hole algorithm" (AMOBH). Cell density has low computational complexity and maintains a good balance between convergence and diversity of the Pareto front. The framework of AMOBH can be divided into three steps. First, the Pareto front is mapped to a new objective space called the parallel cell coordinate system. Then, to adjust the evolutionary strategies adaptively, Shannon entropy is employed to estimate the evolution status. Finally, the cell density is combined with a dominance strength assessment called cell dominance to evaluate the fitness of solutions. Experimental results show that, compared with the state-of-the-art methods SPEA-II, PESA-II, NSGA-II, and MOEA/D, AMOBH performs well in terms of convergence rate, population diversity, population convergence, coverage of different Pareto regions, and, in most cases, time complexity.
Flight Validation of a Metrics Driven L(sub 1) Adaptive Control
NASA Technical Reports Server (NTRS)
Dobrokhodov, Vladimir; Kitsios, Ioannis; Kaminer, Isaac; Jones, Kevin D.; Xargay, Enric; Hovakimyan, Naira; Cao, Chengyu; Lizarraga, Mariano I.; Gregory, Irene M.
2008-01-01
The paper addresses the initial steps involved in the development and flight implementation of a new metrics-driven L1 adaptive flight control system. The work concentrates on (i) definition of appropriate control-driven metrics that account for control surface failures; (ii) tailoring the recently developed L1 adaptive controller to the design of adaptive flight control systems that explicitly address these metrics in the presence of control surface failures and dynamic changes under adverse flight conditions; (iii) development of a flight control system for implementation of the resulting algorithms onboard a small UAV; and (iv) conducting a comprehensive flight test program that demonstrates the performance of the developed adaptive control algorithms in the presence of failures. As the initial milestone, the paper concentrates on the adaptive flight system setup and initial efforts addressing the ability of a commercial off-the-shelf autopilot, with and without adaptive augmentation, to recover from control surface failures.
On the adequacy of identified Cole Cole models
NASA Astrophysics Data System (ADS)
Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.
2003-06-01
The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency-domain complex impedance data, and a simple error estimate is obtained from the squared difference between the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the 'optimal' estimation of the Cole-Cole parameters; it differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations, without the need for an initial guess. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt 'ridge regression' algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the 'ridge regression' and the new algorithm, using two different statistical tests, and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy-based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new 'direct inversion' algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a data set before data processing, i.e., to assess the adequacy of the resulting Cole-Cole model.
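A hedged sketch of the kind of χ2 adequacy check described; the Pelton form of the Cole-Cole model is assumed, and the parameter values, noise level, and degrees-of-freedom bookkeeping are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def cole_cole(omega, rho0, m, tau, c):
    """Pelton-form Cole-Cole complex resistivity (assumed model form)."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

# Synthetic 'field' data: true model plus Gaussian noise (illustrative).
rng = np.random.default_rng(1)
omega = np.logspace(-2, 4, 30)
true = cole_cole(omega, rho0=100.0, m=0.5, tau=0.01, c=0.6)
sigma = 0.5                                  # assumed measurement std dev
data = true + sigma * (rng.normal(size=30) + 1j * rng.normal(size=30))

# Chi-square adequacy of a fitted model (here we test the true model):
fitted = cole_cole(omega, 100.0, 0.5, 0.01, 0.6)
resid = (data - fitted) / sigma
chi2_stat = np.sum(resid.real**2 + resid.imag**2)
dof = 2 * len(omega) - 4                     # real+imag points minus parameters
p_value = chi2.sf(chi2_stat, dof)
print(f"chi2 = {chi2_stat:.1f}, dof = {dof}, p = {p_value:.3f}")
# Reject the model's adequacy if p falls below the chosen significance level.
```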
Genetic algorithm enhanced by machine learning in dynamic aperture optimization
NASA Astrophysics Data System (ADS)
Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert
2018-05-01
With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.
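A hedged sketch of the intervention step described: cluster the current population, rank clusters by mean fitness, and repopulate some randomly chosen candidates with perturbed copies drawn near the elite cluster's center. The cluster count, perturbation scale, and toy fitness are assumptions, not the NSLS-II setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def fitness(pop):                    # toy fitness: higher is better
    return -np.sum(pop**2, axis=1)

pop = rng.uniform(-5, 5, (100, 4))   # population in a 4-D search space

def elite_cluster_intervention(pop, n_clusters=5, n_replace=20, scale=0.1):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pop)
    f = fitness(pop)
    # Rank clusters by average fitness; the top cluster is 'elite'.
    means = [f[km.labels_ == c].mean() for c in range(n_clusters)]
    elite = int(np.argmax(means))
    center = km.cluster_centers_[elite]
    # Replace randomly selected candidates with perturbed elite-like points,
    # improving average fitness while the untouched remainder keeps diversity.
    idx = rng.choice(len(pop), size=n_replace, replace=False)
    pop[idx] = center + scale * rng.normal(size=(n_replace, pop.shape[1]))
    return pop

pop = elite_cluster_intervention(pop)
print("mean fitness after intervention:", fitness(pop).mean())
```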
Simulation of an enhanced TCAS 2 system in operation
NASA Technical Reports Server (NTRS)
Rojas, R. G.; Law, P.; Burnside, W. D.
1987-01-01
Described is a computer simulation of a Boeing 737 aircraft equipped with an enhanced Traffic and Collision Avoidance System (TCAS II). In particular, an algorithm is developed which permits the computer simulation of the tracking of a target airplane by a Boeing 737 which has a TCAS II array mounted on top of its fuselage. This algorithm has four main components: the target path, the noise source, the alpha-beta filter, and threat detection. The implementation of each of these four components is described, and the areas where the present algorithm needs improvement are also mentioned.
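For context, an alpha-beta filter is a fixed-gain tracker that predicts the target state with a constant-velocity model and corrects it with the measurement residual; the sketch below uses hypothetical gains and a made-up 1-D range track, not the simulation's values.

```python
# Minimal 1-D alpha-beta tracking filter (hypothetical gains and data).
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.3):
    x, v = measurements[0], 0.0          # initial position and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # constant-velocity prediction
        r = z - x_pred                   # measurement residual
        x = x_pred + alpha * r           # position correction
        v = v + (beta / dt) * r          # velocity correction
        estimates.append((x, v))
    return estimates

noisy_ranges = [1000, 990, 983, 970, 962, 950]   # made-up target ranges
for x, v in alpha_beta_track(noisy_ranges):
    print(f"est range {x:7.1f}, est rate {v:6.2f}")
```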
Efficient Algorithm for Fuzzy Linear Programming with Multiple Objectives.
1984-12-01
…constraint). For other reasons, at least 6 of the smallest trucks were wanted in the fleet. The management wanted to use quantitative analysis and… to the price K_i. For each share, the values of the criteria are multiplied by a weight g_i = 100/K_i (Eq. 66). This percentage transformation is useful in… could be useful for a repeated and promising analysis.
Xing, Z F; Greenberg, J M
1994-08-20
The analyticity of the complex extinction efficiency is examined numerically in the size-parameter domain for homogeneous prolate and oblate spheroids and finite cylinders. The T-matrix code, which is the most efficient program available to date, is employed to calculate the individual particle-extinction efficiencies. Because of its computational limitations in the size-parameter range, a slightly modified Hilbert-transform algorithm is required to establish the analyticity numerically. The findings concerning analyticity that we reported for spheres (Astrophys. J. 399, 164-175, 1992) apply equally to these nonspherical particles.
A New Algorithm Using the Non-Dominated Tree to Improve Non-Dominated Sorting.
Gustavsson, Patrik; Syberfeldt, Anna
2018-01-01
Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback also applies to other non-dominated sorting algorithms, such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS), which works well on a limited number of objectives but deteriorates when the number of objectives grows. This article presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the article, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
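A hedged reference implementation of the Fast Non-dominated Sort that these newer algorithms improve upon (minimization is assumed; this is the O(M N^2) baseline, not ENS-NDT itself):

```python
import numpy as np

def fast_non_dominated_sort(F):
    """Fast Non-dominated Sort, assuming minimization.

    F is an (N, M) array of objective vectors; returns a list of fronts,
    each a list of row indices. Runs in O(M * N^2) time, the cost that
    ENS-BS, DCNS, and ENS-NDT aim to reduce in practice.
    """
    N = len(F)
    S = [[] for _ in range(N)]        # solutions dominated by i
    n = np.zeros(N, dtype=int)        # number of solutions dominating i
    fronts = [[]]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                S[i].append(j)        # i dominates j
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                n[i] += 1             # j dominates i
        if n[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

F = np.array([[1, 4], [2, 2], [3, 1], [2, 3], [4, 4]])
print(fast_non_dominated_sort(F))   # [[0, 1, 2], [3], [4]]
```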
NASA Astrophysics Data System (ADS)
Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong
2014-03-01
A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequency. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is further developed based on the bond graph method to accurately predict the performance of the MR engine mount. A mathematical optimization model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this model, the lumped parameters are considered as design variables, while the maximum force transmissibility, the corresponding frequency in the low-frequency range, and the individual lumped parameters are limited as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. A synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated, and a set of real design parameters is thus obtained through the internal relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. A program flowchart for the improved NSGA-II is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges addressed.
Portfolio optimization by using linear programing models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. The optimization problem is arranged into a linear programming model, and the optimum solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that the portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than the one performed with a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for determining an optimal investment portfolio, particularly with linear programming models.
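For reference, the sketch below shows one common way to cast absolute-deviation portfolio risk as a linear program and solve it directly with scipy; the returns data, target return, and the use of an LP solver rather than a GA are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, N = 60, 4
R = rng.normal(0.01, 0.05, (T, N))    # hypothetical monthly stock returns
mu = R.mean(axis=0)
r_target = 0.01                        # assumed required expected return

# Variables: x = [w_1..w_N, d_1..d_T]; minimize mean absolute deviation.
c = np.concatenate([np.zeros(N), np.ones(T) / T])
A = R - mu                             # centered returns
A_ub = np.block([[A, -np.eye(T)],      #  (A w) - d <= 0
                 [-A, -np.eye(T)]])    # -(A w) - d <= 0
b_ub = np.zeros(2 * T)
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])  # mu.w >= target
b_ub = np.append(b_ub, -r_target)
A_eq = np.concatenate([np.ones(N), np.zeros(T)])[None, :]     # sum w = 1
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (N + T))
print("weights:", np.round(res.x[:N], 3), "MAD:", res.fun)
```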
MM Algorithms for Geometric and Signomial Programming
Lange, Kenneth; Zhou, Hua
2013-01-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
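The variable-separation step can be made concrete with the weighted arithmetic-geometric mean inequality. In our notation (a sketch of the kind of surrogate the paper derives, not a verbatim reproduction): for a posynomial term with positive coefficient and exponents and current iterate x̃, the majorizer below has its variables separated.

```latex
% Weighted AM-GM majorization for a posynomial term c \prod_j x_j^{a_j},
% with c, a_j > 0.  Let s = \sum_j a_j and use weights a_j / s; AM-GM gives
%   \prod_j y_j^{a_j/s} \le \sum_j (a_j/s)\, y_j  for y_j \ge 0,
% and substituting y_j = (x_j/\tilde x_j)^s yields
\[
  c \prod_j x_j^{a_j}
  \;\le\;
  c \Big(\prod_j \tilde x_j^{a_j}\Big)
  \sum_j \frac{a_j}{s} \left(\frac{x_j}{\tilde x_j}\right)^{s},
\]
% with equality at x = \tilde x.  Minimizing the right-hand side, which is
% separable in the x_j, reduces each MM step to one-dimensional problems.
```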
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, a coevolutionary strategy is used to create a good algorithm. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate its appropriate parameters. Moreover, to examine the performance of the proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies show that the proposed algorithm is an effective approach for systems reliability and risk management.
Padé approximations for Painlevé I and II transcendents
NASA Astrophysics Data System (ADS)
Novokshenov, V. Yu.
2009-06-01
We use a version of the Fair-Luke algorithm to find Padé approximate solutions of the Painlevé I and II equations. We find the distributions of poles for the well-known Ablowitz-Segur and Hastings-McLeod solutions of the Painlevé II equation. We show that the Boutroux tritronquée solution of the Painlevé I equation has poles only in the critical sector of the complex plane. The algorithm allows checking other analytic properties of the Painlevé transcendents, such as the asymptotic behavior at infinity in the complex plane.
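To illustrate the general idea of locating poles with Padé approximants (a generic demonstration on a geometric series, not the Fair-Luke algorithm itself):

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of f(x) = 1/(1 - x) = 1 + x + x^2 + ..., pole at x = 1.
an = np.ones(8)

# [6/1] Pade approximant p(x)/q(x) with denominator degree 1.
p, q = pade(an, 1)

# Poles of the approximant are the roots of the denominator; for a function
# with a true pole, a denominator root should sit near it (here x = 1).
print("denominator roots:", np.roots(q.coeffs))
```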
SAGE Version 7.0 Algorithm: Application to SAGE II
NASA Technical Reports Server (NTRS)
Damadeo, R. P; Zawodny, J. M.; Thomason, L. W.; Iyer, N.
2013-01-01
This paper details the Stratospheric Aerosol and Gas Experiments (SAGE) version 7.0 algorithm and how it is applied to SAGE II. Changes made between the previous (v6.2) and current (v7.0) versions are described and their impacts on the data products explained for both coincident event comparisons and time-series analysis. Users of the data will notice a general improvement in all of the SAGE II data products, which are now in better agreement with more modern data sets (e.g. SAGE III) and more robust for use with trend studies.
Hybrid fuzzy cluster ensemble framework for tumor clustering from biomolecular data.
Yu, Zhiwen; Chen, Hantao; You, Jane; Han, Guoqiang; Li, Le
2013-01-01
Cancer class discovery using biomolecular data is one of the most important tasks for cancer diagnosis and treatment. Tumor clustering from gene expression data provides a new way to perform cancer class discovery. Most existing research adopts single-clustering algorithms to perform tumor clustering from biomolecular data, which lacks robustness, stability, and accuracy. To further improve the performance of tumor clustering from biomolecular data, we introduce fuzzy theory into the cluster ensemble framework and propose four kinds of hybrid fuzzy cluster ensemble frameworks (HFCEF), named HFCEF-I, HFCEF-II, HFCEF-III, and HFCEF-IV, respectively, to identify samples that belong to different types of cancers. The difference between HFCEF-I and HFCEF-II is that they adopt different ensemble generator approaches to generate a set of fuzzy matrices in the ensemble. Specifically, HFCEF-I applies the affinity propagation algorithm (AP) to perform clustering on the sample dimension and generates a set of fuzzy matrices in the ensemble based on the fuzzy membership function and base samples selected by AP. HFCEF-II adopts AP to perform clustering on the attribute dimension, generates a set of subspaces, and obtains a set of fuzzy matrices in the ensemble by performing fuzzy c-means on the subspaces. Compared with HFCEF-I and HFCEF-II, HFCEF-III and HFCEF-IV combine the characteristics of both: HFCEF-III combines HFCEF-I and HFCEF-II in a serial way, while HFCEF-IV integrates them in a concurrent way. HFCEFs adopt suitable consensus functions, such as the fuzzy c-means algorithm or the normalized cut algorithm (Ncut), to summarize the generated fuzzy matrices and obtain the final results. Experiments on real data sets from the UCI machine learning repository and cancer gene expression profiles illustrate that 1) the proposed hybrid fuzzy cluster ensemble frameworks work well on real data sets, especially biomolecular data, and 2) the proposed approaches provide more robust, stable, and accurate results than state-of-the-art single-clustering algorithms and traditional cluster ensemble approaches.
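One of the consensus functions named above, fuzzy c-means, can be sketched briefly (a generic implementation of the standard FCM updates; the fuzzifier m and the toy data are assumptions):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)       # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

# Two well-separated toy clusters in 2-D.
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (20, 2)),
               np.random.default_rng(2).normal(3, 0.3, (20, 2))])
U, centers = fuzzy_c_means(X)
print("centers:\n", centers)
```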
The PlusCal Algorithm Language
NASA Astrophysics Data System (ADS)
Lamport, Leslie
Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.
Network Analytical Tool for Monitoring Global Food Safety Highlights China
Nepusz, Tamás; Petróczi, Andrea; Naughton, Declan P.
2009-01-01
Background: The Beijing Declaration on food safety and security was signed by over fifty countries with the aim of developing comprehensive programs for monitoring food safety and security on behalf of their citizens. Currently, comprehensive systems for food safety and security are absent in many countries, and the systems that are in place have been developed on different principles, allowing poor opportunities for integration. Methodology/Principal Findings: We have developed a user-friendly analytical tool based on network approaches for instant, customized analysis of food alert patterns in the European dataset from the Rapid Alert System for Food and Feed. Data taken from alert logs between January 2003 and August 2008 were processed using network analysis to i) capture complexity, ii) analyze trends, and iii) predict possible effects of interventions by identifying patterns of reporting activities between countries. The detector and transgressor relationships between countries are readily identifiable, and countries are ranked using i) Google's PageRank algorithm and ii) the HITS algorithm of Kleinberg. The program identifies Iran, China and Turkey as the transgressors with the largest number of alerts. However, when characterized by impact, counting the transgressor index and the number of countries involved, China predominates as a transgressor country. Conclusions/Significance: This study reports the first development of a network analysis approach to inform countries of their transgressor and detector profiles as a user-friendly aid for the adoption of the Beijing Declaration. The ability to instantly access the country-specific components of the several thousand annual reports will enable each country to identify the major transgressors and detectors within its trading network. Moreover, the tool can be used to monitor trading countries for improved detector/transgressor ratios. PMID:19688088
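For reference, PageRank scores of the kind used for the country ranking can be computed by simple power iteration on the alert network's adjacency structure; the sketch below uses a made-up 4-node directed graph and the usual damping factor of 0.85.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on a dense adjacency matrix.

    adj[i, j] = 1 if node i links to (reports on) node j.
    """
    n = len(adj)
    out = adj.sum(axis=1, keepdims=True)
    # Column-stochastic transition matrix; dangling nodes spread uniformly.
    M = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M @ r)
    return r / r.sum()

# Made-up alert graph: edge i -> j means country i reported country j.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))   # higher score = more frequently reported node
```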
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
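The half-interval (bisection) search mentioned is easy to state precisely; a minimal sketch, assuming a continuous function with a sign change on the bracketing interval:

```python
def half_interval_root(f, lo, hi, tol=1e-10):
    """Bisection: halve the bracketing interval until it is within tol.

    Requires f(lo) and f(hi) to have opposite signs (the sign change
    brackets a root of the continuous function f).
    """
    assert f(lo) * f(hi) < 0, "interval must bracket a root"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid          # root lies in the left half
        else:
            lo = mid          # root lies in the right half
    return (lo + hi) / 2

# Example: root of x^3 - 2x - 5 = 0 on [2, 3].
print(half_interval_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0))  # ~2.0946
```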
Coordination Logic for Repulsive Resolution Maneuvers
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.; Dutle, Aaron M.
2016-01-01
This paper presents an algorithm for determining the direction an aircraft should maneuver in the event of a potential conflict with another aircraft. The algorithm is implicitly coordinated, meaning that with perfectly reliable computations and information, it will independently provide directional information that is guaranteed to be coordinated without any additional information exchange or direct communication. The logic is inspired by the logic of TCAS II, the airborne system designed to reduce the risk of mid-air collisions between aircraft. TCAS II provides pilots with only vertical resolution advice, while the proposed algorithm, using a similar logic, provides implicitly coordinated vertical and horizontal directional advice.
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
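One of the two optimizers named, Differential Evolution, is available off the shelf in scipy; a minimal sketch with a toy "beamline flux" objective standing in for the real beamline readback (the objective, bounds, and parameter meanings are assumptions):

```python
from scipy.optimize import differential_evolution

def negative_flux(x):
    """Toy stand-in for a beamline flux readback to be maximized.

    x could represent positions of two optical elements; we minimize the
    negative flux, which peaks at a made-up optimum (0.3, -0.7).
    """
    return -1.0 / (1.0 + (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2)

bounds = [(-2.0, 2.0), (-2.0, 2.0)]     # assumed motor travel limits
result = differential_evolution(negative_flux, bounds, seed=0, tol=1e-8)
print("optimal settings:", result.x, "flux:", -result.fun)
```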
A hybrid multi-objective evolutionary algorithm for wind-turbine blade optimization
NASA Astrophysics Data System (ADS)
Sessarego, M.; Dixon, K. R.; Rival, D. E.; Wood, D. H.
2015-08-01
A concurrent-hybrid non-dominated sorting genetic algorithm (hybrid NSGA-II) has been developed and applied to the simultaneous optimization of the annual energy production, flapwise root-bending moment and mass of the NREL 5 MW wind-turbine blade. By hybridizing a multi-objective evolutionary algorithm (MOEA) with gradient-based local search, it is believed that the optimal set of blade designs could be achieved in lower computational cost than for a conventional MOEA. To measure the convergence between the hybrid and non-hybrid NSGA-II on a wind-turbine blade optimization problem, a computationally intensive case was performed using the non-hybrid NSGA-II. From this particular case, a three-dimensional surface representing the optimal trade-off between the annual energy production, flapwise root-bending moment and blade mass was achieved. The inclusion of local gradients in the blade optimization, however, shows no improvement in the convergence for this three-objective problem.
49 CFR 236.1033 - Communications and security requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...
49 CFR 236.1033 - Communications and security requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...
49 CFR 236.1033 - Communications and security requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...
49 CFR 236.1033 - Communications and security requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...
49 CFR 236.1033 - Communications and security requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...
Genetic algorithms using SISAL parallel programming language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tejada, S.
1994-05-06
Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.
NASA Astrophysics Data System (ADS)
Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.
2018-03-01
Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iterations, and program simplicity in finding the optimal solution.
Software For Genetic Algorithms
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
40 CFR 52.1131 - Control strategy: Particulate matter.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) (PSD program only), (D)(i)(II) (PSD program only), (D)(ii), and (J) (PSD program only). (e) Approval...) (PSD program only), (D)(i)(II) (PSD program only), (D)(ii), and (J) (PSD program only). [45 FR 2044...
40 CFR 52.1131 - Control strategy: Particulate matter.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) (PSD program only), (D)(i)(II) (PSD program only), (D)(ii), and (J) (PSD program only). (e) Approval...) (PSD program only), (D)(i)(II) (PSD program only), (D)(ii), and (J) (PSD program only). [45 FR 2044...
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
NASA Astrophysics Data System (ADS)
Kendrick, Rachel
Intensity-modulated radiation therapy is used to conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also compared separately using different prescription doses. The dose-volume histograms as well as the visual dose color washes were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
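A hedged sketch of the quadratic-programming flavor of fluence optimization: with a dose-influence matrix D mapping beamlet weights to voxel doses, minimizing ||Dx - p||^2 subject to x >= 0 is a tiny QP that scipy's non-negative least squares solves directly. The matrix and prescription below are made up, not CERR data.

```python
import numpy as np
from scipy.optimize import nnls

# Made-up dose-influence matrix: dose[i] = sum_j D[i, j] * fluence[j].
D = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.6, 0.2],
              [0.1, 0.3, 0.8],
              [0.0, 0.2, 0.5]])
# Prescribed dose per voxel: first three are target, last is healthy tissue.
p = np.array([60.0, 60.0, 60.0, 10.0])

# Minimize ||D x - p||^2 subject to x >= 0 (fluence cannot be negative).
x, resid = nnls(D, p)
print("beamlet fluences:", np.round(x, 2))
print("delivered dose  :", np.round(D @ x, 1), "residual:", round(resid, 2))
```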
Moreira, Gustavo M. S. G.; Conceição, Fabricio R.; McBride, Alan J. A.; Pinto, Luciano da S.
2013-01-01
Bauhinia variegata lectins (BVL-I and BVL-II) are single chain lectins isolated from the plant Bauhinia variegata. Single chain lectins undergo post-translational processing in their N-terminal and C-terminal regions, which determines their physiological targeting, carbohydrate binding activity and pattern of quaternary association. These two lectins are isoforms, BVL-I being highly glycosylated, and thus far it has not been possible to determine their structures. The present study used prediction and validation algorithms to elucidate the likely structures of BVL-I and -II. The program Bhageerath-H was chosen from among three different structure prediction programs due to its better overall reliability. In order to predict the C-terminal region cleavage sites, other lectins known to have this modification were analysed and three rules were created: (1) the first amino acid of the excised peptide is small or hydrophobic; (2) the cleavage occurs after an acidic, polar, or hydrophobic residue, but not after a basic one; and (3) the cleavage spot is located 5-8 residues after a conserved Leu residue. These rules predicted that BVL-I and -II would have fifteen C-terminal residues cleaved, and this was confirmed experimentally by Edman degradation sequencing of BVL-I. Furthermore, the C-terminal analyses predicted that only BVL-II underwent α-helical folding in this region, similar to that seen in SBA and DBL. Conversely, BVL-I and -II contained four conserved regions of a GS-I association, providing evidence of a previously undescribed, unusual oligomerisation between the truncated BVL-I and the intact BVL-II. This is the first report on the structural analysis of lectins from Bauhinia spp., and it is therefore important for the characterisation of C-terminal cleavage and patterns of quaternary association of single chain lectins. PMID:24260572
Koblavi-Dème, Stéphania; Maurice, Chantal; Yavo, Daniel; Sibailly, Toussaint S.; N′guessan, Kabran; Kamelan-Tano, Yvonne; Wiktor, Stefan Z.; Roels, Thierry H.; Chorba, Terence; Nkengasong, John N.
2001-01-01
To evaluate serologic testing algorithms for human immunodeficiency virus (HIV) based on a combination of rapid assays among persons with HIV-1 (non-B subtypes) infection, HIV-2 infection, and HIV-1–HIV-2 dual infections in Abidjan, Ivory Coast, a total of 1,216 sera with known HIV serologic status were used to evaluate the sensitivity and specificity of four rapid assays: Determine HIV-1/2, Capillus HIV-1/HIV-2, HIV-SPOT, and Genie II HIV-1/HIV-2. Two serum panels obtained from patients recently infected with HIV-1 subtypes B and non-B were also included. Based on sensitivity and specificity, three of the four rapid assays were evaluated prospectively in parallel (serum samples tested by two simultaneous rapid assays) and serial (serum samples tested by two consecutive rapid assays) testing algorithms. All assays were 100% sensitive, and specificities ranged from 99.4 to 100%. In the prospective evaluation, both the parallel and serial algorithms were 100% sensitive and specific. Our results suggest that rapid assays have high sensitivity and specificity and, when used in parallel or serial testing algorithms, yield results similar to those of enzyme-linked immunosorbent assay-based testing strategies. HIV serodiagnosis based on rapid assays may be a valuable alternative in implementing HIV prevention and surveillance programs in areas where sophisticated laboratories are difficult to establish. PMID:11325995
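For intuition, parallel and serial combinations trade sensitivity against specificity in a predictable way; a small sketch under the standard independence assumption (the assay values below are illustrative, not the study's):

```python
def serial(se1, sp1, se2, sp2):
    """Both assays must be positive (assumes independent errors)."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

def parallel(se1, sp1, se2, sp2):
    """Either assay positive triggers a positive result."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

# Illustrative assay characteristics (sensitivity, specificity).
se1, sp1 = 1.00, 0.994
se2, sp2 = 1.00, 0.998
print("serial  (se, sp):", serial(se1, sp1, se2, sp2))    # specificity rises
print("parallel(se, sp):", parallel(se1, sp1, se2, sp2))  # sensitivity rises
```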
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures, which often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on this multiobjective optimization problem is illustrated through two carefully referenced examples.
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment, making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the workflow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface, including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral, as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing the numeric algorithms used in the latest MR techniques.
Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.
NASA Astrophysics Data System (ADS)
Moon, Young-Hyun
Ever-increasing size, complexity and operating costs in modern power systems have stimulated intensive study of optimal Load Shedding and Generator Rescheduling (LSGR) strategies in the sense of secure and economic system operation. The conventional approach to LSGR has been based on the application of Linear Programming (LP) with an approximately linearized model, and the LP algorithm is currently considered the most powerful tool for solving the LSGR problem. However, the LP algorithms presented in the literature have two essential disadvantages: (i) the piecewise linearization involved requires the introduction of a number of new inequalities and slack variables, which places a significant burden on the computing facilities, and (ii) the objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency in computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of Quadratic Programming (QP). The changes in line flows resulting from changes in bus injection power are taken into account in the proposed model by the introduction of sensitivity coefficients, which avoids the second disadvantage mentioned above. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of the theory of optimization is included, in which the development of QP algorithms for LSGR based on Wolfe's method and Kuhn-Tucker theory is evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction in both computation time and memory requirements, as well as lower generation costs of the optimal solution compared with those obtained from LP. Finally, it is noted that an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and that a new method for multiple contingency simulation is presented.
Dynamic Appliances Scheduling in Collaborative MicroGrids System
Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid
2017-01-01
In this paper a new approach which is based on a collaborative system of MicroGrids (MG’s), is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DL’s), according to their electrical components. We propose a dynamic scheduling algorithm where users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DL’s. In addition, implementation of the proposed algorithm would require dynamically analyzing two successive multi-objective optimization (MOO) problems. The first targets the activation schedule of non-flexible DL’s and the second deals with the power profiles of flexible DL’s. The MOO problems are resolved by using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, in order to show the efficiency of the proposed approach, a case study of a collaborative system that consists of 40 MG’s registered in the load curve for the flattening program has been developed. The results verify that the load curve can indeed become very flat by applying the proposed scheduling approach. PMID:28824226
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
A real time microcomputer implementation of sensor failure detection for turbofan engines
NASA Technical Reports Server (NTRS)
Delaat, John C.; Merrill, Walter C.
1989-01-01
An algorithm was developed which detects, isolates, and accommodates sensor failures using analytical redundancy. The performance of this algorithm was demonstrated on a full-scale F100 turbofan engine. The algorithm was implemented in real-time on a microprocessor-based controls computer which includes parallel processing and high order language programming. Parallel processing was used to achieve the required computational power for the real-time implementation. High order language programming was used in order to reduce the programming and maintenance costs of the algorithm implementation software. The sensor failure algorithm was combined with an existing multivariable control algorithm to give a complete control implementation with sensor analytical redundancy. The real-time microprocessor implementation of the algorithm, which resulted in the successful completion of the engine demonstration, is described.
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
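To make the idea concrete, here is a hedged sketch of checking an LTL formula on a finite trace by a backwards dynamic program: one boolean per subformula is maintained for the current suffix, giving time linear in the trace and memory independent of its length. The tuple-based formula syntax is invented for the sketch, and unlike the paper's method the checker is a generic interpreter rather than a generated, formula-specific program.

```python
# Backwards dynamic programming over a finite trace. Formula syntax (assumed):
# ("ap", name), ("not", f), ("and", f, g), ("next", f), ("until", f, g).
def check(formula, trace):
    subs = []
    def postorder(f):                         # children before parents, deduped
        if f in subs:
            return
        for child in f[1:]:
            if isinstance(child, tuple):
                postorder(child)
        subs.append(f)
    postorder(formula)

    nxt = {}                                  # subformula values one step ahead
    for event in reversed(trace):             # events are sets of atomic props
        now = {}
        for f in subs:
            op = f[0]
            if op == "ap":
                now[f] = f[1] in event
            elif op == "not":
                now[f] = not now[f[1]]
            elif op == "and":
                now[f] = now[f[1]] and now[f[2]]
            elif op == "next":                # false past the end of the trace
                now[f] = nxt.get(f[1], False)
            elif op == "until":               # g now, or f now and "f U g" next
                now[f] = now[f[2]] or (now[f[1]] and nxt.get(f, False))
        nxt = now
    return nxt[formula]                       # assumes a non-empty trace

# "p until q" holds on a trace where q eventually occurs
print(check(("until", ("ap", "p"), ("ap", "q")),
            [{"p"}, {"p"}, {"q"}]))           # True
```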
An expert system environment for the Generic VHSIC Spaceborne Computer (GVSC)
NASA Astrophysics Data System (ADS)
Cockerham, Ann; Labhart, Jay; Rowe, Michael; Skinner, James
The authors describe a Phase II Phillips Laboratory Small Business Innovative Research (SBIR) program being performed to implement a flexible and general-purpose inference environment for embedded space and avionics applications. This inference environment is being developed in Ada and takes special advantage of the target architecture, the GVSC. The GVSC implements the MIL-STD-1750A ISA and contains enhancements to allow access of up to 8 MBytes of memory. The inference environment makes use of the Merit Enhanced Traversal Engine (METE) algorithm, which employs the latest inference and knowledge representation strategies to optimize both run-time speed and memory utilization.
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
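To illustrate the DCA iteration in its simplest form, the toy below minimizes f(x) = x^4 - 2x^2 written as a difference of convex functions g(x) = x^4 and h(x) = 2x^2: each step linearizes h at the current iterate and minimizes the resulting convex surrogate in closed form. The objective is an assumption chosen for transparency; the paper's block-clustering DC program is far more elaborate.

```python
# Minimal numeric sketch of the DCA iteration for f = g - h with g, h convex.
import numpy as np

def dca(x, iters=50):
    for _ in range(iters):
        y = 4.0 * x                 # y in the subdifferential of h(x) = 2x^2
        # x_next = argmin_x g(x) - y*x  =>  4x^3 = y, solved in closed form
        x = np.cbrt(y / 4.0)
    return x

print(dca(x=0.5))   # converges to a critical point of f (here x = 1)
```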
Algorithm Development for the Multi-Fluid Plasma Model
2011-05-30
...392, Sep 1995. [13] L. Chacon, D. C. Barnes, D. A. Knoll, and G. H. Miley. An implicit energy-conservative 2D Fokker-Planck algorithm. Journal of Computational Physics, 157(2):618–653, 2000. [14] L. Chacon, D. C. Barnes, D. A. Knoll, and G. H. Miley. An implicit energy-conservative 2D Fokker-Planck algorithm - II
2014-09-01
The goal of this research is to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system. Work to date has: (i) developed time-of-flight extraction algorithms to perform USCT, (ii) begun developing image reconstruction algorithms for USCT, and (iii) developed...
An algorithmic approach to the brain biopsy--part II.
Prayson, Richard A; Kleinschmidt-DeMasters, B K
2006-11-01
The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses, and can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. The objective here is to present an algorithmic flow chart, intended for use in teaching residents, as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. Algorithms are not intended to be final diagnostic answers on any given case, nor do they substitute for training received from experienced mentors or for comprehensive reading of reference textbooks by trainees. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once the diagnostic choices have been narrowed to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if some can be quickly eliminated from further consideration. In Part II, we assist the resident in arriving at the correct diagnosis for neuropathologic lesions containing granulomatous inflammation, macrophages, or abnormal blood vessels.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Yang, Ping
2018-01-01
In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.
NASA Astrophysics Data System (ADS)
Mishchenko, Michael I.; Yang, Ping
2018-01-01
In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.
WSSC Cost Allocation Algorithms II: Installation Support
1983-06-01
...XX30XX or XX37XX is found. As a result, the following two host-financed tenant support accounts currently will be treated as unit operations costs... References cited include C. Horngren, Cost Accounting: A Managerial Emphasis, Prentice-Hall Inc., Englewood Cliffs, NJ, 1972, and D. B. Levine and J. M. Jondrow, "The...
Adaptive Decision Making and Coordination in Variable Structure Organizations
1994-09-01
behavior of the net. The design problem is addressed by (a) focusing on algorithms that relate structural properties of the Petri Net model to behavioral characteristics; and (b) incorporating design requirements in the Lattice algorithm. ...the more resource-consuming the process is. The architecture designer has to deal with these two parameters and perform some tradeoffs. The more...
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239
NASA Astrophysics Data System (ADS)
Penzias, Gregory; Janowczyk, Andrew; Singanamalli, Asha; Rusu, Mirabela; Shih, Natalie; Feldman, Michael; Stricker, Phillip D.; Delprado, Warick; Tiwari, Sarita; Böhm, Maret; Haynes, Anne-Maree; Ponsky, Lee; Viswanath, Satish; Madabhushi, Anant
2016-07-01
In applications involving large tissue specimens that have been sectioned into smaller tissue fragments, manual reconstruction of a “pseudo whole-mount” histological section (PWMHS) can facilitate (a) pathological disease annotation, and (b) image registration and correlation with radiological images. We have previously presented a program called HistoStitcher, which allows for more efficient manual reconstruction than general-purpose image editing tools (such as Photoshop). However, HistoStitcher is still manual and hence can be laborious and subjective, especially in large cohort studies. In this work we present AutoStitcher, a novel automated algorithm for reconstructing PWMHSs from digitized tissue fragments. AutoStitcher reconstructs (“stitches”) a PWMHS from a set of 4 fragments by optimizing a novel cost function that is domain-inspired to ensure (i) alignment of similar tissue regions, and (ii) contiguity of the prostate boundary. The algorithm achieves computational efficiency by performing reconstruction in a multi-resolution hierarchy. Automated PWMHS reconstruction results (via AutoStitcher) were quantitatively and qualitatively compared to manual reconstructions obtained via HistoStitcher for 113 prostate pathology sections. Distances between corresponding fiducials placed on the automated and manual reconstruction results were between 2.7% and 3.2%, reflecting their excellent visual similarity.
Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements and (ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.
Efficient Learning Algorithms with Limited Information
ERIC Educational Resources Information Center
De, Anindya
2013-01-01
The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…
ERIC Educational Resources Information Center
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
ERIC Educational Resources Information Center
Saltan, Fatih
2017-01-01
Online Algorithm Visualization (OAV) is one of the recent developments in the instructional technology field that aims to help students handle difficulties faced when they begin to learn programming. This study aims to investigate the effect of online algorithm visualization on students' achievement in the introduction to programming course. To…
Employability Planning Process. STIP II (Skill Training Improvement Programs Round II).
ERIC Educational Resources Information Center
Los Angeles Community Coll. District, CA.
Four reports are presented detailing procedures for improving the employability of students enrolled in the Los Angeles Community College District's Skill Training Improvement Programs (STIP II). Each report was submitted by one of the four STIP II programs: Los Angeles Southwest College's program for computer programming; the programs for…
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on a computational complexity analysis, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng
2015-01-01
Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of the lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put in a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Allowing users to select eight existing performance indices and 15 existing algorithms for comparison, our web tool benefits researchers who are eager to comprehensively and objectively evaluate the performance of their newly developed algorithm. Thus, our tool greatly expedites the progress in the research of computational identification of cooperative TF pairs.
2015-01-01
Background Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of the lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put in a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. Results The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Conclusions Allowing users to select eight existing performance indices and 15 existing algorithms for comparison, our web tool benefits researchers who are eager to comprehensively and objectively evaluate the performance of their newly developed algorithm. Thus, our tool greatly expedites the progress in the research of computational identification of cooperative TF pairs. PMID:26677932
MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION
In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...
Valdés, Julio J; Barton, Alan J
2007-05-01
A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.
Wang, Haizhou; Song, Mingzhou
2011-12-01
The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
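To show the underlying dynamic program, here is a naive O(kn^2) sketch of optimal one-dimensional k-means; the released Ckmeans.1d.dp package is substantially more refined, so treat this purely as illustration.

```python
# Optimal 1-D k-means by dynamic programming over sorted data:
# D[i][m] = best cost of the first i points split into m contiguous clusters.
def ckmeans_1d(x, k):
    x = sorted(x)
    n = len(x)
    p1 = [0.0] * (n + 1)   # prefix sums
    p2 = [0.0] * (n + 1)   # prefix sums of squares
    for i, v in enumerate(x):
        p1[i + 1] = p1[i] + v
        p2[i + 1] = p2[i] + v * v
    def ssq(j, i):  # within-cluster sum of squares of x[j..i], inclusive
        s, s2, m = p1[i + 1] - p1[j], p2[i + 1] - p2[j], i - j + 1
        return s2 - s * s / m
    INF = float("inf")
    D = [[INF] * (k + 1) for _ in range(n + 1)]
    B = [[0] * (k + 1) for _ in range(n + 1)]   # start index of the last cluster
    D[0][0] = 0.0
    for m in range(1, k + 1):
        for i in range(m, n + 1):
            for j in range(m - 1, i):           # last cluster is x[j..i-1]
                c = D[j][m - 1] + ssq(j, i - 1)
                if c < D[i][m]:
                    D[i][m], B[i][m] = c, j
    bounds, i = [], n                           # recover cluster ranges
    for m in range(k, 0, -1):
        j = B[i][m]
        bounds.append((x[j], x[i - 1]))
        i = j
    return D[n][k], bounds[::-1]

print(ckmeans_1d([-1.0, 2.0, 4.0, 5.0, 6.0], k=2))
# (6.5, [(-1.0, 2.0), (4.0, 6.0)])
```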
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
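INDDGO itself pairs tree decompositions with dynamic programming on general graphs; as a minimal, hedged illustration, the sketch below solves maximum weighted independent set on a tree, which is exactly the treewidth-1 special case of that approach.

```python
# Maximum weighted independent set on a tree by dynamic programming.
# The per-node table ("take"/"skip") is the one-vertex-bag analogue of the
# tables a tree-decomposition DP builds per decomposition bag.
def mwis_tree(adj, weight, root=0):
    def dp(v, parent):
        take, skip = weight[v], 0.0          # v in the set / v out of the set
        for u in adj[v]:
            if u != parent:
                t, s = dp(u, v)
                take += s                    # neighbors of a chosen node excluded
                skip += max(t, s)
        return take, skip
    return max(dp(root, -1))

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}     # edges: 0-1, 0-2, 1-3
weight = {0: 1.0, 1: 4.0, 2: 2.0, 3: 3.0}
print(mwis_tree(adj, weight))                    # 6.0, picking vertices 1 and 2
```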
NASA Technical Reports Server (NTRS)
Hague, D. S.; Vanderburg, J. D.
1977-01-01
A vehicle geometric definition based upon quadrilateral surface elements is used to produce realistic pictures of an aerospace vehicle. The PCSYS programs can be used to visually check geometric data input, monitor geometric perturbations, and visualize the complex spatial inter-relationships between the internal and external vehicle components. PCSYS has two major component programs. The first program, IMAGE, draws a complex aerospace vehicle pictorial representation based on either an approximate but rapid hidden-line algorithm or no hidden-line algorithm at all. The second program, HIDDEN, draws a vehicle representation using an accurate but time-consuming hidden-line algorithm.
Communications oriented programming of parallel iterative solutions of sparse linear systems
NASA Technical Reports Server (NTRS)
Patrick, M. L.; Pratt, T. W.
1986-01-01
Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
Taniguchi, Masahiko; Du, Hai; Lindsey, Jonathan S
2013-09-23
A wide variety of cyclic molecular architectures are built of modular subunits and can be formed combinatorially. The mathematics for enumeration of such objects is well-developed yet lacks key features of importance in chemistry, such as specifying (i) the structures of individual members among a set of isomers, (ii) the distribution (i.e., relative amounts) of products, and (iii) the effect of nonequal ratios of reacting monomers on the product distribution. Here, a software program (Cyclaplex) has been developed to determine the number, identity (including isomers), and relative amounts of linear and cyclic architectures from a given number and ratio of reacting monomers. The program includes both mathematical formulas and generative algorithms for enumeration; the latter go beyond the former to provide desired molecular-relevant information and data-mining features. The program is equipped to enumerate four types of architectures: (i) linear architectures with directionality (macroscopic equivalent = electrical extension cords), (ii) linear architectures without directionality (batons), (iii) cyclic architectures with directionality (necklaces), and (iv) cyclic architectures without directionality (bracelets). The program can be applied to cyclic peptides, cycloveratrylenes, cyclens, calixarenes, cyclodextrins, crown ethers, cucurbiturils, annulenes, expanded meso-substituted porphyrin(ogen)s, and diverse supramolecular (e.g., protein) assemblies. The size of accessible architectures encompasses up to 12 modular subunits derived from 12 reacting monomers or larger architectures (e.g. 13-17 subunits) from fewer types of monomers (e.g. 2-4). A particular application concerns understanding the possible heterogeneity of (natural or biohybrid) photosynthetic light-harvesting oligomers (cyclic, linear) formed from distinct peptide subunits.
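The counting side of such enumeration has a classical closed form; the hedged sketch below counts necklaces and bracelets of n beads over k monomer types via Burnside's lemma. Cyclaplex goes further, enumerating explicit isomer structures and product distributions, which this sketch does not attempt.

```python
# Burnside's lemma: average the number of colorings fixed by each symmetry.
from math import gcd

def necklaces(n, k):
    # rotations only (cyclic architectures with directionality removed by rotation)
    return sum(k ** gcd(r, n) for r in range(n)) // n

def bracelets(n, k):
    rot = sum(k ** gcd(r, n) for r in range(n))
    if n % 2:                       # odd n: every reflection axis passes one bead
        ref = n * k ** ((n + 1) // 2)
    else:                           # even n: two kinds of reflection axes
        ref = (n // 2) * (k ** (n // 2 + 1) + k ** (n // 2))
    return (rot + ref) // (2 * n)

# 4 subunits from 2 monomer types: 6 necklaces, 6 bracelets
print(necklaces(4, 2), bracelets(4, 2))
```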
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize the regularity to design multiobjective optimization algorithms has become the research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on the nondominated sorting is used to choose the individuals to the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Development of closed-loop supply chain network in terms of corporate social responsibility.
Pedram, Ali; Pedram, Payam; Yusoff, Nukman Bin; Sorooshian, Shahryar
2017-01-01
Due to the rise in awareness of environmental issues and the depletion of virgin resources, many firms have attempted to increase the sustainability of their activities. One efficient way to elevate sustainability is the consideration of corporate social responsibility (CSR) by designing a closed loop supply chain (CLSC). This paper has developed a mathematical model to increase corporate social responsibility in terms of job creation. Moreover the model, in addition to increasing total CLSC profit, provides a range of strategic decision solutions for decision makers to select a best action plan for a CLSC. A proposed multi-objective mixed-integer linear programming (MILP) model was solved with non-dominated sorting genetic algorithm II (NSGA-II). Fuzzy set theory was employed to select the best compromise solution from the Pareto-optimal solutions. A numerical example was used to validate the potential application of the proposed model. The results highlight the effect of CSR in the design of CLSC.
Development of closed-loop supply chain network in terms of corporate social responsibility
Pedram, Payam; Yusoff, Nukman Bin; Sorooshian, Shahryar
2017-01-01
Due to the rise in awareness of environmental issues and the depletion of virgin resources, many firms have attempted to increase the sustainability of their activities. One efficient way to elevate sustainability is the consideration of corporate social responsibility (CSR) by designing a closed loop supply chain (CLSC). This paper has developed a mathematical model to increase corporate social responsibility in terms of job creation. Moreover the model, in addition to increasing total CLSC profit, provides a range of strategic decision solutions for decision makers to select a best action plan for a CLSC. A proposed multi-objective mixed-integer linear programming (MILP) model was solved with non-dominated sorting genetic algorithm II (NSGA-II). Fuzzy set theory was employed to select the best compromise solution from the Pareto-optimal solutions. A numerical example was used to validate the potential application of the proposed model. The results highlight the effect of CSR in the design of CLSC. PMID:28384250
Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation
Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir
2016-05-01
Our recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We also test the algorithms on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
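As a toy illustration of the variational iteration correction functional (not the paper's reformulated VIM-II scheme), the sympy sketch below solves u' - u = 0, u(0) = 1 with the standard first-order Lagrange multiplier lambda = -1; successive iterates reproduce the Taylor partial sums of exp(t).

```python
# Variational iteration for u' - u = 0, u(0) = 1:
# u_{n+1}(t) = u_n(t) - int_0^t (u_n'(s) - u_n(s)) ds
import sympy as sp

t, s = sp.symbols("t s")
u = sp.Integer(1)                      # u_0(t) = 1 matches the initial condition
for _ in range(4):
    integrand = (sp.diff(u, t) - u).subs(t, s)
    u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))
print(u)   # t**4/24 + t**3/6 + t**2/2 + t + 1
```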
Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms
NASA Technical Reports Server (NTRS)
Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.;
2010-01-01
INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop... optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the... integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
2002-01-01
UNITY program that implements exactly the same algorithm as Specification 1.1. The correctness of this program is proven in a manner sim... chapter, we introduce the Dynamic UNITY formalism, which allows us to reason about algorithms and protocols in which the sets of participating processes... implements Euclid's algorithm for calculating the greatest common divisor (GCD) of two integers; it repeatedly reads an integer message from each of its
Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU
NASA Astrophysics Data System (ADS)
Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei
2013-09-01
The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed. For the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyzed the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of the schemes and the optimization. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of the optimization. Experimental results also showed that the best-performing CUDA program outperformed the serial libquantum program on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
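A CPU statevector sketch of the arithmetic being simulated may be useful: one Grover iteration is an oracle phase flip on the marked state followed by inversion about the mean. The NumPy version below ignores the paper's actual contribution, the mapping of this arithmetic onto CUDA threads and GPU memory.

```python
# Statevector simulation of Grover search on n qubits.
import numpy as np

def grover(n_qubits, marked, iterations):
    N = 2 ** n_qubits
    psi = np.full(N, 1.0 / np.sqrt(N))     # uniform superposition
    for _ in range(iterations):
        psi[marked] *= -1.0                # oracle: flip the marked amplitude
        psi = 2.0 * psi.mean() - psi       # diffusion: inversion about the mean
    return psi

n = 10
psi = grover(n, marked=3, iterations=int(np.pi / 4 * np.sqrt(2 ** n)))
print("P(marked) =", psi[3] ** 2)          # close to 1
```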
A quasi-Newton algorithm for large-scale nonlinear equations.
Huang, Linghua
2017-01-01
In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to get the step length [Formula: see text]. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
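The abstract does not give the update formulas, so as a hedged stand-in the sketch below shows a classic Jacobian-free quasi-Newton (Broyden) iteration for F(x) = 0; the paper's method additionally uses a CG-generated starting point and a nonmonotone line search, which are not reproduced here.

```python
# Broyden's method: maintain a Jacobian approximation B via rank-one updates.
import numpy as np

def broyden(F, x, iters=100, tol=1e-10):
    B = np.eye(len(x))                     # initial Jacobian approximation
    Fx = F(x)
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)        # quasi-Newton step: B s = -F(x)
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx
        # rank-one update enforcing the secant condition B s = y
        B += np.outer(y - B @ s, s) / (s @ s)
        Fx = Fx_new
    return x

# toy system: x0^2 + x1^2 - 4 = 0, x0 - x1 = 0  =>  x = (sqrt 2, sqrt 2)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
print(broyden(F, np.array([1.0, 2.0])))
```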
Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Rajeeva; Kumar, Aditya; Dai, Dan
2012-12-31
This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and to use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring in a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing trade-offs of alternate OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithms to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g., refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints, and the optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on an LMI formulation and the other a standard INLP formulation. Various algorithms to solve these two formulations were developed and validated. For a given OSP problem the computational efficiency largely depends on the "size" of the problem. Initially a simplified 1-D gasifier model assuming axial and azimuthal symmetry was used to test the various OSP algorithms. Finally these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as the solution to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in modular form and packaged as a software tool for OSP design, where a designer can explore various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in-house in Matlab/Simulink©, and it also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink© which can be utilized for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and RSC fouling profiles are summarized in this final report.
Inclusive high-p⊥ bb̄ cross section measurement at √s = 1.96 TeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galyaev, Eugene N.
2006-11-01
The Run II physics program at the Tevatron started in the spring of 2001 with protons and antiprotons colliding at an energy of √s = 1.96 TeV, and is continuing with about 1.2 fb⁻¹ of data currently collected by the CDF and D0 experiments. A measurement of the b-jet cross section as a function of jet transverse momentum p⊥ has been performed using 312 pb⁻¹ of D0 data. The results for this measurement were obtained and are presented herein. A neural network algorithm was used to identify b jets.
A manual for a laboratory information management system (LIMS) for light stable isotopes
Coplen, Tyler B.
1997-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes 3H, 3He, and 14C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
A manual for a Laboratory Information Management System (LIMS) for light stable isotopes
Coplen, Tyler B.
1998-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes 3H, 3He, and 14C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
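Step (iv), normalization to an international scale, is commonly done as a two-point linear stretch between two reference materials; the sketch below shows that calculation. The accepted VSMOW and SLAP delta-18O values (0 and -55.5 permil) are real, while the measured numbers are made up.

```python
# Two-point normalization of measured delta values to an international scale.
def normalize(measured, ref_measured, ref_true):
    (m1, m2), (t1, t2) = ref_measured, ref_true
    slope = (t2 - t1) / (m2 - m1)          # stretch factor between the anchors
    return [t1 + slope * (d - m1) for d in measured]

# VSMOW = 0 permil, SLAP = -55.5 permil on the delta-18O scale;
# the "measured" reference values are hypothetical raw instrument readings
print(normalize([-10.3, -22.7], ref_measured=(0.4, -54.1), ref_true=(0.0, -55.5)))
```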
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level. Following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to implement objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions to the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching
Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng
2017-01-01
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images. PMID:28885547
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.
Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng
2017-09-08
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images.
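The dual-channel scheme itself is not spelled out in the abstract, so the hedged sketch below shows only the generic dynamic-programming core of seam-line search: the minimum-cost top-to-bottom path through a per-pixel difference map of the overlap region.

```python
# Minimum-cost seam through a difference map by dynamic programming.
import numpy as np

def best_seam(diff):
    h, w = diff.shape
    acc = diff.astype(float).copy()            # accumulated energy per pixel
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()   # step from one of 3 parents
    seam = [int(np.argmin(acc[-1]))]               # cheapest end point
    for y in range(h - 2, -1, -1):                 # backtrack upwards
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]                              # seam x-coordinate per row

diff = np.random.rand(6, 5)                        # stand-in for |imgA - imgB|
print(best_seam(diff))
```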
RoboTAP: Target priorities for robotic microlensing observations
NASA Astrophysics Data System (ADS)
Hundertmark, M.; Street, R. A.; Tsapras, Y.; Bachelet, E.; Dominik, M.; Horne, K.; Bozza, V.; Bramich, D. M.; Cassan, A.; D'Ago, G.; Figuera Jaimes, R.; Kains, N.; Ranc, C.; Schmidt, R. W.; Snodgrass, C.; Wambsganss, J.; Steele, I. A.; Mao, S.; Ment, K.; Menzies, J.; Li, Z.; Cross, S.; Maoz, D.; Shvartzvald, Y.
2018-01-01
Context. The ability to automatically select scientifically-important transient events from an alert stream of many such events, and to conduct follow-up observations in response, will become increasingly important in astronomy. With wide-angle time domain surveys pushing to fainter limiting magnitudes, the capability to follow-up on transient alerts far exceeds our follow-up telescope resources, and effective target prioritization becomes essential. The RoboNet-II microlensing program is a pathfinder project, which has developed an automated target selection process (RoboTAP) for gravitational microlensing events, which are observed in real time using the Las Cumbres Observatory telescope network. Aims: Follow-up telescopes typically have a much smaller field of view compared to surveys, therefore the most promising microlensing events must be automatically selected at any given time from an annual sample exceeding 2000 events. The main challenge is to select between events with a high planet detection sensitivity, with the aim of detecting many planets and characterizing planetary anomalies. Methods: Our target selection algorithm is a hybrid system based on estimates of the planet detection zones around a microlens. It follows automatic anomaly alerts and respects the expected survey coverage of specific events. Results: We introduce the RoboTAP algorithm, whose purpose is to select and prioritize microlensing events with high sensitivity to planetary companions. In this work, we determine the planet sensitivity of the RoboNet follow-up program and provide a working example of how a broker can be designed for a real-life transient science program conducting follow-up observations in response to alerts; we explore the issues that will confront similar programs being developed for the Large Synoptic Survey Telescope (LSST) and other time domain surveys.
A short note on dynamic programming in a band.
Gibrat, Jean-François
2018-06-15
Third generation sequencing technologies generate long reads that exhibit high error rates, in particular for insertions and deletions, which are usually the most difficult errors to cope with. The only exact algorithm capable of aligning sequences with insertions and deletions is a dynamic programming algorithm. In this note, for the sake of efficiency, we consider dynamic programming in a band. We show how to choose the band width as a function of the long reads' error rates, thus obtaining an [Formula: see text] algorithm in space and time. We also propose a procedure to decide whether this algorithm, when applied to semi-global alignments, provides the optimal score. We suggest that dynamic programming in a band is well suited to the problem of aligning long reads with one another and can be used as a core component of methods for obtaining a consensus sequence from the long reads alone. The function implementing the dynamic programming algorithm in a band is available, as a standalone program, at: https://forgemia.inra.fr/jean-francois.gibrat/BAND_DYN_PROG.git.
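A hedged sketch of the underlying technique: global alignment by dynamic programming restricted to a band of half-width w around the diagonal, which reduces the cost from O(nm) to roughly O(nw). The scoring values are illustrative, and the note's band-width choice from error rates and its optimality test are not reproduced.

```python
# Needleman-Wunsch style alignment, computed only inside a diagonal band.
def banded_align(a, b, w, match=1, mismatch=-1, gap=-2):
    NEG = float("-inf")
    n, m = len(a), len(b)
    score = {(0, 0): 0}
    for i in range(n + 1):
        for j in range(max(0, i - w), min(m, i + w) + 1):   # cells in the band
            if (i, j) == (0, 0):
                continue
            best = NEG
            if i > 0 and j > 0:
                s = match if a[i - 1] == b[j - 1] else mismatch
                best = max(best, score.get((i - 1, j - 1), NEG) + s)
            if i > 0:   # cells outside the band default to -inf
                best = max(best, score.get((i - 1, j), NEG) + gap)
            if j > 0:
                best = max(best, score.get((i, j - 1), NEG) + gap)
            score[(i, j)] = best
    return score[(n, m)]

# one insertion needed; the band of half-width 2 easily contains it
print(banded_align("GATTACA", "GATTTACA", w=2))   # 5
```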
Using multicriteria decision analysis during drug development to predict reimbursement decisions.
Williams, Paul; Mauskopf, Josephine; Lebiecki, Jake; Kilburg, Anne
2014-01-01
Pharmaceutical companies design clinical development programs to generate the data that they believe will support reimbursement for the experimental compound. The objective of the study was to present a process for using multicriteria decision analysis (MCDA) by a pharmaceutical company to estimate the probability of a positive recommendation for reimbursement for a new drug given drug and environmental attributes. The MCDA process included 1) selection of decision makers who were representative of those making reimbursement decisions in a specific country; 2) two pre-workshop questionnaires to identify the most important attributes and their relative importance for a positive recommendation for a new drug; 3) a 1-day workshop during which participants undertook three tasks: i) they agreed on a final list of decision attributes and their importance weights, ii) they developed level descriptions for these attributes and mapped each attribute level to a value function, and iii) they developed profiles for hypothetical products 'just likely to be reimbursed'; and 4) use of the data from the workshop to develop a prediction algorithm based on a logistic regression analysis. The MCDA process is illustrated using case studies for three countries, the United Kingdom, Germany, and Spain. The extent to which the prediction algorithms for each country captured the decision processes for the workshop participants in our case studies was tested using a post-meeting questionnaire that asked the participants to make recommendations for a set of hypothetical products. The data collected in the case study workshops resulted in a prediction algorithm: 1) for the United Kingdom, the probability of a positive recommendation for different ranges of cost-effectiveness ratios; 2) for Spain, the probability of a positive recommendation at the national and regional levels; and 3) for Germany, the probability of a determination of clinical benefit. The results from the post-meeting questionnaire revealed a high predictive value for the algorithm developed using MCDA. Prediction algorithms developed using MCDA could be used by pharmaceutical companies when designing their clinical development programs to estimate the likelihood of a favourable reimbursement recommendation for different product profiles and for different positions in the treatment pathway.
Using multicriteria decision analysis during drug development to predict reimbursement decisions
Williams, Paul; Mauskopf, Josephine; Lebiecki, Jake; Kilburg, Anne
2014-01-01
Background Pharmaceutical companies design clinical development programs to generate the data that they believe will support reimbursement for the experimental compound. Objective The objective of the study was to present a process for using multicriteria decision analysis (MCDA) by a pharmaceutical company to estimate the probability of a positive recommendation for reimbursement for a new drug given drug and environmental attributes. Methods The MCDA process included 1) selection of decision makers who were representative of those making reimbursement decisions in a specific country; 2) two pre-workshop questionnaires to identify the most important attributes and their relative importance for a positive recommendation for a new drug; 3) a 1-day workshop during which participants undertook three tasks: i) they agreed on a final list of decision attributes and their importance weights, ii) they developed level descriptions for these attributes and mapped each attribute level to a value function, and iii) they developed profiles for hypothetical products ‘just likely to be reimbursed’; and 4) use of the data from the workshop to develop a prediction algorithm based on a logistic regression analysis. The MCDA process is illustrated using case studies for three countries: the United Kingdom, Germany, and Spain. The extent to which the prediction algorithms for each country captured the decision processes of the workshop participants in our case studies was tested using a post-meeting questionnaire that asked the participants to make recommendations for a set of hypothetical products. Results The data collected in the case study workshops resulted in a prediction algorithm: 1) for the United Kingdom, the probability of a positive recommendation for different ranges of cost-effectiveness ratios; 2) for Spain, the probability of a positive recommendation at the national and regional levels; and 3) for Germany, the probability of a determination of clinical benefit. The results from the post-meeting questionnaire revealed a high predictive value for the algorithm developed using MCDA. Conclusions Prediction algorithms developed using MCDA could be used by pharmaceutical companies when designing their clinical development programs to estimate the likelihood of a favourable reimbursement recommendation for different product profiles and for different positions in the treatment pathway.
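As an illustration of the final step, the sketch below combines attribute value functions, importance weights, and a logistic model into a probability of a positive recommendation. All attribute names, weights, and coefficients are invented for the example and are not taken from the study:

    import math

    # Hypothetical MCDA-to-probability sketch: each attribute level is mapped
    # to a 0-1 value score, the scores are combined with importance weights,
    # and a logistic model turns the weighted total into a probability of a
    # positive reimbursement recommendation.
    weights = {"clinical_benefit": 0.45, "cost_effectiveness": 0.35, "unmet_need": 0.20}

    def value_functions(profile):
        # Map raw attribute levels to 0-1 partial value scores (illustrative).
        return {
            "clinical_benefit": profile["effect_size"] / 10.0,
            "cost_effectiveness": max(0.0, 1.0 - profile["icer"] / 50000.0),
            "unmet_need": 1.0 if profile["no_alternative"] else 0.3,
        }

    def recommendation_probability(profile, intercept=-4.0, slope=8.0):
        scores = value_functions(profile)
        total = sum(weights[k] * scores[k] for k in weights)
        return 1.0 / (1.0 + math.exp(-(intercept + slope * total)))

    drug = {"effect_size": 6, "icer": 30000, "no_alternative": True}
    print(round(recommendation_probability(drug), 2))   # -> 0.71 with these invented numbers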
Towards Internet QoS Provisioning Based on Generic Distributed QoS Adaptive Routing Engine
Haikal, Amira Y.; Badawy, M.; Ali, Hesham A.
2014-01-01
Increasing efficiency and quality demands of modern Internet technologies drive today's network engineers to provide quality of service (QoS). Internet QoS provisioning gives rise to several challenging issues. This paper introduces a generic distributed QoS adaptive routing engine (DQARE) architecture based on OSPFxQoS. The novelty of the proposed work is its independence from the underlying QoS architecture and its separation of the control strategy from the data-forwarding mechanisms, which guarantees a set of stable mechanisms on top of which Internet QoS can be built. The DQARE architecture is furnished with three relevant traffic control schemes, namely, service differentiation, QoS routing, and traffic engineering. The main objectives of this paper are to (i) provide a general configuration guideline for service differentiation, (ii) formalize the theoretical properties of different QoS routing algorithms and then introduce a QoS routing algorithm (QOPRA) based on a dynamic programming technique, and (iii) propose a QoS multipath forwarding (QMPF) model for exploiting path diversity. NS2-based simulations demonstrate DQARE's superiority in terms of delay, packet delivery ratio, throughput, and control overhead. Moreover, extensive simulations are used to compare the proposed QOPRA algorithm and QMPF model with their counterparts in the literature. PMID:25309955
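For a flavor of what a dynamic-programming QoS routing formulation looks like, here is a textbook delay-constrained least-cost path recursion in Python; it is an illustration of the general technique, not the QOPRA algorithm itself, and it assumes positive integer link delays:

    # cost[v][d]: cheapest cost to reach node v with total delay at most d.
    def dcl_cost_path(edges, n, src, dst, max_delay):
        INF = float("inf")
        cost = [[INF] * (max_delay + 1) for _ in range(n)]
        for d in range(max_delay + 1):
            cost[src][d] = 0
        for d in range(1, max_delay + 1):       # increasing delay budget
            for u, v, c, delay in edges:        # edge: u -> v, cost c, delay
                if delay <= d and cost[u][d - delay] + c < cost[v][d]:
                    cost[v][d] = cost[u][d - delay] + c
            for v in range(n):                  # a larger budget is never worse
                cost[v][d] = min(cost[v][d], cost[v][d - 1])
        return cost[dst][max_delay]

    edges = [(0, 1, 1, 2), (1, 3, 1, 2), (0, 2, 5, 1), (2, 3, 5, 1)]
    print(dcl_cost_path(edges, 4, 0, 3, max_delay=4))   # -> 2 (cheaper but slower path)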
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms remains a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute-intensive, limiting the scale at which they can be applied. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) parallel algorithms that reduce run time and facilitate the construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
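The core pairwise quantity can be pictured with a plain histogram-based mutual information estimator; the paper uses a B-spline formulation for linear-time computation, so this Python sketch conveys only the idea:

    import numpy as np

    # Histogram estimator of mutual information between two expression profiles.
    def mutual_information(x, y, bins=8):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0                            # avoid log(0) terms
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    print(mutual_information(x, x + 0.1 * rng.normal(size=500)))  # strongly dependent
    print(mutual_information(x, rng.normal(size=500)))            # near zero (up to histogram bias)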
Configuring Airspace Sectors with Approximate Dynamic Programming
NASA Technical Reports Server (NTRS)
Bloem, Michael; Gupta, Pramod
2010-01-01
In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
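A stylized version of the finite-horizon recursion can be sketched as follows; configuration names and costs are hypothetical, and the operational workload models and the rollouts approximation are not reproduced:

    # Finite-horizon dynamic program over airspace configurations: at each time
    # step pick a configuration, paying a workload cost plus a reconfiguration
    # cost whenever the configuration changes.
    def optimal_configurations(workload, reconf_cost, allowed):
        # workload[t][c]: workload cost of configuration c during step t
        # allowed[t]: configurations satisfying the position constraint at t
        T = len(workload)
        best = {c: workload[0][c] for c in allowed[0]}
        choice = [{}]
        for t in range(1, T):
            nxt, back = {}, {}
            for c in allowed[t]:
                prev_c, val = min(
                    ((p, v + (reconf_cost if p != c else 0)) for p, v in best.items()),
                    key=lambda kv: kv[1])
                nxt[c] = val + workload[t][c]
                back[c] = prev_c
            best = nxt
            choice.append(back)
        c = min(best, key=best.get)               # cheapest final configuration
        total, plan = best[c], [c]
        for t in range(T - 1, 0, -1):             # backtrack the optimal sequence
            c = choice[t][c]
            plan.append(c)
        return total, plan[::-1]

    workload = [{"A": 3, "B": 1}, {"A": 1, "B": 3}, {"A": 1, "B": 3}]
    allowed = [{"A", "B"}] * 3
    print(optimal_configurations(workload, reconf_cost=1, allowed=allowed))  # -> (4, ['B', 'A', 'A'])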
Intelligent Use of CFAR Algorithms
1993-05-01
Interim report, Rome Laboratory RL-TR-93-75, prepared by Kaman Sciences Corporation (P. Antonik et al.) under contract F30602-91-C-0017, covering January-September 1992. Recovered abstract fragment: ...the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that...
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
...target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms to moving target detection and tracking. ...moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
A comparison of common programming languages used in bioinformatics.
Fourment, Mathieu; Gillings, Michael R
2008-02-05
The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
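The measurement setup can be pictured with a small Python harness that times a function and records its peak memory, here applied to a toy stand-in for BLAST output parsing. This is illustrative only; the published benchmark used full implementations of each method in all six languages:

    import time, tracemalloc

    # Measure wall-clock time and peak memory for one run of a function.
    def benchmark(fn, *args):
        tracemalloc.start()
        t0 = time.perf_counter()
        result = fn(*args)
        elapsed = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return result, elapsed, peak

    def parse_blast_hits(lines):
        # Toy stand-in for BLAST output parsing: keep (query, subject, score).
        return [tuple(l.split()[:3]) for l in lines if l and not l.startswith("#")]

    demo = ["# header", "q1 s1 98.2", "q1 s2 87.5"]
    hits, secs, peak_bytes = benchmark(parse_blast_hits, demo)
    print(hits, f"{secs:.6f}s", f"{peak_bytes}B")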
An O(√nL) primal-dual affine scaling algorithm for linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Siming
1994-12-31
We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a recent affine scaling direction of Jansen, Roos and Terlaky. The algorithm has an iteration complexity of O(√nL), compared with the O(nL) complexity of the method of Jansen, Roos and Terlaky.
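For orientation, the classical Dikin direction mentioned in the abstract can be sketched in a few lines of Python; the paper's combined direction is not reproduced here:

    import numpy as np

    # One Dikin-style primal affine-scaling step for min c.x s.t. Ax = b, x > 0.
    def affine_scaling_step(A, b, c, x, alpha=0.9):
        D2 = np.diag(x**2)                              # scaling by the current iterate
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
        r = c - A.T @ w                                 # reduced costs
        dx = -D2 @ r                                    # steepest descent in scaled space
        neg = dx < 0
        if not neg.any():
            return x                                    # optimal or unbounded direction
        step = alpha * np.min(-x[neg] / dx[neg])        # stay strictly inside x > 0
        return x + step * dx

    # min x1 + 2*x2  s.t.  x1 + x2 = 1, x > 0  (optimum at x = (1, 0))
    A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
    x = np.array([0.5, 0.5])
    for _ in range(20):
        x = affine_scaling_step(A, b, c, x)
    print(np.round(x, 4))                               # approaches (1, 0)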
Programming Deep Brain Stimulation for Parkinson's Disease: The Toronto Western Hospital Algorithms.
Picillo, Marina; Lozano, Andres M; Kou, Nancy; Puppi Munhoz, Renato; Fasano, Alfonso
2016-01-01
Deep brain stimulation (DBS) is an established and effective treatment for Parkinson's disease (PD). After surgery, a number of extensive programming sessions are performed to define the optimal stimulation parameters. Programming sessions rely mainly on the neurologist's experience. As a result, patients often undergo inconsistent and inefficient stimulation changes, as well as unnecessary visits. We reviewed the literature on initial and follow-up DBS programming procedures and integrated our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We propose four algorithms including the initial programming and specific algorithms tailored to symptoms experienced by patients following DBS: speech disturbances, stimulation-induced dyskinesia and gait impairment. We conducted a literature search of PubMed from inception to July 2014 with the keywords "deep brain stimulation", "festination", "freezing", "initial programming", "Parkinson's disease", "postural instability", "speech disturbances", and "stimulation induced dyskinesia". Seventy papers were considered for this review. Based on the literature review and our experience at TWH, we refined four algorithms for: (1) the initial programming stage, and management of symptoms following DBS, particularly addressing (2) speech disturbances, (3) stimulation-induced dyskinesia, and (4) gait impairment. We propose four algorithms tailored to an individualized approach to managing symptoms associated with DBS and disease progression in patients with PD. We encourage established as well as new DBS centers to test the clinical usefulness of these algorithms in supplementing the current standards of care.
One cutting plane algorithm using auxiliary functions
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Kazaeva, K. E.
2016-11-01
We propose an algorithm for solving a convex programming problem that belongs to the class of cutting methods. The algorithm is characterized by the construction of approximations using auxiliary functions instead of the objective function. Each auxiliary function is based on an exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded in polyhedral sets. Consequently, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.
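To picture how cutting methods reduce a convex problem to a sequence of linear programs, here is a minimal Kelley-style cutting-plane loop in Python; it is a standard member of this class of methods, not the authors' auxiliary-function variant:

    from scipy.optimize import linprog

    # Minimize f(x) = x^2 over [-2, 3] by accumulating tangent-line cuts.
    f = lambda x: x * x
    fprime = lambda x: 2 * x

    cuts = []                         # each cut: t >= f(xk) + f'(xk) * (x - xk)
    x = 3.0
    for _ in range(15):
        g, h = fprime(x), f(x) - fprime(x) * x      # slope and intercept of the tangent
        cuts.append((g, h))
        # LP in variables (x, t): minimize t subject to g*x - t <= -h for every cut
        A_ub = [[g_, -1.0] for g_, h_ in cuts]
        b_ub = [-h_ for g_, h_ in cuts]
        res = linprog([0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(-2.0, 3.0), (None, None)])
        x = res.x[0]
    print(round(x, 4))                # approaches the true minimizer x = 0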
Algorithm Calculates Cumulative Poisson Distribution
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
The algorithm calculates accurate values of the cumulative Poisson distribution under conditions where other algorithms fail because the numbers involved are so small (underflow) or so large (overflow) that the computer cannot process them. Factors are inserted temporarily to prevent underflow and overflow. The algorithm is implemented in the CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
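The underflow/overflow difficulty can be illustrated by accumulating the Poisson terms in log space; this is a sketch of the numerical idea, not the CUMPOIS code itself:

    import math

    # Underflow/overflow-safe cumulative Poisson: work with log P(X = i)
    # and combine terms with a log-sum-exp style rescaling, instead of
    # multiplying raw exponentials and factorials.
    def poisson_cdf(k, lam):
        # log P(X = i) = -lam + i*log(lam) - lgamma(i + 1)
        log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                     for i in range(k + 1)]
        m = max(log_terms)                      # rescale by the largest term
        return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

    # Naive evaluation would overflow 1000**900 and underflow exp(-1000);
    # the log-space version returns a small but representable probability.
    print(poisson_cdf(900, 1000.0))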
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
NASA Astrophysics Data System (ADS)
Zhao, Yinan; Ge, Jian; Yuan, Xiaoyong; Li, Xiaolin; Zhao, Tiffany; Wang, Cindy
2018-01-01
Metal absorption line systems in distant quasar spectra have been used as one of the most powerful tools to probe gas content in the early Universe. The MgII λλ 2796, 2803 doublet is one of the most popular metal absorption lines and has been used to trace gas and global star formation at redshifts between ~0.5 and 2.5. In the past, machine learning algorithms such as Principal Component Analysis, Gaussian Processes and decision trees have been used to detect absorption line systems in large sky surveys, but the overall detection process is not only complicated but also time-consuming. It usually takes a few months to go through the entire quasar spectral dataset from each Sloan Digital Sky Survey (SDSS) data release. In this work, we applied deep neural network, or "deep learning", algorithms to the most recent SDSS DR14 quasar spectra and were able to search 20000 randomly selected quasar spectra and detect 2887 strong Mg II absorption features in just 9 seconds. Our detection algorithms were verified against the previously released DR12 and DR7 data and the published Mg II catalog, and the detection accuracy is 90%. This is the first time that a deep neural network has demonstrated its promising power in both speed and accuracy in replacing tedious, repetitive human work in searching for narrow absorption patterns in a big dataset. We will present our detection algorithms and statistical results of the newly detected Mg II absorption lines.
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
Campos, Nicole G; Maza, Mauricio; Alfaro, Karla; Gage, Julia C; Castle, Philip E; Felix, Juan C; Cremer, Miriam L; Kim, Jane J
2015-08-15
Cervical cancer is the leading cause of cancer death among women in El Salvador. Utilizing data from the Cervical Cancer Prevention in El Salvador (CAPE) demonstration project, we assessed the health and economic impact of HPV-based screening and two different algorithms for the management of women who test HPV-positive, relative to existing Pap-based screening. We calibrated a mathematical model of cervical cancer to epidemiologic data from El Salvador and compared three screening algorithms for women aged 30-65 years: (i) HPV screening every 5 years followed by referral to colposcopy for HPV-positive women (Colposcopy Management [CM]); (ii) HPV screening every 5 years followed by treatment with cryotherapy for eligible HPV-positive women (Screen and Treat [ST]); and (iii) Pap screening every 2 years followed by referral to colposcopy for Pap-positive women (Pap). Potential harms and complications associated with overtreatment were not assessed. Under base case assumptions of 65% screening coverage, HPV-based screening was more effective than Pap, reducing cancer risk by ~60% (Pap: 50%). ST was the least costly strategy and cost $2,040 per year of life saved. ST remained the most attractive strategy as visit compliance, costs, coverage, and test performance were varied. We conclude that a screen-and-treat algorithm within an HPV-based screening program is very cost-effective in El Salvador, with a cost-effectiveness ratio below the per capita GDP.
Real-time stereo matching using orthogonal reliability-based dynamic programming.
Gong, Minglun; Yang, Yee-Hong
2007-03-01
A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best-path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. Experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60%-80% of pixels at a rate of 10-20 frames per second. If needed, the algorithm can be configured to generate full-density disparity maps.
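One scan-line pass of such a dynamic program can be sketched as follows; this is a generic disparity DP with a smoothness penalty, and the paper's reliability-based multi-pass scheme and GPU mapping are not reproduced:

    import numpy as np

    # Single-scanline disparity DP: choose a disparity per pixel minimizing
    # matching cost plus a smoothness penalty between neighboring pixels.
    def scanline_dp(left, right, max_disp, smooth=0.1):
        n, D = len(left), max_disp + 1
        unary = np.full((n, D), np.inf)
        for x in range(n):
            for d in range(min(D, x + 1)):            # need x - d >= 0
                unary[x, d] = abs(left[x] - right[x - d])
        total = unary.copy()
        back = np.zeros((n, D), dtype=int)
        for x in range(1, n):
            for d in range(D):
                prev = total[x - 1] + smooth * np.abs(np.arange(D) - d)
                back[x, d] = int(np.argmin(prev))
                total[x, d] = unary[x, d] + prev[back[x, d]]
        disp = np.zeros(n, dtype=int)                 # backtrace cheapest path
        disp[-1] = int(np.argmin(total[-1]))
        for x in range(n - 1, 0, -1):
            disp[x - 1] = back[x, disp[x]]
        return disp

    left = np.array([5, 5, 9, 1, 5, 5], dtype=float)
    right = np.array([9, 1, 5, 5, 5, 5], dtype=float)  # scene shifted by 2 pixels
    print(scanline_dp(left, right, max_disp=3))        # -> [0 0 2 2 2 2]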
A computerized compensator design algorithm with launch vehicle applications
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1976-01-01
This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant control systems with a plant possessing a single control input and multiple outputs. The achievement of frequency response specifications is cast into a strict-constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical requirements for application to systems of the above type. A computer program, the compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.
Multiobjective immune algorithm with nondominated neighbor-based selection.
Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng
2008-01-01
The Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization by using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only the minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis based on three performance metrics, the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective, and NNIA is an effective algorithm for solving multiobjective optimization problems. The empirical study on NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well along the number of objectives.
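The crowding-distance quantity that drives NNIA's proportional cloning is the standard NSGA-II measure, sketched here in Python:

    # Crowding distance over a nondominated front: per objective, each point
    # accumulates the normalized gap between its two sorted neighbors;
    # boundary points get infinity so they are always preferred.
    def crowding_distance(front):
        n, m = len(front), len(front[0])
        dist = [0.0] * n
        for k in range(m):
            order = sorted(range(n), key=lambda i: front[i][k])
            lo, hi = front[order[0]][k], front[order[-1]][k]
            dist[order[0]] = dist[order[-1]] = float("inf")
            if hi == lo:
                continue
            for rank in range(1, n - 1):
                i = order[rank]
                dist[i] += (front[order[rank + 1]][k] - front[order[rank - 1]][k]) / (hi - lo)
        return dist

    front = [(0.0, 1.0), (0.3, 0.55), (0.5, 0.5), (1.0, 0.0)]
    print(crowding_distance(front))   # -> [inf, 1.0, 1.25, inf]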
48 CFR 1852.219-81 - Limitation on subcontracting-SBIR Phase II program.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subcontracting-SBIR Phase II program. 1852.219-81 Section 1852.219-81 Federal Acquisition Regulations System... CLAUSES Texts of Provisions and Clauses 1852.219-81 Limitation on subcontracting—SBIR Phase II program. As prescribed in 1819.7302(b), insert the following clause: Limitation on Subcontracting—SBIR Phase II Program...
48 CFR 1852.219-81 - Limitation on subcontracting-SBIR Phase II program.
Code of Federal Regulations, 2013 CFR
2013-10-01
... subcontracting-SBIR Phase II program. 1852.219-81 Section 1852.219-81 Federal Acquisition Regulations System... CLAUSES Texts of Provisions and Clauses 1852.219-81 Limitation on subcontracting—SBIR Phase II program. As prescribed in 1819.7302(b), insert the following clause: Limitation on Subcontracting—SBIR Phase II Program...
48 CFR 1852.219-81 - Limitation on subcontracting-SBIR Phase II program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subcontracting-SBIR Phase II program. 1852.219-81 Section 1852.219-81 Federal Acquisition Regulations System... CLAUSES Texts of Provisions and Clauses 1852.219-81 Limitation on subcontracting—SBIR Phase II program. As prescribed in 1819.7302(b), insert the following clause: Limitation on Subcontracting—SBIR Phase II Program...
40 CFR 52.1519 - Identification of plan-conditional approval.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Air Act (CAA) elements 110(a)(2)(A), (C) only as it relates to the PSD program, (D)(i)(II) only as it relates to the PSD program, (E)(ii), and (J) only as it relates to the PSD program. This conditional... relates to the PSD program, (D)(i)(II) only as it relates to the PSD program, (E)(ii), and (J) only as it...
40 CFR 52.1519 - Identification of plan-conditional approval.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Air Act (CAA) elements 110(a)(2)(A), (C) only as it relates to the PSD program, (D)(i)(II) only as it relates to the PSD program, (E)(ii), and (J) only as it relates to the PSD program. This conditional... relates to the PSD program, (D)(i)(II) only as it relates to the PSD program, (E)(ii), and (J) only as it...
Universal algorithms and programs for calculating the motion parameters in the two-body problem
NASA Technical Reports Server (NTRS)
Bakhshiyan, B. T.; Sukhanov, A. A.
1979-01-01
The algorithms and FORTRAN programs for computing positions and velocities, orbital elements and first and second partial derivatives in the two-body problem are presented. The algorithms are applicable for any value of eccentricity and are convenient for computing various navigation parameters.
REACTT: an algorithm for solving spatial equilibrium problems.
D.J. Brooks; J. Kincaid
1987-01-01
The problem of determining equilibrium prices and quantities in spatially separated markets is reviewed. Algorithms that compute spatial equilibria are discussed. A computer program using the reactive programming algorithm for solving spatial equilibrium problems that involve multiple commodities is presented, along with detailed documentation. A sample data set,...
Algorithm for constructing the programmed motion of a bounding vehicle for the flight phase
NASA Technical Reports Server (NTRS)
Lapshin, V. V.
1979-01-01
The construction of the programmed motion of a multileg bounding vehicle in the flight phase was studied. An algorithm is given for solving the boundary value problem for constructing this programmed motion. If the motion is shown to be symmetrical, a simplified version of the algorithm can be applied. A method is proposed to avoid leg impact during lift-off of the vehicle and to ensure softness at touchdown. Tables are utilized to construct this programmed motion over a broad set of standard motion conditions.
NASA Technical Reports Server (NTRS)
Miernecki, Maciej; Wigneron, Jean-Pierre; Lopez-Baeza, Ernesto; Kerr, Yann; DeJeu, Richard; DeLannoy, Gabielle J. M.; Jackson, Tom J.; O'Neill, Peggy E.; Shwank, Mike; Moran, Roberto Fernandez;
2014-01-01
The objective of this study was to compare several approaches to soil moisture (SM) retrieval using L-band microwave radiometry. The comparison was based on a brightness temperature (TB) data set acquired since 2010 by the L-band radiometer ELBARA-II over a vineyard field at the Valencia Anchor Station (VAS) site. ELBARA-II, provided by the European Space Agency (ESA) within the scientific program of the SMOS (Soil Moisture and Ocean Salinity) mission, measures multiangular TB data at horizontal and vertical polarization for a range of incidence angles (30°-60°). Based on a three-year data set (2010-2012), several SM retrieval approaches developed for the spaceborne missions AMSR-E (Advanced Microwave Scanning Radiometer for EOS), SMAP (Soil Moisture Active Passive) and SMOS were compared. The approaches include: the Single Channel Algorithm (SCA) for horizontal (SCA-H) and vertical (SCA-V) polarizations, the Dual Channel Algorithm (DCA), the Land Parameter Retrieval Model (LPRM) and two simplified approaches based on statistical regressions (referred to as 'Mattar' and 'Saleh'). Time series of the vegetation indices required by three of the algorithms (SCA-H, SCA-V and Mattar) were obtained from MODIS observations. The SM retrievals were evaluated against reference SM values estimated from a multiangular 2-Parameter inversion approach. The results obtained with the current baseline algorithms developed for SMAP (SCA-H and -V) are in very good agreement with the reference SM data set derived from the multiangular observations (R2 around 0.90, RMSE varying between 0.035 and 0.056 m3/m3 for several retrieval configurations). This result showed that, provided the relationship between vegetation optical depth and a remotely sensed vegetation index can be calibrated, the SCA algorithms can provide results very close to those obtained from multiangular observations in this study area. The approaches based on statistical regressions provided similar results, and the best accuracy was obtained with the Saleh methods based on either bi-angular or bi-polarization observations (R2 around 0.93, RMSE around 0.035 m3/m3). The LPRM and DCA algorithms were found to be slightly less successful in retrieving the 'reference' SM time series (R2 around 0.75, RMSE around 0.055 m3/m3). However, these two approaches have the great advantage of not requiring any model calibration prior to the SM retrievals.
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
Multiple object tracking using the shortest path faster association algorithm.
Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin
2014-01-01
To address persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem. Therefore, the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
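The relaxation core named in the title, the shortest path faster algorithm (SPFA), is a queue-driven variant of Bellman-Ford. The sketch below shows SPFA itself, not the full network-flow association pipeline built on top of it:

    from collections import deque

    # SPFA: relax edges only out of nodes whose distance recently improved.
    # Handles negative edge weights (assuming no negative cycles).
    def spfa(n, edges, src):
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            adj[u].append((v, w))
        dist = [float("inf")] * n
        dist[src] = 0.0
        inq = [False] * n
        q = deque([src]); inq[src] = True
        while q:
            u = q.popleft(); inq[u] = False
            for v, w in adj[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    if not inq[v]:            # enqueue at most once per improvement
                        q.append(v); inq[v] = True
        return dist

    edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
    print(spfa(4, edges, 0))   # -> [0.0, 3.0, 1.0, 4.0]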
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a new and promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, besides the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. The Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on the Delaunay triangulation and linear programming techniques. The exact algorithm achieves the optimal SIF solution with exponential computational complexity, while the heuristic algorithm achieves a sub-optimal SIF solution with polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
Varma, Niraj; O'Donnell, David; Bassiouny, Mohammed; Ritter, Philippe; Pappone, Carlo; Mangual, Jan; Cantillon, Daniel; Badie, Nima; Thibault, Bernard; Wisnoskey, Brian
2018-02-06
QRS narrowing following cardiac resynchronization therapy with biventricular (BiV) or left ventricular (LV) pacing is likely affected by patient-specific conduction characteristics (PR, qLV, LV-paced propagation interval), making a universal programming strategy likely ineffective. We tested these factors using a novel, device-based algorithm (SyncAV) that automatically adjusts the paced atrioventricular delay (default or programmable offset) according to intrinsic atrioventricular conduction. Seventy-five patients undergoing cardiac resynchronization therapy (age 66±11 years; 65% male; 32% with ischemic cardiomyopathy; LV ejection fraction 28±8%; QRS duration 162±16 ms) with intact atrioventricular conduction (PR interval 194±34, range 128-300 ms), left bundle branch block, and optimized LV lead position were studied at implant. QRS duration (QRSd) reduction was compared for the following pacing configurations: nominal simultaneous BiV (Mode I: paced/sensed atrioventricular delay=140/110 ms), BiV+SyncAV with 50 ms offset (Mode II), BiV+SyncAV with the offset that minimized QRSd (Mode III), or LV-only pacing+SyncAV with 50 ms offset (Mode IV). The intrinsic QRSd (162±16 ms) was reduced to 142±17 ms (-11.8%) by Mode I, 136±14 ms (-15.6%) by Mode IV, and 132±13 ms (-17.8%) by Mode II. Mode III yielded the shortest overall QRSd (123±12 ms, -23.9% [P<0.001 versus all modes]) and was the only configuration without QRSd prolongation in any patient. QRS narrowing occurred regardless of QRSd, PR, or LV-paced intervals, or underlying ischemic disease. Post-implant electrical optimization in already well-selected patients with left bundle branch block and optimized LV lead position is facilitated by patient-tailored BiV pacing adjusted to intrinsic atrioventricular timing using an automatic device-based algorithm.
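The adjustment principle reduces to one line of arithmetic: the paced AV delay tracks the measured intrinsic interval minus the programmed offset. The sketch below is a stylized illustration, not the device's actual logic; the 30 ms floor is an invented safeguard:

    # Stylized SyncAV-like adjustment (illustrative only).
    def paced_av_delay(intrinsic_av_ms, offset_ms=50, floor_ms=30):
        # floor_ms is a hypothetical lower bound, not a device parameter
        return max(floor_ms, intrinsic_av_ms - offset_ms)

    for pr in (128, 194, 300):        # the study's PR range endpoints and mean
        print(pr, "->", paced_av_delay(pr), "ms")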
Dynamic programming re-ranking for PPI interactor and pair extraction in full-text articles.
Tsai, Richard Tzong-Han; Lai, Po-Ting
2011-02-23
Experimentally verified protein-protein interactions (PPIs) cannot be easily retrieved by researchers unless they are stored in PPI databases. The curation of such databases can be facilitated by employing text-mining systems to identify genes which play the interactor role in PPIs and to map these genes to unique database identifiers (interactor normalization task or INT) and then to return a list of interaction pairs for each article (interaction pair task or IPT). These two tasks are evaluated in terms of the area under curve of the interpolated precision/recall (AUC iP/R) score because the order of identifiers in the output list is important for ease of curation. Our INT system developed for the BioCreAtIvE II.5 INT challenge achieved a promising AUC iP/R of 43.5% by using a support vector machine (SVM)-based ranking procedure. Using our new re-ranking algorithm, we have been able to improve system performance (AUC iP/R) by 1.84%. Our experimental results also show that with the re-ranked INT results, our unsupervised IPT system can achieve a competitive AUC iP/R of 23.86%, which outperforms the best BC II.5 INT system by 1.64%. Compared to using only SVM-ranked INT results, using re-ranked INT results boosts AUC iP/R by 7.84%. t-test results show that our INT/IPT system with re-ranking outperforms the system without re-ranking by a statistically significant margin. In this paper, we present a new re-ranking algorithm that considers co-occurrence among identifiers in an article to improve INT and IPT ranking results. Combining the re-ranked INT results with an unsupervised approach to finding associations among interactors, the proposed method can boost IPT performance. We also implement score computation using dynamic programming, which is faster and more efficient than traditional approaches.
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
The functional specifications, functional design and flow, and the program logic of the GREEDY computer program are described. The GREEDY program is a submodule of the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) program and has been designed as a continuation of the shuttle Mission Payloads (MPLS) program. The MPLS uses input payload data to form a set of feasible payload combinations; from these, GREEDY selects a subset of combinations (a traffic model) so that all payloads can be included without redundancy. The program also provides the user with a tutorial option for choosing an alternate traffic model in case a particular traffic model is unacceptable.
Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms
1995-07-01
...would be in the spirit of the Lempel-Ziv algorithm. The decoder would have to be aware of changes in the dictionary, but depending on the nature of the... [Table-of-contents fragments: 3.4 A General Vector Compression Algorithm Based on Frames; 3.4.1 Design Considerations] ...in §3.3. Along with exploring general properties of matching pursuit, we are interested in its application to compressing data vectors in RN. A general...
NASA Astrophysics Data System (ADS)
Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar
2018-07-01
This study discusses solution methodologies for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as the distribution of new products. This supply chain is presented as representative of the class of problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC with three objective functions: maximizing the profit and minimizing the total risk and the shortages of products. Since three objective functions are considered, a multi-objective solution methodology is advantageous. Therefore, several approaches have been studied: an NSGA-II algorithm is first utilized, and the results are then validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, the number of Pareto solutions, and CPU time. To enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed and the computational results are presented based on the sensitivity analyses in parameter tuning.
Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.
2017-06-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.
40 CFR 52.379 - Control strategy: PM2.5.
Code of Federal Regulations, 2013 CFR
2013-07-01
... to CAA sections 110(a)(2)(A), (C) only as it related to the PSD program, (D)(ii), (E)(ii), and (J) only as it relates to the PSD program. This conditional approval is contingent upon Connecticut taking... related to the PSD program, (D)(ii), (E)(ii), and (J) only as it relates to the PSD program. This...
40 CFR 52.379 - Control strategy: PM2.5.
Code of Federal Regulations, 2014 CFR
2014-07-01
... to CAA sections 110(a)(2)(A), (C) only as it related to the PSD program, (D)(ii), (E)(ii), and (J) only as it relates to the PSD program. This conditional approval is contingent upon Connecticut taking... related to the PSD program, (D)(ii), (E)(ii), and (J) only as it relates to the PSD program. This...
Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida
By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...
MOLA II Laser Transmitter Calibration and Performance. 1.2
NASA Technical Reports Server (NTRS)
Afzal, Robert S.; Smith, David E. (Technical Monitor)
1997-01-01
The goal of the document is to explain the algorithm for determining the laser output energy from the telemetry data within the return packets from MOLA II. A simple algorithm is developed to convert the raw start detector data into laser energy, measured in millijoules. This conversion depends on three variables: start detector counts, array heat sink temperature, and start detector temperature. All these values are contained within the return packets. The conversion is applied to the GSFC thermal vacuum data as well as the in-space data to date and shows good correlation.
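The conversion has the shape of a counts-times-gain formula with temperature-dependent corrections. The sketch below illustrates that shape only; every coefficient is a hypothetical placeholder, not a MOLA II calibration value:

    # Hedged sketch of a counts-to-energy conversion with temperature terms.
    # k0, k_hs, k_det are invented placeholders, not instrument calibration data.
    def laser_energy_mj(counts, heatsink_temp_c, detector_temp_c,
                        k0=0.045, k_hs=0.0012, k_det=0.0008):
        gain = k0 * (1 + k_hs * heatsink_temp_c) * (1 + k_det * detector_temp_c)
        return gain * counts

    print(round(laser_energy_mj(1000, 20.0, 15.0), 2))  # ~46 mJ with these placeholders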
RACER: Effective Race Detection Using AspectJ
NASA Technical Reports Server (NTRS)
Bodden, Eric; Havelund, Klaus
2008-01-01
Programming errors occur frequently in large software systems, and even more so if these systems are concurrent. In the past, researchers have developed specialized programs to aid programmers in detecting concurrent programming errors such as deadlocks, livelocks, starvation, and data races. In this work we propose a language extension to the aspect-oriented programming language AspectJ, in the form of three new built-in pointcuts, lock(), unlock() and maybeShared(), which allow programmers to monitor program events where locks are granted or handed back, and where values are accessed that may be shared among multiple Java threads. We decide thread-locality using a static thread-local-objects analysis developed by others. Using the three new primitive pointcuts, researchers can directly implement efficient monitoring algorithms to detect concurrent programming errors online. As an example, we describe a new algorithm which we call RACER, an adaptation of the well-known ERASER algorithm to the memory model of Java. We implemented the new pointcuts as an extension to the AspectBench Compiler, implemented the RACER algorithm using this language extension, and then applied the algorithm to the NASA K9 Rover Executive. Our experiments proved our implementation very effective. In the Rover Executive, RACER finds 70 data races. Only one of these races was previously known. We further applied the algorithm to two other multi-threaded programs written by computer science researchers, in which we found races as well.
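The lockset refinement at the heart of ERASER (and thus of RACER's adaptation) can be sketched in a few lines: each shared variable's candidate lockset is intersected with the locks held at every access, and an empty set flags a potential race. A minimal Python sketch of the idea, not the RACER implementation:

    # variable -> set of locks that have protected every access so far
    candidate = {}

    def on_access(var, locks_held):
        if var not in candidate:
            candidate[var] = set(locks_held)      # first access initializes the lockset
        else:
            candidate[var] &= set(locks_held)     # refine by intersection
        if not candidate[var]:
            print(f"potential data race on {var!r}")

    on_access("counter", {"L1", "L2"})
    on_access("counter", {"L2"})        # still consistently protected by L2
    on_access("counter", {"L1"})        # intersection empty -> warning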
A scalable parallel algorithm for multiple objective linear programs
NASA Technical Reports Server (NTRS)
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.
Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments
ERIC Educational Resources Information Center
Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán
2015-01-01
The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
Determination of the Underlying Task Scheduling Algorithm for an Ada Runtime System
1989-12-01
...was also curious as to how well I could model the test cases with Ada programs. In particular, I wanted to see whether I could model the equal arrival... parameter relationships required to detect the execution of individual algorithms. These test cases were modeled using Ada programs. Then, the... results were analyzed to determine whether the Ada programs were capable of revealing the task scheduling algorithm used by the Ada run-time system. This...
G STL: the geostatistical template library in C++
NASA Astrophysics Data System (ADS)
Remy, Nicolas; Shtuka, Arben; Levy, Bruno; Caers, Jef
2002-10-01
The development of geostatistics has been mostly accomplished by application-oriented engineers in the past 20 years. The focus on concrete applications gave birth to many algorithms and computer programs designed to address different issues, such as estimating or simulating a variable while possibly accounting for secondary information such as seismic data, or integrating geological and geometrical data. At the core of any geostatistical data integration methodology is a well-designed algorithm. Yet, despite their obvious differences, all these algorithms share many commonalities on which to build a geostatistics programming library; otherwise the resulting library would be poorly reusable and difficult to expand. Building on this observation, we design a comprehensive, yet flexible and easily reusable library of geostatistics algorithms in C++. The recent advent of the generic programming paradigm allows us to express the commonalities of the geostatistical algorithms elegantly in computer code. Generic programming, also referred to as "programming with concepts", provides a high level of abstraction without loss of efficiency. This last point is a major gain over object-oriented programming, which often trades efficiency for abstraction. It is not enough for a numerical library to be reusable; it also has to be fast. Because generic programming is "programming with concepts", the essential step in the library design is the careful identification and thorough definition of the concepts shared by most geostatistical algorithms. Building on these definitions, a generic and expandable code can be developed. To show the advantages of such a generic library, we use GsTL to build two sequential simulation programs working on two different types of grids—a surface with faults and an unstructured grid—without requiring any change to the GsTL code.
Onboard spectral imager data processor
NASA Astrophysics Data System (ADS)
Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.
1999-10-01
Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.
NASA Astrophysics Data System (ADS)
Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian
2018-05-01
Offshore wind farms are an emerging source of renewable energy that has been shown to have tremendous potential in recent years. In this blooming area, a key challenge is that the preventive maintenance of offshore turbines should be scheduled reasonably to satisfy the power supply without failure. In this direction, two significant goals should be considered simultaneously as a trade-off: one is to maximise the system reliability and the other is to minimise the maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, comprising two newly defined objectives and thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. To solve the model effectively, the non-dominated sorting genetic algorithm II (NSGA-II), designed especially for multi-objective optimisation, is utilised, and Pareto-optimal schedules are obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm, and to explore the relationship between the two objectives with the help of a contrast model.
Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery
Kim, Daeun; Haldar, Justin P.
2016-01-01
This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
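A sketch of the generic greedy pattern such algorithms follow, with index selection biased toward nonnegative correlations and the support-restricted fit done by nonnegative least squares; the function name and scoring rule below are illustrative assumptions, not the authors' exact method:

```python
import numpy as np
from scipy.optimize import nnls

def nn_simultaneous_greedy(A, Y, k):
    """Greedy sketch: recover nonnegative coefficient vectors X
    (columns of Y ~= A @ X) sharing one support of size k.
    At each step the atom with the largest summed positive
    correlation across all measurement vectors is added."""
    m, n = A.shape
    support, R = [], Y.copy()
    for _ in range(k):
        scores = np.maximum(A.T @ R, 0.0).sum(axis=1)
        scores[support] = -np.inf            # do not reselect an atom
        support.append(int(np.argmax(scores)))
        X = np.zeros((n, Y.shape[1]))
        for j in range(Y.shape[1]):          # nonnegative LS per column
            X[np.array(support), j], _ = nnls(A[:, support], Y[:, j])
        R = Y - A @ X                        # residual for next pass
    return X, sorted(support)
```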
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
40 CFR 147.1600 - State-administered program-Class II wells.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) WATER PROGRAMS (CONTINUED) STATE, TRIBAL, AND EPA-ADMINISTERED UNDERGROUND INJECTION CONTROL PROGRAMS New Mexico § 147.1600 State-administered program—Class II wells. The UIC program for Class II wells in the State of New Mexico, except for those on Indian lands, is the program administered by the New...
20 CFR 628.530 - Referrals of participants to non-title II programs.
Code of Federal Regulations, 2010 CFR
2010-04-01
... programs. 628.530 Section 628.530 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR PROGRAMS UNDER TITLE II OF THE JOB TRAINING PARTNERSHIP ACT Program Design Requirements for Programs Under Title II of the Job Training Partnership Act § 628.530 Referrals of participants to non...
Recordon-Pinson, Patricia; Soulié, Cathia; Flandre, Philippe; Descamps, Diane; Lazrek, Mouna; Charpentier, Charlotte; Montes, Brigitte; Trabaud, Mary-Anne; Cottalorda, Jacqueline; Schneider, Véronique; Morand-Joubert, Laurence; Tamalet, Catherine; Desbois, Delphine; Macé, Muriel; Ferré, Virginie; Vabret, Astrid; Ruffault, Annick; Pallier, Coralie; Raymond, Stéphanie; Izopet, Jacques; Reynes, Jacques; Marcelin, Anne-Geneviève; Masquelier, Bernard
2010-08-01
Genotypic algorithms for prediction of HIV-1 coreceptor usage need to be evaluated in a clinical setting. We aimed to study (i) the correlation of genotypic prediction of coreceptor use with a phenotypic assay and (ii) the relationship between genotypic prediction of coreceptor use at baseline and the virological response (VR) to a therapy including maraviroc (MVC). Antiretroviral-experienced patients were included in the MVC Expanded Access Program if they had an R5 screening result with Trofile (Monogram Biosciences). V3 loop sequences were determined at screening, and coreceptor use was predicted using 13 genotypic algorithms or combinations of algorithms. Genotypic predictions were compared to Trofile; dual or mixed (D/M) variants were considered X4 variants. Both genotypic and phenotypic results were obtained for 189 patients at screening, with 54 isolates scored as X4 or D/M and 135 scored as R5 with Trofile. The highest sensitivity (59.3%) for detection of X4 was obtained with the Geno2pheno algorithm with a false-positive rate set at 10% (Geno2pheno10). In the 112 patients receiving MVC, a plasma viral RNA load of <50 copies/ml was obtained in 68% of cases at month 6. In multivariate analysis, prediction of the X4 genotype at baseline with the Geno2pheno10 algorithm including baseline viral load and CD4 nadir was independently associated with a worse VR at months 1 and 3. The baseline weighted genotypic sensitivity score was associated with VR at month 6. There were strong arguments in favor of using genotypic coreceptor use assays for determining which patients would respond to CCR5 antagonists.
A space-efficient algorithm for local similarities.
Huang, X Q; Hardison, R C; Miller, W
1990-10-01
Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
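A short sketch of the linear-space idea behind the scoring pass: the Smith-Waterman recurrence needs only the previous row, so the best local-similarity score takes space proportional to one sequence (recovering the alignment itself, as the paper's method does, needs an extra divide-and-conquer step not shown here):

```python
def local_similarity_score(a, b, match=1, mismatch=-1, gap=-2):
    """Best local-alignment (Smith-Waterman) score in space
    proportional to len(b): only the previous DP row is kept."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            s = match if ca == cb else mismatch
            curr.append(max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap))
            best = max(best, curr[j])
        prev = curr
    return best

print(local_similarity_score("ACGTTG", "CGTT"))  # -> 4 (exact CGTT match)
```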
Distributed query plan generation using multiobjective genetic algorithm.
Panicker, Shina; Kumar, T V Vijay
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being to minimize total LPC and to minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
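For illustration, a minimal Python version of the non-dominated sorting step at the heart of NSGA-II, applied to (LPC, CC) objective pairs; this is the textbook ranking procedure, not the paper's full DQPG implementation:

```python
def non_dominated_sort(points):
    """Core NSGA-II step: partition objective vectors (minimization)
    into successive Pareto fronts."""
    fronts, dominated, count = [[]], {}, {}
    n = len(points)
    dominates = lambda p, q: all(x <= y for x, y in zip(p, q)) and p != q
    for i in range(n):
        dominated[i], count[i] = [], 0
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated[i].append(j)       # i dominates j
            elif dominates(points[j], points[i]):
                count[i] += 1                # i is dominated once more
        if count[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Each point is (total LPC, total CC) for one candidate query plan.
print(non_dominated_sort([(1, 5), (2, 2), (3, 3), (5, 1)]))  # [[0, 1, 3], [2]]
```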
40 CFR 52.1019 - Identification of plan-conditional approval.
Code of Federal Regulations, 2013 CFR
2013-07-01
...), (C) only as it relates to the PSD program, (D)(i)(II) only as it relates to the PSD program, (D)(ii), (E)(ii), and (J) only as it relates to the PSD program. This conditional approval is contingent upon... conditionally approved for CAA elements 110(a)(2)(A), (C) only as it relates to the PSD program, (D)(i)(II) only...
40 CFR 52.1019 - Identification of plan-conditional approval.
Code of Federal Regulations, 2014 CFR
2014-07-01
...), (C) only as it relates to the PSD program, (D)(i)(II) only as it relates to the PSD program, (D)(ii), (E)(ii), and (J) only as it relates to the PSD program. This conditional approval is contingent upon... conditionally approved for CAA elements 110(a)(2)(A), (C) only as it relates to the PSD program, (D)(i)(II) only...
The Icarus challenge - Predicting vulnerability to climate change using an algorithm-based species' trait approach
Lee, Henry, II; Folger, Christina; Reusser, Deborah A.; Clinton, Patrick; Graham, Rene
U.S. EPA, Western Ecology Division, Newport, OR USA. E-mail: lee.henry@ep...
By integrating a Genetic Algorithm with MODFLOW2005, an optimization tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system are fitted automatically by using the numerical model c...
A generalized global alignment algorithm.
Huang, Xiaoqiu; Chao, Kun-Mao
2003-01-22
Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
NPLOT: an Interactive Plotting Program for NASTRAN Finite Element Models
NASA Technical Reports Server (NTRS)
Jones, G. K.; Mcentire, K. J.
1985-01-01
The NPLOT (NASTRAN Plot) is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models. Developed at NASA's Goddard Space Flight Center, the program provides flexible element selection and grid point, ASET and SPC degree of freedom labelling. It is easy to use and provides a combination menu and command driven user interface. NPLOT also provides very fast hidden line and haloed line algorithms. The hidden line algorithm in NPLOT proved to be both very accurate and several times faster than other existing hidden line algorithms. A fast spatial bucket sort and horizon edge computation are used to achieve this high level of performance. The hidden line and the haloed line algorithms are the primary features that make NPLOT different from other plotting programs.
On program restructuring, scheduling, and communication for parallel processor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polychronopoulos, Constantine D.
1986-08-01
This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler was used to transform programs in a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, thesemore » algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.« less
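As a concrete illustration of one restructuring technique named above, a sketch of loop coalescing (the dissertation's setting is Fortran via Parafrase; this Python stand-in only shows the index transformation):

```python
def coalesced(n, m, body):
    """Loop coalescing: the doubly nested iteration space
    (i in range(n), j in range(m)) is flattened into a single loop
    over k = i*m + j, giving a scheduler one flat pool of n*m
    iterations to hand out to processors."""
    for k in range(n * m):
        i, j = divmod(k, m)   # recover the original indices
        body(i, j)

# Equivalent to: for i in range(2): for j in range(3): print(i, j)
coalesced(2, 3, lambda i, j: print(i, j))
```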
Trident II (D-5) Sea Launched Ballistic Missile UGM 133A (Trident II Missile)
2015-12-01
Selected Acquisition Report (SAR) RCS: DD-A&T(Q&A)823-178, Trident II (D-5) Sea-Launched Ballistic Missile UGM 133A (Trident II Missile), December 2015 SAR, March 17, 2016. (Only report front matter, the table of contents and the acronym list for MDAP programs, survives in this extract.)
Angular Superresolution for a Scanning Antenna with Simulated Complex Scatterer-Type Targets
2002-05-01
Approved for public release; distribution unlimited. The Scan-MUSIC (MUltiple SIgnal Classification), or SMUSIC, algorithm was developed by the Millimeter...with the use of a single rotatable sensor scanning in an angular region of interest. This algorithm has been adapted and extended from the MUSIC ...simulation. (The remainder of the extract is table-of-contents residue; recoverable section titles include "Extension of the MUSIC Algorithm for Scanning Antenna" and "Subvector Averaging Method".)
Fast Fourier Transform algorithm design and tradeoffs
NASA Technical Reports Server (NTRS)
Kamin, Ray A., III; Adams, George B., III
1988-01-01
The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs are measured. Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.
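For reference, the serial radix-2 Cooley-Tukey recursion whose butterfly structure such SIMD designs distribute across processors; this toy Python version illustrates the algorithm, not the CM-2 implementation:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of
    two): split into even/odd halves, transform each, then combine
    with twiddle factors in the butterfly step."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

print([round(abs(v), 6) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])
```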
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1986-01-01
A hypothetical turbofan engine simplified simulation with a multivariable control and sensor failure detection, isolation, and accommodation logic (HYTESS II) is presented. The digital program, written in FORTRAN, is self-contained, efficient, realistic and easily used. Simulated engine dynamics were developed from linearized operating point models. However, essential nonlinear effects are retained. The simulation is representative of the hypothetical, low bypass ratio turbofan engine with an advanced control and failure detection logic. Included is a description of the engine dynamics, the control algorithm, and the sensor failure detection logic. Details of the simulation including block diagrams, variable descriptions, common block definitions, subroutine descriptions, and input requirements are given. Example simulation results are also presented.
1986-09-01
receive much benefit. 2. The MAMP program prioritization algorithm is the responsibility of TRADOC. This study analyzed the perceived deficiencies... (The remainder of this extract is garbled listing output; the only recoverable fragment prints a deficiency-type label such as "Related", "Non-Materiel", or "Health Service" according to a deficiency-type code.)
NASA Astrophysics Data System (ADS)
Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.
2018-07-01
Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the Time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte-Carlo techniques was developed for the analysis of the activation results that cover a wide range of neutron energies. It was implemented into a graphical user interface program, called GanUnfold. The experimental neutron spectra are compared to Monte-Carlo simulations performed using the PHITS and FLUKA codes.
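A simplified numpy sketch of a SAND-II-style multiplicative unfolding update (a common textbook form; the actual GanUnfold program combines the iteration with Monte-Carlo techniques and differs in detail):

```python
import numpy as np

def sand_ii_unfold(sigma, activities, phi0, iters=50):
    """Simplified SAND-II-style iterative unfolding sketch.
    sigma[i, j]: response of reaction i in energy bin j,
    activities[i]: measured activities, phi0[j]: guess spectrum.
    Each pass scales every bin by a weighted geometric mean of the
    measured/calculated activity ratios."""
    phi = phi0.astype(float).copy()
    for _ in range(iters):
        calc = sigma @ phi                   # predicted activities
        ratio = np.log(activities / calc)    # log ratio per reaction
        w = sigma * phi                      # contribution weights
        corr = (w * ratio[:, None]).sum(0) / w.sum(0)
        phi *= np.exp(corr)                  # multiplicative update
    return phi
```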
A Partitioning and Bounded Variable Algorithm for Linear Programming
ERIC Educational Resources Information Center
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
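For context, a bounded-variable LP of the kind the PBVA targets, solved here with scipy's off-the-shelf HiGHS solver rather than the PBVA itself; the data are illustrative:

```python
from scipy.optimize import linprog

# Bounded-variable LP: minimize c.x subject to A_ub @ x <= b_ub and
# simple bounds on each variable -- the problem class the PBVA handles
# with its modified simplex / dual simplex phases.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1]], b_ub=[4],
              bounds=[(0, 3), (0, 2)])
print(res.x, res.fun)   # optimal vertex [3, 1] with objective -11
```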
User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.
1988-01-01
Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.
Algorithm Building and Learning Programming Languages Using a New Educational Paradigm
NASA Astrophysics Data System (ADS)
Jain, Anshul K.; Singhal, Manik; Gupta, Manu Sheel
2011-08-01
This research paper presents a new concept of using a single tool to associate syntax of various programming languages, algorithms and basic coding techniques. A simple framework has been programmed in Python that helps students learn skills to develop algorithms, and implement them in various programming languages. The tool provides an innovative and a unified graphical user interface for development of multimedia objects, educational games and applications. It also aids collaborative learning amongst students and teachers through an integrated mechanism based on Remote Procedure Calls. The paper also elucidates an innovative method for code generation to enable students to learn the basics of programming languages using drag-n-drop methods for image objects.
NASA Astrophysics Data System (ADS)
Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.
2014-07-01
The modeling and optimization of a Pulse Tube Refrigerator is a complicated task, due to the complexity of its geometry and underlying physics. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator for an Inertance-Type Pulse Tube Refrigerator (ITPTR) by using Response Surface Methodology (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix, with four factors and two levels. The diameter and length of the pulse tube and regenerator are chosen as the design variables, where the rest of the dimensions and operating conditions of the ITPTR are constant. The required output responses are the cold head temperature (Tcold) and compressor input power (Wcomp). Computational fluid dynamics (CFD) has been used to model and solve the ITPTR. The CFD results agreed well with those of a previously published paper. Also, using the results from the 1-D simulation, RSM is conducted to analyse the effect of the independent variables on the responses. To check the accuracy of the model, the analysis of variance (ANOVA) method has been used. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II has been performed to optimize the responses.
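A minimal sketch of the RSM step, fitting a two-factor second-order response surface by least squares before handing the model to NSGA-II; the model form is standard, while the function name and any data fed to it are illustrative:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of a two-factor second-order response surface
    y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2,
    the model form RSM builds from a designed experiment before the
    optimizer searches it."""
    x1, x2 = X[:, 0], X[:, 1]
    D = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta   # surface coefficients b0..b22
```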
Development and validation of an online interactive, multimedia wound care algorithms program.
Beitz, Janice M; van Rijswijk, Lia
2012-01-01
To provide education based on evidence-based and validated wound care algorithms, we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed-methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, an interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage of correct, partially correct, or incorrect algorithm and dressing choices, and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent of algorithm choices and 87% of dressing choices were fully correct, even though some programming design issues were identified. Online study results were consistently better than results of a previously conducted, comparable paper-and-pencil study. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly, the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received, indicating its perceived benefits for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide patient care.
Programming Deep Brain Stimulation for Tremor and Dystonia: The Toronto Western Hospital Algorithms.
Picillo, Marina; Lozano, Andres M; Kou, Nancy; Munhoz, Renato Puppi; Fasano, Alfonso
2016-01-01
Deep brain stimulation (DBS) is an effective treatment for essential tremor (ET) and dystonia. After surgery, a number of extensive programming sessions are performed, relying mainly on the neurologist's personal experience, as no programming guidelines have been provided so far beyond recommendations from groups of experts. In addition, less information is available for the management of DBS in ET and dystonia than for Parkinson's disease. Our aim is to review the literature on initial and follow-up DBS programming procedures for ET and dystonia and to integrate the results with our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We conducted a literature search of PubMed from inception to July 2014 with the keywords "balance", "bradykinesia", "deep brain stimulation", "dysarthria", "dystonia", "gait disturbances", "initial programming", "loss of benefit", "micrographia", "speech", "speech difficulties" and "tremor". Seventy-six papers were considered for this review. Based on the literature review and our experience at TWH, we refined three algorithms for management of ET: (1) initial programming, (2) management of balance and speech issues, and (3) loss of stimulation benefit. We also depicted two algorithms for the management of dystonia: (1) initial programming and (2) management of stimulation-induced hypokinesia (shuffling gait, micrographia and speech impairment). We propose five algorithms tailored to an individualized approach to managing ET and dystonia patients with DBS. We encourage established as well as new DBS centers to apply these algorithms as a supplement to current standards of care and to test their clinical usefulness. Copyright © 2016 Elsevier Inc. All rights reserved.
User's manual for the BNW-II optimization code for dry/wet-cooled power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Bamberger, J.A.; Braun, D.J.
1978-05-01
This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.
Competitive evaluation of failure detection algorithms for strapdown redundant inertial instruments
NASA Technical Reports Server (NTRS)
Wilcox, J. C.
1973-01-01
Algorithms for failure detection, isolation, and correction of redundant inertial instruments in the strapdown dodecahedron configuration are competitively evaluated in a digital computer simulation that subjects them to identical environments. Their performance is compared in terms of orientation and inertial velocity errors and in terms of missed and false alarms. The algorithms appear in the simulation program in modular form, so that they may be readily extracted for use elsewhere. The simulation program and its inputs and outputs are described. The algorithms, along with an eighth algorithm that was not simulated, are also compared analytically to show the relationships among them.
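A generic parity-equation sketch of the redundancy check such failure-detection algorithms build on; the threshold and isolation rule here are illustrative assumptions, not one of the simulated algorithms:

```python
import numpy as np
from scipy.linalg import null_space

def parity_check(H, m, threshold=0.5):
    """Parity-equation sketch for redundant inertial instruments:
    rows of V span the left null space of the geometry matrix H
    (n sensors x 3 axes), so p = V @ m is zero for consistent
    measurements. A large residual flags a failure; the suspect
    instrument is the one whose parity signature best matches p."""
    V = null_space(H.T).T               # parity matrix, V @ H == 0
    p = V @ m                           # parity residual
    if np.linalg.norm(p) < threshold:
        return None                     # no failure detected
    scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
    return int(np.argmax(scores))       # index of suspect instrument
```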
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
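A small illustration of the B-spline command parameterization: in the guidance algorithm the spline coefficients are the NLP decision variables, while the knots and values below are made up:

```python
import numpy as np
from scipy.interpolate import BSpline

# A steering-command profile parameterized by B-spline coefficients.
# Adjusting c reshapes the whole command profile, which is what lets
# the NLP algorithm tune performance under load constraints.
k = 3                                                  # cubic spline
t = np.array([0, 0, 0, 0, 30, 60, 90, 120, 120, 120, 120], float)  # knots
c = np.array([0.0, 2.0, 5.0, 4.0, 3.0, 1.0, 0.0])     # the NLP variables
pitch_cmd = BSpline(t, c, k)                           # command vs. time
print(pitch_cmd(45.0))                                 # command at t = 45 s
```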
Predicting the survival of diabetes using neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Data mining techniques are now used to predict diseases in the health care industry, and the neural network is among the prevailing data mining methods in this intelligent field. This paper presents a study on predicting the survival of diabetes patients using different supervised learning algorithms for neural networks. Three learning algorithms are considered: (i) the Levenberg-Marquardt learning algorithm, (ii) the Bayesian regularization learning algorithm, and (iii) the scaled conjugate gradient learning algorithm. The network is trained on the Pima Indian Diabetes Dataset with the help of MATLAB R2014(a) software. The performance of each algorithm is further discussed through regression analysis. The prediction accuracy of the best algorithm is then computed to validate its predictions.
Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code
ERIC Educational Resources Information Center
Taherkhani, Ahmad; Malmi, Lauri
2013-01-01
In this paper, we present a method for recognizing algorithms from students programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
Flynn, P M; McCann, J T; Fairbank, J A
1995-05-01
Substance abuse treatment clients often present other severe mental health problems that affect treatment outcomes. Hence, screening and assessment for psychological distress and personality disorder are an important part of effective treatment, discharge, and aftercare planning. The Millon Clinical Multiaxial Inventory-II (MCMI-II) frequently is used for this purpose. In this paper, several issues of concern to MCMI-II users are addressed. These include the extent to which MCMI-II scales correspond to DSM-III-R disorders; overdiagnosis of disorders using the MCMI-II; accuracy of MCMI-II diagnostic cut-off scores; and the clinical utility of MCMI-II diagnostic algorithms. Approaches to addressing these issues are offered.
Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1989-05-01
A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system, and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol and stored locally on magnetic disk. The use of high-resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.
NASA Astrophysics Data System (ADS)
Chen, R. J.; Wang, M.; Yan, X. L.; Yang, Q.; Lam, Y. H.; Yang, L.; Zhang, Y. H.
2017-12-01
The periodic signals tracking algorithm has been used to determine the revolution times of ions stored in storage rings in isochronous mass spectrometry (IMS) experiments. Performing real-time data analysis with the periodic signals tracking algorithm in IMS experiments has been a challenge. In this paper, a parallelization scheme for the periodic signals tracking algorithm is introduced and a new program is developed. The computing time of data analysis is reduced by factors of ∼71 and ∼346 using the new program on Tesla C1060 and Tesla K20c GPUs, respectively, compared to the old program on a Xeon E5540 CPU. We succeeded in performing real-time data analysis for the IMS experiments by using the new program on the Tesla K20c GPU.
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.
2014-01-01
The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.
NASA Technical Reports Server (NTRS)
Nguyen, Tien Manh
1989-01-01
MT's algorithm was developed as an aid in the design of space telecommunications systems utilizing simultaneous range/command/telemetry operations. This algorithm provides selection of modulation indices for: (1) suppression of undesired signals to achieve desired link performance margins and/or to allow for a specified performance degradation in the data channel (command/telemetry) due to the presence of undesired signals (interferers); and (2) optimum power division between the carrier, the range, and the data channel. A software program using this algorithm was developed for use with MathCAD software. This software program, called the MT program, computes optimum modulation indices for all cases recommended by the Consultative Committee for Space Data Systems (CCSDS), with emphasis on the squarewave NASA/JPL ranging system.
2013-01-02
intensity data from the SNP array were normalized using the Affymetrix GeneChip Targeted Genotyping Analysis Software (GTGS). To assess robustness of SNP...calls, genotypes were called using three algorithms: (i) GTGS, (ii) Illuminus (27), and (iii) a heuristic algorithm based on discrete cutoffs of
USDA-ARS?s Scientific Manuscript database
Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum-time accelerations of aircraft turbofan engines are presented. These accelerations were calculated using a piecewise linear engine model and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.
Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard
2012-06-07
We consider several patchy particle models that have been proposed in the literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.
Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.
Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou
2016-01-01
For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) and its improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and a function-consistency replacement algorithm is given to solve integration of the local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.
National dosimetric audit network finds discrepancies in AAA lung inhomogeneity corrections.
Dunn, Leon; Lehmann, Joerg; Lye, Jessica; Kenny, John; Kron, Tomas; Alves, Andrew; Cole, Andrew; Zifodya, Jackson; Williams, Ivan
2015-07-01
This work presents the Australian Clinical Dosimetry Service's (ACDS) findings of an investigation of systematic discrepancies between treatment planning system (TPS) calculated and measured audit doses. Specifically, a comparison between the Anisotropic Analytic Algorithm (AAA) and other common dose-calculation algorithms in regions downstream (≥2cm) from low-density material in anthropomorphic and slab phantom geometries is presented. Two measurement setups involving rectilinear slab-phantoms (ACDS Level II audit) and anthropomorphic geometries (ACDS Level III audit) were used in conjunction with ion chamber (planar 2D array and Farmer-type) measurements. Measured doses were compared to calculated doses for a variety of cases, with and without the presence of inhomogeneities and beam-modifiers in 71 audits. Results demonstrate a systematic AAA underdose with an average discrepancy of 2.9 ± 1.2% when the AAA algorithm is implemented in regions distal from lung-tissue interfaces, when lateral beams are used with anthropomorphic phantoms. This systemic discrepancy was found for all Level III audits of facilities using the AAA algorithm. This discrepancy is not seen when identical measurements are compared for other common dose-calculation algorithms (average discrepancy -0.4 ± 1.7%), including the Acuros XB algorithm also available with the Eclipse TPS. For slab phantom geometries (Level II audits), with similar measurement points downstream from inhomogeneities this discrepancy is also not seen. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Program Activity/Training Plans. STIP II (Skill Training Improvement Programs Round II).
ERIC Educational Resources Information Center
Los Angeles Community Coll. District, CA.
Detailed operational guidelines, training objectives, and learning activities are provided for the Los Angeles Community College District's Skill Training Improvement Programs (STIP II), which are designed to train students for immediate employment. The first of four reports covers Los Angeles Southwest College's computer programming trainee…
ASTEP user's guide and software documentation
NASA Technical Reports Server (NTRS)
Gliniewicz, A. S.; Lachowski, H. M.; Pace, W. H., Jr.; Salvato, P., Jr.
1974-01-01
The Algorithm Simulation Test and Evaluation Program (ASTEP) is a modular computer program developed for the purpose of testing and evaluating methods of processing remotely sensed multispectral scanner earth resources data. ASTEP is written in FORTRAN V on the UNIVAC 1110 under the EXEC 8 operating system and may be operated in either a batch or interactive mode. The program currently contains over one hundred subroutines consisting of data classification and display algorithms, statistical analysis algorithms, utility support routines, and feature selection capability. The current program can accept data in LARSC1, LARSC2, ERTS, and Universal formats, and can output processed image or data tapes in Universal format.
Moore, J H
1995-06-01
A genetic algorithm for instrumentation control and optimization was developed using the LabVIEW graphical programming environment. The usefulness of this methodology for the optimization of a closed loop control instrument is demonstrated with minimal complexity and the programming is presented in detail to facilitate its adaptation to other LabVIEW applications. Closed loop control instruments have variety of applications in the biomedical sciences including the regulation of physiological processes such as blood pressure. The program presented here should provide a useful starting point for those wishing to incorporate genetic algorithm approaches to LabVIEW mediated optimization of closed loop control instruments.
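A minimal real-coded genetic algorithm in Python showing the loop such an optimization implements (selection, crossover, mutation); operators and parameters are illustrative, and in practice `fitness` would wrap the closed-loop instrument response rather than a test function:

```python
import random

def genetic_optimize(fitness, n_params, pop=30, gens=60,
                     mut=0.1, lo=-1.0, hi=1.0, seed=1):
    """Minimal real-coded GA sketch: elitism, uniform crossover over
    the better half of the population, and Gaussian mutation."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness)          # ascending: minimize
        P = scored[:2]                           # elitism: keep two best
        while len(P) < pop:
            a, b = rng.sample(scored[:pop // 2], 2)   # mate the better half
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < mut:               # occasional mutation
                i = rng.randrange(n_params)
                child[i] += rng.gauss(0, 0.1 * (hi - lo))
            P.append(child)
    return min(P, key=fitness)

# Toy use: find parameters minimizing a quadratic "error" surface.
print(genetic_optimize(lambda p: sum(x * x for x in p), n_params=2))
```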
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
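For reference, a numpy sketch of the evaluated scheme, conjugate gradient with a diagonal (Jacobi) preconditioner; the stopping test below uses the residual norm rather than the paper's relative left/right-hand-side criterion:

```python
import numpy as np

def pcg_diagonal(A, b, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient with a diagonal (Jacobi)
    preconditioner; A must be symmetric positive definite."""
    Minv = 1.0 / np.diag(A)              # diagonal preconditioner
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                        # initial residual
    z = Minv * r                         # preconditioned residual
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```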
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
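A sketch of the adaptive idea: grow the averaging window with migration time, since slower analytes give broader, lower-frequency peaks. The linear window-scaling law and parameter names below are illustrative; the published method derives the window from the migration velocity:

```python
import numpy as np

def adaptive_moving_average(signal, times, base_window, t_ref):
    """Migration-time-adaptive moving average: the window applied at
    each data point scales with that point's migration time, so fast
    (sharp) peaks are smoothed lightly and slow (broad) peaks heavily."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for i, t in enumerate(times):
        half = max(1, int(base_window * t / t_ref) // 2)
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out
```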
An intelligent allocation algorithm for parallel processing
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.
1988-01-01
The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimates of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication durations varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structure on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph. Hence, the focus of allocation has been on varying token-communication durations rather than on varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon granularity and the characteristics of the interprocessor communication network.
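A small sketch of the critical-path quantity such allocation heuristics rank nodes by, computed by memoized recursion over a program DAG; graph and durations are toy values:

```python
def critical_path_lengths(durations, succ):
    """Longest path from each node to a sink of a program DAG, the
    critical-path priority used when assigning nodes to processors.
    durations: node -> execution time; succ: node -> successor tuple."""
    import functools

    @functools.lru_cache(maxsize=None)
    def level(v):
        return durations[v] + max((level(s) for s in succ[v]), default=0)

    return {v: level(v) for v in durations}

# Diamond-shaped graph: 0 -> {1, 2} -> 3; node 2 is the longer branch.
durations = {0: 2, 1: 3, 2: 5, 3: 1}
succ = {0: (1, 2), 1: (3,), 2: (3,), 3: ()}
print(critical_path_lengths(durations, succ))  # {0: 8, 1: 4, 2: 6, 3: 1}
```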
Classification of case-II waters using hyperspectral (HICO) data over North Indian Ocean
NASA Astrophysics Data System (ADS)
Srinivasa Rao, N.; Ramarao, E. P.; Srinivas, K.; Deka, P. C.
2016-05-01
State-of-the-art ocean color algorithms are proven for retrieving ocean constituents (chlorophyll-a, CDOM and suspended sediments) in case-I waters. However, these algorithms do not perform well in case-II waters because of their optical complexity. Hyperspectral data are found to be promising for classifying case-II waters. The aim of this study is to propose spectral bands for future ocean color sensors to classify case-II waters. The study has been performed with Rrs spectra from HICO at the estuaries of the Indus and GBM rivers in the North Indian Ocean. Appropriate field samples are not available to validate and propose empirical models to retrieve concentrations, and the HICO sensor is not currently operational, so a validation exercise cannot be planned. Aqua MODIS data at case-I and case-II waters are used as a complement to in situ measurements. Analysis of the spectral reflectance curves suggests the band ratios of Rrs 484 nm to Rrs 581 nm and of Rrs 490 nm to Rrs 426 nm to classify chlorophyll-a and CDOM, respectively. Rrs 610 nm gives the best scope for suspended sediment retrieval. The work suggests the need for ocean color sensors with central wavelengths of 426, 484, 490, 581 and 610 nm to estimate the concentrations of Chl-a, suspended sediments and CDOM in case-II waters.
Genomes to natural products PRediction Informatics for Secondary Metabolomes (PRISM)
Skinnider, Michael A.; Dejong, Chris A.; Rees, Philip N.; Johnston, Chad W.; Li, Haoxin; Webster, Andrew L. H.; Wyatt, Morgan A.; Magarvey, Nathan A.
2015-01-01
Microbial natural products are an invaluable source of evolved bioactive small molecules and pharmaceutical agents. Next-generation and metagenomic sequencing indicates untapped genomic potential, yet high rediscovery rates of known metabolites increasingly frustrate conventional natural product screening programs. New methods to connect biosynthetic gene clusters to novel chemical scaffolds are therefore critical to enable the targeted discovery of genetically encoded natural products. Here, we present PRISM, a computational resource for the identification of biosynthetic gene clusters, prediction of genetically encoded nonribosomal peptides and type I and II polyketides, and bio- and cheminformatic dereplication of known natural products. PRISM implements novel algorithms which render it uniquely capable of predicting type II polyketides, deoxygenated sugars, and starter units, making it a comprehensive genome-guided chemical structure prediction engine. A library of 57 tailoring reactions is leveraged for combinatorial scaffold library generation when multiple potential substrates are consistent with biosynthetic logic. We compare the accuracy of PRISM to existing genomic analysis platforms. PRISM is an open-source, user-friendly web application available at http://magarveylab.ca/prism/. PMID:26442528
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
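A simplified sketch of guided dynamic programming for a separation boundary: accumulate costs down a 2-D image from a chosen start column, then backtrack from a chosen end column. The paper's scheme additionally exploits sequential CT information to pick those points and restrict the search, which is where its speedup comes from:

```python
import numpy as np

def min_cost_vertical_path(cost, start_col, end_col):
    """Cheapest top-to-bottom path through a 2-D cost image (e.g. low
    cost along the dark junction between lungs), guided by fixed
    start and end columns; each step moves at most one column."""
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    acc[0, start_col] = cost[0, start_col]       # path pinned at start
    for r in range(1, rows):
        for c in range(cols):
            prev = acc[r - 1, max(0, c - 1):min(cols, c + 2)]
            acc[r, c] = cost[r, c] + prev.min()  # best reachable parent
    path, c = [end_col], end_col                 # backtrack from end point
    for r in range(rows - 1, 0, -1):
        lo = max(0, c - 1)
        c = lo + int(np.argmin(acc[r - 1, lo:min(cols, c + 2)]))
        path.append(c)
    return path[::-1]   # boundary column for each image row
```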
Automatic Program Synthesis Reports.
ERIC Educational Resources Information Center
Biermann, A. W.; And Others
Some of the major results of future goals of an automatic program synthesis project are described in the two papers that comprise this document. The first paper gives a detailed algorithm for synthesizing a computer program from a trace of its behavior. Since the algorithm involves a search, the length of time required to do the synthesis of…
1978-12-01
Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a0 + a1*t + a2*t^2). The thinning programs are competitive in both execution
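For reference, the basic thinning (Lewis-Shedler) algorithm in Python, driven by a log-quadratic intensity of the stated form; the refinements that cut execution time by roughly one-third are not shown:

```python
import math
import random

def thinned_poisson(intensity, lam_max, t_end, seed=0):
    """Basic thinning: generate candidate arrivals from a homogeneous
    Poisson process of rate lam_max (>= intensity everywhere) and
    keep each candidate at time t with probability intensity(t)/lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)        # next candidate arrival
        if t > t_end:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)                 # accepted (thinned) event

# Log-quadratic intensity of the stated form exp(a0 + a1*t + a2*t^2);
# its maximum on [0, 10] is at t = 3, giving lam_max = exp(0.95).
rate = lambda t: math.exp(0.5 + 0.3 * t - 0.05 * t * t)
print(len(thinned_poisson(rate, lam_max=math.exp(0.95), t_end=10.0)))
```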
Reducing the Volume of NASA Earth-Science Data
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre
2010-01-01
A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
Status of the calibration and alignment framework at the Belle II experiment
NASA Astrophysics Data System (ADS)
Dossett, D.; Sevior, M.; Ritter, M.; Kuhr, T.; Bilka, T.; Yaschenko, S.;
2017-10-01
The Belle II detector at the SuperKEKB e+e- collider plans to take first collision data in 2018. The monetary and CPU time costs associated with storing and processing the data mean that it is crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system would allow the high level trigger to increase the efficiency of event selection, and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system. The first collects data from Belle II event data files and outputs much smaller files to pass to the second component, which runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, order of processing, and submission of jobs. Splitting the operation into collection and algorithm processing stages allows the framework to optionally parallelize the collection stage on a batch system.
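The collector/algorithm split can be pictured with a small Python skeleton. This is an invented illustration of the two-component pattern described above, not the actual basf2 API; all class and method names are hypothetical.

    # Hypothetical skeleton mirroring the collector/algorithm split; names are
    # illustrative, not the real basf2 C++/Python interfaces.
    class Collector:
        def collect(self, event):
            raise NotImplementedError  # accumulate compact summaries per event

    class CalibrationAlgorithm:
        def calibrate(self, collected):
            raise NotImplementedError  # turn summaries into constants for the DB

    class PedestalCollector(Collector):
        def __init__(self):
            self.values = []
        def collect(self, event):
            self.values.append(event["adc"])  # keep only what calibration needs

    class PedestalAlgorithm(CalibrationAlgorithm):
        def calibrate(self, collected):
            vals = [v for c in collected for v in c.values]
            return {"pedestal": sum(vals) / len(vals)}

    # A coordinating framework would run collectors over many input files
    # (possibly in parallel on a batch system), then merge their outputs and
    # hand them to the algorithm stage:
    collectors = [PedestalCollector(), PedestalCollector()]
    for c, events in zip(collectors, [[{"adc": 10}, {"adc": 12}], [{"adc": 11}]]):
        for e in events:
            c.collect(e)
    print(PedestalAlgorithm().calibrate(collectors))  # {'pedestal': 11.0}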
Highly Efficient Multilayer Thermoelectric Devices
NASA Technical Reports Server (NTRS)
Boufelfel, Ali
2006-01-01
Multilayer thermoelectric devices now at the prototype stage of development exhibit a combination of desirable characteristics, including high figures of merit and high performance/cost ratios. These devices are capable of producing temperature differences of the order of 50 K in operation at or near room temperature. A solvent-free batch process for mass production of these state-of-the-art thermoelectric devices has also been developed. Like prior thermoelectric devices, the present ones have commercial potential mainly by virtue of their utility as means of controlled cooling (and/or, in some cases, heating) of sensors, integrated circuits, and temperature-critical components of scientific instruments. The advantages of thermoelectric devices for such uses include no need for circulating working fluids through or within the devices, generation of little if any noise, and high reliability. The disadvantages of prior thermoelectric devices include high power consumption and relatively low coefficients of performance. The present development program was undertaken in the hope of reducing the magnitudes of the aforementioned disadvantages and, especially, obtaining higher figures of merit for operation at and near room temperature. Accomplishments of the program thus far include development of an algorithm to estimate the heat extracted by, and the maximum temperature drop produced by, a thermoelectric device; solution of the problem of exchange of heat between a thermoelectric cooler and a water-cooled copper block; retrofitting of a vacuum chamber for depositing materials by sputtering; design of masks; and fabrication of multilayer thermoelectric devices of two different designs, denoted I and II. For both the I and II designs, the thicknesses of layers are of the order of nanometers. In devices of design I, nonconsecutive semiconductor layers are electrically connected in series. Devices of design II contain superlattices comprising alternating electron-acceptor (p)-doped and electron-donor (n)-doped, nanometer-thick semiconductor layers.
Development and Testing of the VAHIRR Radar Product
NASA Technical Reports Server (NTRS)
Barrett, Joe III; Miller, Juli; Charnasky, Debbie; Gillen, Robert; Lafosse, Richard; Hoeth, Brian; Hood, Doris; McNamara, Todd
2008-01-01
Lightning Launch Commit Criteria (LLCC) and Flight Rules (FR) are used for launches and landings at government and commercial spaceports. They are designed to avoid natural and triggered lightning strikes to space vehicles, which can endanger the vehicle, payload, and general public. The previous LLCC and FR were shown to be overly restrictive, potentially leading to costly launch delays and scrubs. A radar algorithm called Volume Averaged Height Integrated Radar Reflectivity (VAHIRR), along with new LLCC and FR for anvil clouds, was developed using data collected by the Airborne Field Mill II research program. VAHIRR is calculated at every horizontal position in the coverage area of the radar and can be displayed similarly to a two-dimensional derived reflectivity product, such as composite reflectivity or echo tops. It is the arithmetic product of two quantities not currently generated by the Weather Surveillance Radar 1988 Doppler (WSR-88D): a volume average of the reflectivity measured in dBZ and the average cloud thickness based on the average echo top height and base height. This presentation will describe the VAHIRR algorithm, and then explain how the VAHIRR radar product was implemented and tested on a clone of the National Weather Service's (NWS) Open Radar Product Generator (ORPG-clone). The VAHIRR radar product was then incorporated into the Advanced Weather Interactive Processing System (AWIPS) to make it more convenient for weather forecasters to utilize. Finally, the reliability of the VAHIRR radar product was tested with real-time level II radar data from the WSR-88D NWS Melbourne radar.
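As a rough illustration of the product described above, here is a toy per-column VAHIRR computation in Python; the real product averages reflectivity over a volume around each horizontal position and uses average echo top and base heights, so the echo threshold and single-column simplification here are assumptions.

    import numpy as np

    def vahirr_column(refl_dbz, heights_km, echo_thresh=0.0):
        # Toy per-column VAHIRR: (mean reflectivity of echo bins, in dBZ)
        # times (echo top height minus echo base height, in km).
        refl = np.asarray(refl_dbz, dtype=float)
        h = np.asarray(heights_km, dtype=float)
        echo = refl > echo_thresh
        if not echo.any():
            return 0.0
        avg_refl = refl[echo].mean()                 # volume-average stand-in
        thickness = h[echo].max() - h[echo].min()    # cloud-thickness stand-in
        return avg_refl * thickness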
NASA Astrophysics Data System (ADS)
Prasad, Ramendra; Deo, Ravinesh C.; Li, Yan; Maraseni, Tek
2017-11-01
Forecasting streamflow is vital for strategically planning, utilizing and redistributing water resources. In this paper, a wavelet-hybrid artificial neural network (ANN) model integrated with an iterative input selection (IIS) algorithm (IIS-W-ANN) is evaluated for its statistical preciseness in forecasting monthly streamflow, and it is then benchmarked against the M5 Tree model. To develop the hybrid IIS-W-ANN model, a global predictor matrix is constructed for three local hydrological sites (Richmond, Gwydir, and Darling River) in Australia's agricultural (Murray-Darling) Basin. Model inputs comprise statistically significant lagged combinations of streamflow water levels, supplemented by meteorological data (i.e., precipitation, maximum and minimum temperature, mean solar radiation, vapor pressure and evaporation) as potential predictors. To establish robust forecasting models, the iterative input selection (IIS) algorithm is applied to screen the best data from the predictor matrix and is integrated with the non-decimated maximal overlap discrete wavelet transform (MODWT) applied to the IIS-selected variables. This resolves the frequencies contained in the predictor data while constructing the wavelet-hybrid (i.e., IIS-W-ANN and IIS-W-M5 Tree) models. The forecasting ability of IIS-W-ANN is evaluated via the correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe Efficiency (ENS), root-mean-square error (RMSE), and mean absolute error (MAE), including the percentage RMSE and MAE. While the ANN models are seen to outperform the M5 Tree models executed for all hydrological sites, the IIS variable selector was efficient in determining the appropriate predictors, as stipulated by the better performance of the IIS-coupled (ANN and M5 Tree) models relative to the models without IIS. When the IIS-coupled models are integrated with MODWT, the wavelet-hybrid IIS-W-ANN and IIS-W-M5 Tree are seen to attain significantly more accurate performance relative to their standalone counterparts. Importantly, IIS-W-ANN model accuracy outweighs IIS-ANN, as evidenced by a larger r and WI (by 7.5% and 3.8%, respectively) and a lower RMSE (by 21.3%). In comparison to the IIS-W-M5 Tree model, the IIS-W-ANN model yielded larger values of WI = 0.936-0.979 and ENS = 0.770-0.920. Correspondingly, the errors (RMSE and MAE) ranged from 0.162-0.487 m and 0.139-0.390 m, respectively, with relative errors RRMSE = (15.65-21.00)% and MAPE = (14.79-20.78)%. A distinct geographic signature is evident, where the most and least accurately forecasted streamflow data are attained for the Gwydir and Darling Rivers, respectively. Conclusively, this study advocates the efficacy of iterative input selection, allowing the proper screening of model predictors, and subsequently its integration with MODWT, resulting in enhanced performance of the models applied in streamflow forecasting.
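The IIS stage can be approximated by a greedy forward-selection loop, sketched below with scikit-learn; this is a stand-in for the published IIS algorithm, and the small network size, cross-validation settings, and stopping rule are illustrative assumptions.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor

    def iterative_input_selection(X, y, names, max_inputs=5):
        # Greedy forward selection: at each iteration, add the candidate
        # predictor that most improves cross-validated skill; stop when no
        # candidate helps.
        chosen, best_score = [], -np.inf
        for _ in range(max_inputs):
            scores = {}
            for j in range(X.shape[1]):
                if j in chosen:
                    continue
                model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                     random_state=0)
                scores[j] = cross_val_score(model, X[:, chosen + [j]], y,
                                            cv=3, scoring="r2").mean()
            j_best = max(scores, key=scores.get)
            if scores[j_best] <= best_score:
                break
            chosen.append(j_best)
            best_score = scores[j_best]
        return [names[j] for j in chosen], best_score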
Optimized programming algorithm for cylindrical and directional deep brain stimulation electrodes.
Anderson, Daria Nesterovich; Osting, Braxton; Vorwerk, Johannes; Dorval, Alan D; Butson, Christopher R
2018-04-01
Deep brain stimulation (DBS) is a growing treatment option for movement and psychiatric disorders. As DBS technology moves toward directional leads with increased numbers of smaller electrode contacts, trial-and-error methods of manual DBS programming are becoming too time-consuming for clinical feasibility. We propose an algorithm to automate DBS programming in near real time for a wide range of DBS lead designs. Magnetic resonance imaging and diffusion tensor imaging are used to build finite element models that include anisotropic conductivity. The algorithm maximizes activation of target tissue and utilizes the Hessian matrix of the electric potential to approximate activation of neurons in all directions. We demonstrate our algorithm's ability in an example programming case that targets the subthalamic nucleus (STN) for the treatment of Parkinson's disease for three lead designs: the Medtronic 3389 (four cylindrical contacts), the direct STNAcute (two cylindrical contacts, six directional contacts), and the Medtronic-Sapiens lead (40 directional contacts). The optimization algorithm returns patient-specific contact configurations in near real time (less than 10 s for even the most complex leads). When the lead was placed centrally in the target STN, the directional leads were able to activate over 50% of the region, whereas the Medtronic 3389 could activate only 40%. When the lead was placed 2 mm lateral to the target, the directional leads performed as well as they did in the central position, but the Medtronic 3389 activated only 2.9% of the STN. This DBS programming algorithm can be applied to cylindrical electrodes as well as novel directional leads that are too complex to be programmed manually. This algorithm may reduce clinical programming time and encourage the use of directional leads, since they activate a larger volume of the target area than cylindrical electrodes in central and off-target lead placements.
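A hedged sketch of the Hessian-based activation estimate might look as follows in Python, assuming the electric potential has been sampled on a regular 3D grid; the finite-difference construction and the eigenvalue threshold are illustrative stand-ins for the paper's finite element models and activation criterion.

    import numpy as np

    def activation_map(V, h, thresh):
        # Build the Hessian of the potential V (grid spacing h) by repeated
        # finite differences, then flag voxels whose largest Hessian
        # eigenvalue exceeds a threshold, approximating activation of fibers
        # in all directions. V is a 3D array; thresh is illustrative.
        g = np.gradient(V, h)                      # first derivatives
        H = np.empty(V.shape + (3, 3))
        for i in range(3):
            gi = np.gradient(g[i], h)              # second derivatives
            for j in range(3):
                H[..., i, j] = gi[j]
        lam_max = np.linalg.eigvalsh(H)[..., -1]   # largest eigenvalue per voxel
        return lam_max > thresh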
A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II
NASA Astrophysics Data System (ADS)
Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki
The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because its length must be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized to less than half the size of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower's motion error estimation algorithm based on the internal force applied to each follower by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resultant system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.
Li, Zhao-Liang
2018-01-01
Few studies have examined hyperspectral remote-sensing image classification with type-II fuzzy sets. This paper addresses image classification based on a hyperspectral remote-sensing technique using an improved interval type-II fuzzy c-means (IT2FCM*) approach. In this study, in contrast to other traditional fuzzy c-means-based approaches, the IT2FCM* algorithm considers the ranking of interval numbers and the spectral uncertainty. The classification results based on a hyperspectral dataset using the FCM, IT2FCM, and the proposed improved IT2FCM* algorithms show that the IT2FCM* method achieves the best performance in terms of clustering accuracy. In this paper, in order to validate and demonstrate the separability of the IT2FCM*, four type-I fuzzy validity indexes are employed, and a comparative analysis of these fuzzy validity indexes as applied to the FCM and IT2FCM methods is made. These four indexes are also applied to datasets of different spatial and spectral resolutions to analyze the effects of spectral and spatial scaling factors on the separability of the FCM, IT2FCM, and IT2FCM* methods. The results of these validity indexes from the hyperspectral datasets show that the improved IT2FCM* algorithm has the best values among the three algorithms in general. The results demonstrate that the IT2FCM* exhibits good performance in hyperspectral remote-sensing image classification because of its ability to handle hyperspectral uncertainty. PMID:29373548
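For orientation, the type-I fuzzy c-means baseline that IT2FCM* extends can be written compactly in Python; the interval type-II extension (interval memberships from a pair of fuzzifiers, plus the ranking of interval numbers) is not shown here.

    import numpy as np

    def fcm(X, c, m=2.0, iters=100, tol=1e-6, seed=0):
        # Type-I fuzzy c-means: alternate centroid and membership updates
        # until the membership matrix U stabilizes.
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        p = 2.0 / (m - 1.0)
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U_new = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U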
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
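The idea behind partitioned inversion can be pictured with the Schur complement: inverting a symmetric positive definite matrix block by block so that only submatrices need to be held in core at once. The sketch below is a single-level, in-memory illustration in Python, not the recursive out-of-core SOLVE implementation.

    import numpy as np

    def block_spd_inverse(A, k):
        # Partitioned inverse of an SPD matrix via the Schur complement of the
        # leading k-by-k block; recursing on S would give the full algorithm.
        A11, A12, A22 = A[:k, :k], A[:k, k:], A[k:, k:]
        B11 = np.linalg.inv(A11)              # small-block inverse
        S = A22 - A12.T @ B11 @ A12           # Schur complement
        SI = np.linalg.inv(S)
        U = B11 @ A12
        return np.block([[B11 + U @ SI @ U.T, -U @ SI],
                         [-SI @ U.T, SI]])

    rng = np.random.default_rng(0)
    M = rng.standard_normal((6, 6))
    A = M @ M.T + 6 * np.eye(6)               # SPD test matrix
    assert np.allclose(block_spd_inverse(A, 3), np.linalg.inv(A))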
On distribution reduction and algorithm implementation in inconsistent ordered information systems.
Zhang, Yanqin
2014-01-01
As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm was developed. The approach provides an effective tool for theoretical research on, and applications of, ordered information systems in practice. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
Finite pure integer programming algorithms employing only hyperspherically deduced cuts
NASA Technical Reports Server (NTRS)
Young, R. D.
1971-01-01
Three algorithms are developed that may be based exclusively on hyperspherically deduced cuts. The algorithms only apply, therefore, to problems structured so that these cuts are valid. The algorithms are shown to be finite.
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
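The final stage, minimizing a scalarized objective with DE, can be illustrated with SciPy's differential evolution; the aggregate function below is a made-up stand-in for the physical-programming scalarization of the four logistics objectives, and the weights are illustrative.

    from scipy.optimize import differential_evolution

    # Invented scalarized objective standing in for the physical-programming
    # aggregation of four objectives (cost, risk, mass, delay).
    def aggregate(x):
        cost, risk, mass, delay = x[0]**2, (x[1] - 1)**2, abs(x[2]), (x[3] + 0.5)**2
        return cost + 2.0 * risk + mass + 0.5 * delay

    result = differential_evolution(aggregate, bounds=[(-2, 2)] * 4, seed=1)
    print(result.x, result.fun)   # minimizer near [0, 1, 0, -0.5]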
Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction
Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-01-01
We introduce here a new technique for segmenting the optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure. Using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu’s segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. First, the accuracy of the algorithm was tested on the three image sets independently. The final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. Finally, we compared the algorithm performance with manual markings done by the six ophthalmologists. Agreement was determined among the ophthalmologists as well as with the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images. PMID:28515636
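The vessel-extraction step described above (top-hat transform followed by Otsu thresholding) might be sketched with scikit-image as follows; the use of the black top-hat (vessels darker than background), the disk radius, and the green-channel input are assumptions, not the authors' exact settings.

    from skimage import morphology, filters

    def vessel_mask(green):
        # Black top-hat enhances thin dark structures (vessels) against the
        # brighter fundus background; Otsu's threshold then binarizes the
        # enhanced image. Disk radius 7 is an illustrative choice.
        enhanced = morphology.black_tophat(green, morphology.disk(7))
        return enhanced > filters.threshold_otsu(enhanced)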
ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amarasinghe, Saman
This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
NASA Astrophysics Data System (ADS)
Hayrapetyan, David B.; Hovhannisyan, Levon; Mantashyan, Paytsar A.
2013-04-01
The analysis of complex spectra is a topical problem in modern science. This work is devoted to the creation of a software package that analyzes spectra in different formats, maintains a dynamic knowledge database with a self-learning mechanism, and performs automated analysis of spectral composition based on the knowledge database by applying certain algorithms. In the software package, hyperspherical random search algorithms, gradient algorithms, and genetic search algorithms were used as search methods. The analysis of Raman and IR spectra of diamond-like carbon (DLC) samples was performed with the developed program. After processing the data, the program immediately displays all the calculated parameters of the DLC.
Star adaptation for two algorithms used on serial computers
NASA Technical Reports Server (NTRS)
Howser, L. M.; Lambiotte, J. J., Jr.
1974-01-01
Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
NASA Astrophysics Data System (ADS)
Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.
2017-05-01
In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. The objective is secured by adopting a pricing scheme to develop a power allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-CRNs. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(N log N), which makes the proposed algorithm suitable for more practical applications.
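For context, a classic water-filling power allocation, the baseline that pricing-based schemes modify by adding interference-dependent per-subcarrier costs, can be written in a few lines; this sketch omits the interference constraint and pricing terms of the proposed algorithm.

    import numpy as np

    def waterfill(gains, P, noise=1.0):
        # Classic water-filling by bisection on the water level mu:
        # p_i = max(0, mu - noise/g_i), with total power sum(p_i) = P.
        inv = noise / np.asarray(gains, dtype=float)
        lo, hi = inv.min(), inv.max() + P
        for _ in range(100):
            mu = 0.5 * (lo + hi)
            if np.maximum(mu - inv, 0.0).sum() > P:
                hi = mu
            else:
                lo = mu
        return np.maximum(0.5 * (lo + hi) - inv, 0.0)

    print(waterfill([1.0, 0.5, 0.1], P=2.0))  # stronger channels get more power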
Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.
Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan
2017-08-08
Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general and machine learning techniques in particular have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data of ten patients from a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but still even the best methods produce 5 to 10% of serious errors (category D) and approximately 0.5% of very serious errors (category E). We also propose an enhanced genetic programming algorithm that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.
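One of the listed techniques, random forests over lagged inputs, is easy to sketch; the lag depth, horizon, and feature layout below are illustrative assumptions rather than the study's configuration.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def make_dataset(cgm, carbs, insulin, lags=6, horizon=6):
        # Lagged CGM, carbohydrate, and insulin values predict the glucose
        # value `horizon` samples ahead.
        X, y = [], []
        for t in range(lags, len(cgm) - horizon):
            X.append(np.r_[cgm[t-lags:t], carbs[t-lags:t], insulin[t-lags:t]])
            y.append(cgm[t + horizon])
        return np.array(X), np.array(y)

    # X, y = make_dataset(cgm, carbs, insulin)
    # model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)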
Open-cycle systems performance analysis programming guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olson, D.A.
1981-12-01
The Open-Cycle OTEC Systems Performance Analysis Program is an algorithm programmed on SERI's CDC Cyber 170/720 computer to predict the performance of a Claude-cycle, open-cycle OTEC plant. The algorithm models the Claude-cycle system as consisting of an evaporator, a turbine, a condenser, deaerators, a condenser gas exhaust, a cold water pipe and cold and warm seawater pumps. Each component is a separate subroutine in the main program. A description is given of how to write Fortran subroutines to fit into the main program for the components of the OTEC plant. An explanation is provided of how to use the algorithm. The main program and existing component subroutines are described. Appropriate common blocks and input and output variables are listed. Preprogrammed thermodynamic property functions for steam, fresh water, and seawater are described.
Algorithms and programming tools for image processing on the MPP, part 2
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1986-01-01
A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.
Development of Educational Support System for Algorithm using Flowchart
NASA Astrophysics Data System (ADS)
Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta
Recently, information technology has become indispensable for business and industrial development. However, the shortage of software developers has become a social problem. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers based on flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easy for a programmer to understand the algorithms in programs.
NASA Technical Reports Server (NTRS)
Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.
1990-01-01
For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
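In the D-matrix spirit, a linear model mapping brightness temperatures to wind speed can be fit by least squares; this generic sketch is not the operational algorithm's channel selection or coefficient stratification.

    import numpy as np

    def fit_dmatrix(tb, wind):
        # Least-squares fit of a linear model wind ~ d0 + sum_i d_i * TB_i,
        # with tb of shape (n_samples, n_channels).
        A = np.c_[np.ones(len(tb)), tb]
        d, *_ = np.linalg.lstsq(A, wind, rcond=None)
        return d

    def predict(d, tb):
        return np.c_[np.ones(len(tb)), tb] @ d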
Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai
2009-01-01
Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first; it is then examined how such parallel kernels can be combined to form parallel tensor product algorithms.
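The algebraic fact that lets 1-D kernels be combined into tensor product algorithms is that, with column-major vectorization, vec(B X A^T) = (A kron B) vec(X): applying a 1-D operator along each axis realizes the tensor product operator without ever forming it. A small NumPy check:

    import numpy as np

    A = np.array([[2.0, 1.0], [0.0, 2.0]])
    B = np.array([[1.0, 0.5], [0.5, 1.0]])
    X = np.arange(4.0).reshape(2, 2)
    lhs = B @ X @ A.T                          # two 1-D applications
    rhs = (np.kron(A, B) @ X.reshape(-1, order="F")).reshape(2, 2, order="F")
    assert np.allclose(lhs, rhs)               # same result, no kron needed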
Image Processing Algorithms in the Secondary School Programming Education
ERIC Educational Resources Information Center
Gerják, István
2017-01-01
Learning computer programming for students of the age of 14-18 is difficult and requires endurance and engagement. Being familiar with the syntax of a computer language and writing programs in it are challenges for youngsters, not to mention that understanding algorithms is also a big challenge. To help students in the learning process, teachers…
Improved algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2013-02-01
Theoretical models of electron transport in condensed matter require an effective source of the Chandrasekhar H(x,omega) function. A code providing the H(x,omega) function has to be both accurate and very fast. The current revision of the code published earlier [A. Jablonski, Comput. Phys. Commun. 183 (2012) 1773] decreased the running time, averaged over different pairs of arguments x and omega, by a factor of more than 20. The decrease of the running time in the range of small values of the argument x, less than 0.05, is even more pronounced, reaching a factor of 30. The accuracy of the current code is not affected, and is typically better than 12 decimal places.
New version program summary
Program title: CHANDRAS_v2
Catalogue identifier: AEMC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 976
No. of bytes in distributed program, including test data, etc.: 11416
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any computer with a Fortran 90 compiler
Operating system: Windows 7, Windows XP, Unix/Linux
RAM: 0.7 MB
Classification: 2.4, 7.2
Catalogue identifier of previous version: AEMC_v1_0
Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 1773
Does the new version supersede the old program: Yes
Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter.
Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is constructed by mixing these two algorithms, selecting for each range of the argument x the one with the fastest performance.
Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of the Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible since frequent calls to a subroutine providing this function are made (e.g., numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied in the algorithm previously published [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By suitable selection of the number of abscissas in Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20. Simultaneously, the accuracy of results has not been affected.
Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function, H(x,omega), were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature.
It has been found, however, that the use of Gauss-Legendre quadrature considerably improved the performance of both algorithms. Two conditions have to be satisfied: (i) the number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with accuracy as high as possible. The abscissas and weights are available for N=16, 20, 24, 32, 40, 48, 64, 80, and 96 with accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing procedure ROMBERG. Because the implemented tables are rather extensive, they were recalculated using the Rybicki algorithm (Ref. [5], pp. 183-184) and rechecked. No errors or misprints were found. (2) In the integral representation of the H function derived by Davidović et al. [3], the positive root ν0 of the so-called dispersion function needs to be calculated with accuracy of at least 10 decimal places (see Ref. [6], pp. 61-64 and Ref. [1], Eqs. (5) and (29)). For small values of the argument omega and values of omega close to unity, the nonlinear equation in one unknown, ν0, can be solved analytically. New simple analytical expressions were derived here that can be efficiently used in calculations of the root. (3) The above modifications of the code considerably decreased the time of calculation of both algorithms A and B. The results are summarized in Fig. 1; the time of calculations is the CPU time in microseconds for a computer equipped with an Intel Xeon processor (3.46 GHz) using Lahey-Fujitsu Fortran v. 7.2. [Fig. 1 caption: Time of calculations of the H(x,omega) function averaged over different pairs of arguments x and omega. (a) 400 pairs uniformly distributed in the ranges 0<=x<=0.05 and 0<=omega<=1; (b) 400 pairs uniformly distributed in the ranges 0.05<=x<=1 and 0<=omega<=1.] The shortest execution time averaged over values of the argument x exceeding 0.05 has been observed for algorithm B and Gauss-Legendre quadrature with the number of abscissas equal to 64 (23.2 μs). As compared with Romberg quadrature, the execution time was shortened by a factor of 22.5. For small x values, below 0.05, both algorithms A and B are considerably faster if Gauss-Legendre quadrature is used. For N=64, the average time of execution of algorithm B is decreased with respect to Romberg quadrature by a factor close to 30. However, in that range of argument x, algorithm A exhibits much faster performance. Furthermore, the average execution time of algorithm A, equal to about 100 μs, is practically independent of the number of abscissas N. (4) For Romberg quadrature, to optimize the performance, the mixed algorithm C was proposed in which algorithm A is used for argument x smaller than or equal to x0=0.4, while algorithm B is used for x larger than 0.4 [1]. For Gauss-Legendre quadrature, the limit x0 was found to depend on the number of abscissas N. For each value of N considered, the time of calculations of the H function was determined for pairs of arguments uniformly distributed in the ranges 0<=x<=0.05 and 0<=omega<=1, and for pairs uniformly distributed in the ranges 0.05<=x<=1 and 0<=omega<=1. As shown in Fig. 2 for N=64, algorithm A is faster than algorithm B for x smaller than or equal to 0.0225. [Fig. 2 caption: Comparison of the running times of algorithms A and B. Open circles: algorithm B is faster than algorithm A; full circles: algorithm A is faster than algorithm B.]
Thus, the value of x0=0.0225 is proposed for the mixed algorithm C when Gauss-Legendre quadrature with N=64 is used. Similar computer experiments performed for other values of N are summarized below.

L   N    x0
1   16   0.25
2   20   0.15
3   24   0.10
4   32   0.050
5   40   0.030
6   48   0.045
7   64   0.0225 (recommended)
8   80   0.0125
9   96   0.020

The flag L is one of the input parameters for the subroutine GAUSS. In the programs implementing algorithms A, B, and C (CHANDRA, CHANDRB, and CHANDRC), Gauss-Legendre quadrature with N=64 is currently set. As follows from Fig. 1, algorithm B (and consequently algorithm C) is the fastest in that case. It is still possible to change the number of abscissas; the flag L then has to be modified in lines 165, 169, 185, 189, and 304 of program CHANDRAS_v2, and the value of x0 in line 111 has to be adjusted according to the table above. (5) The above modifications of the code did not affect the accuracy of the calculated Chandrasekhar function, as compared to the original code [1]. For the pairs of arguments shown in Fig. 2, the accuracy of the H function, calculated from algorithms A and B, reached at least 12 decimal digits; in the majority of cases, the accuracy is equal to 13 decimal digits.
Restrictions: Two input parameters for the Chandrasekhar function, x and omega, are restricted to the ranges 0<=x<=1 and 0<=omega<=1, which is sufficient in numerous applications.
Running time: between 15 and 100 μs for one pair of arguments of the Chandrasekhar function.
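For readers who want a quick, low-accuracy stand-in for such a code, the H function for isotropic scattering can be obtained by fixed-point iteration of a standard exact relation on Gauss-Legendre nodes; this sketch is not the published two-representation Fortran algorithm, converges slowly as omega approaches 1, and makes no claim to the 12-13 digit accuracy reported above.

    import numpy as np

    def chandrasekhar_h(x, omega, n=64, iters=200, tol=1e-13):
        # Solve the standard nonlinear relation for isotropic scattering,
        #   1/H(mu) = sqrt(1-omega) + (omega/2) * int_0^1 t*H(t)/(mu+t) dt,
        # by fixed-point iteration on Gauss-Legendre nodes mapped to [0,1].
        t, w = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (t + 1.0)
        w = 0.5 * w
        s = np.sqrt(1.0 - omega)
        H = np.ones(n)
        for _ in range(iters):
            integrals = np.array([np.sum(w * t * H / (mu + t)) for mu in t])
            H_new = 1.0 / (s + 0.5 * omega * integrals)
            if np.max(np.abs(H_new - H)) < tol:
                H = H_new
                break
            H = H_new
        return 1.0 / (s + 0.5 * omega * np.sum(w * t * H / (x + t)))

    print(chandrasekhar_h(0.5, 0.9))   # H(x=0.5, omega=0.9)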
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, Katsunori; Kohyama, Akira; Tanaka, Satoru
This report describes an outline of the activities of the JUPITER-II collaboration (Japan-USA Program of Irradiation/Integration Test for Fusion Research-II), which has been carried out over six years (2001-2006) under Phase 4 of the collaboration implemented by Amendment 4 of Annex 1 to the DOE (United States Department of Energy)-MEXT (Ministry of Education, Culture, Sports, Science and Technology) Cooperation. This program followed the RTNS-II Program (Phase 1: 1982-1986), the FFTF/MOTA Program (Phase 2: 1987-1994) and the JUPITER Program (Phase 3: 1995-2000) [1].
Ant Lion Optimization algorithm for kidney exchanges.
Hamouda, Eslam; El-Metwally, Sara; Tarek, Mayada
2018-01-01
The kidney exchange programs bring new insights in the field of organ transplantation. They make transplants for incompatible patient-donor pairs, previously not allowed, feasible on a large scale. Mathematically, the kidney exchange is an optimization problem for the number of possible exchanges among the incompatible pairs in a given pool. Also, the optimization modeling should consider the expected quality-adjusted life of transplant candidates and the shortage of computational and operational hospital resources. In this article, we introduce a bio-inspired stochastic-based Ant Lion Optimization, ALO, algorithm to the kidney exchange space to maximize the number of feasible cycles and chains among the pool pairs. The Ant Lion Optimizer-based program achieves kidney exchange results comparable to those of deterministic approaches such as integer programming. Also, ALO outperforms other stochastic-based methods such as the Genetic Algorithm in terms of the efficient usage of computational resources and the quantity of resulting exchanges. The Ant Lion Optimization algorithm can be adopted easily for on-line exchanges and the integration of weights for hard-to-match patients, which will improve the future decisions of kidney exchange programs. A reference implementation of the ALO algorithm for kidney exchanges is written in MATLAB and is GPL licensed. It is available as free open-source software from: https://github.com/SaraEl-Metwally/ALO_algorithm_for_Kidney_Exchanges.
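As a toy baseline for the exchange-maximization problem (not the authors' ALO implementation), one can enumerate directed two- and three-way cycles in a small compatibility matrix and keep vertex-disjoint ones greedily:

    from itertools import permutations

    def find_exchanges(compat):
        # compat[i][j] is True when the donor of pair i can give to the
        # patient of pair j. Enumerate 2-cycles and directed 3-cycles, then
        # greedily select vertex-disjoint cycles, longest first. Real programs
        # solve this with integer programming or stochastic search like ALO.
        n = len(compat)
        cycles = [(i, j) for i in range(n) for j in range(i + 1, n)
                  if compat[i][j] and compat[j][i]]
        cycles += [c for c in permutations(range(n), 3)
                   if c[0] == min(c) and compat[c[0]][c[1]]
                   and compat[c[1]][c[2]] and compat[c[2]][c[0]]]
        chosen, used = [], set()
        for c in sorted(cycles, key=len, reverse=True):
            if not used.intersection(c):
                chosen.append(c)
                used.update(c)
        return chosen

    compat = [[False, True, False],
              [True, False, True],
              [False, False, False]]
    print(find_exchanges(compat))   # [(0, 1)] -- one two-way exchange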
Optimization algorithms for large-scale multireservoir hydropower systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiew, K.L.
Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% difference from one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
Algorithms and programming tools for image processing on the MPP:3
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1987-01-01
This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.
Basics of identification measurement technology
NASA Astrophysics Data System (ADS)
Klikushin, Yu N.; Kobenko, V. Yu; Stepanov, P. P.
2018-01-01
None of the available algorithms suitable for pattern recognition gives a 100% guarantee, so scientific activity in this direction continues and such studies remain relevant. It is proposed to develop existing pattern recognition technologies by applying identification measurements. The purpose of the study is to establish the possibility of recognizing images using identification measurement technologies. In solving problems of pattern recognition, neural networks and hidden Markov models are mainly used. A fundamentally new approach to the solution of pattern recognition problems, based on the technology of identification signal measurements (IIS), is proposed. The essence of the IIS technology is the quantitative evaluation of the shape of images using special tools and algorithms.
A Low Cost Matching Motion Estimation Sensor Based on the NIOS II Microprocessor
González, Diego; Botella, Guillermo; Meyer-Baese, Uwe; García, Carlos; Sanz, Concepción; Prieto-Matías, Manuel; Tirado, Francisco
2012-01-01
This work presents the implementation of a matching-based motion estimation sensor on a Field Programmable Gate Array (FPGA) and NIOS II microprocessor applying a C to Hardware (C2H) acceleration paradigm. The design, which involves several matching algorithms, is mapped using Very Large Scale Integration (VLSI) technology. These algorithms, as well as the hardware implementation, are presented here together with an extensive analysis of the resources needed and the throughput obtained. The developed low-cost system is practical for real-time throughput and reduced power consumption and is useful in robotic applications, such as tracking, navigation using an unmanned vehicle, or as part of a more complex system. PMID:23201989
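The matching algorithms referred to above are variants of block matching; a plain exhaustive-search sum-of-absolute-differences version, the kind of kernel such sensors accelerate in hardware, looks like this in Python (block size and search range are illustrative):

    import numpy as np

    def block_match(prev, curr, bs=8, sr=4):
        # For each bs-by-bs block in `curr`, search a +/-sr window in `prev`
        # for the displacement minimizing the sum of absolute differences.
        H, W = curr.shape
        mv = np.zeros((H // bs, W // bs, 2), dtype=int)
        for by in range(H // bs):
            for bx in range(W // bs):
                y, x = by * bs, bx * bs
                block = curr[y:y + bs, x:x + bs].astype(int)
                best, arg = None, (0, 0)
                for dy in range(-sr, sr + 1):
                    for dx in range(-sr, sr + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= H - bs and 0 <= xx <= W - bs:
                            sad = np.abs(prev[yy:yy + bs, xx:xx + bs].astype(int)
                                         - block).sum()
                            if best is None or sad < best:
                                best, arg = sad, (dy, dx)
                mv[by, bx] = arg
        return mv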
Introducing difference recurrence relations for faster semi-global alignment of long sequences.
Suzuki, Hajime; Kasahara, Masahiro
2018-02-19
The read length of single-molecule DNA sequencers is reaching 1 Mb. Popular alignment software tools widely used for analyzing such long reads often take advantage of single-instruction multiple-data (SIMD) operations to accelerate calculation of dynamic programming (DP) matrices in the Smith-Waterman-Gotoh (SWG) algorithm with a fixed alignment start position at the origin. Nonetheless, 16-bit or 32-bit integers are necessary for storing the values in a DP matrix when sequences to be aligned are long; this situation hampers the use of the full SIMD width of modern processors. We proposed a faster semi-global alignment algorithm, "difference recurrence relations," that runs more rapidly than the state-of-the-art algorithm by a factor of 2.1. Instead of calculating and storing all the values in a DP matrix directly, our algorithm computes and stores mainly the differences between the values of adjacent cells in the matrix. Although the SWG algorithm and our algorithm can output exactly the same result, our algorithm mainly involves 8-bit integer operations, enabling us to exploit the full width of SIMD operations (e.g., 32) on modern processors. We also developed a library, libgaba, so that developers can easily integrate our algorithm into alignment programs. Our novel algorithm and optimized library implementation will facilitate accelerating nucleotide long-read analysis algorithms that use pairwise alignment stages. The library is implemented in the C programming language and available at https://github.com/ocxtal/libgaba .
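For reference, the baseline recurrences that the difference encoding accelerates are the textbook Gotoh affine-gap DP; a direct (unvectorized, full-width) Python version is sketched below, with global rather than semi-global boundary conditions for brevity.

    def gotoh(a, b, match=1, mismatch=-1, gap_open=-2, gap_extend=-1):
        # Textbook Gotoh affine-gap DP (gap_open is the cost of a length-1
        # gap; each extra position adds gap_extend). M: ends in a
        # substitution; E: gap in sequence a; F: gap in sequence b.
        NEG = float("-inf")
        n, m = len(a), len(b)
        M = [[NEG] * (m + 1) for _ in range(n + 1)]
        E = [[NEG] * (m + 1) for _ in range(n + 1)]
        F = [[NEG] * (m + 1) for _ in range(n + 1)]
        M[0][0] = 0
        for j in range(1, m + 1):
            E[0][j] = gap_open + (j - 1) * gap_extend
        for i in range(1, n + 1):
            F[i][0] = gap_open + (i - 1) * gap_extend
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                M[i][j] = max(M[i-1][j-1], E[i-1][j-1], F[i-1][j-1]) + s
                E[i][j] = max(E[i][j-1] + gap_extend, M[i][j-1] + gap_open,
                              F[i][j-1] + gap_open)
                F[i][j] = max(F[i-1][j] + gap_extend, M[i-1][j] + gap_open,
                              E[i-1][j] + gap_open)
        return max(M[n][m], E[n][m], F[n][m])

    print(gotoh("ACGT", "ACT"))   # 1: three matches and one length-1 gap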
Sizing of complex structure by the integration of several different optimal design algorithms
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1974-01-01
Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
KB3D Reference Manual. Version 1.a
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Siminiceanu, Radu; Carreno, Victor A.; Dowek, Gilles
2005-01-01
This paper is a reference manual describing the implementation of the KB3D conflict detection and resolution algorithm. The algorithm has been implemented in the Java and C++ programming languages. The reference manual gives a short overview of the detection and resolution functions, the structural implementation of the program, inputs and outputs to the program, and describes how the program is used. Inputs to the program can be rectangular coordinates or geodesic coordinates. The reference manual also gives examples of conflict scenarios and the resolution outputs the program produces.
40 CFR 147.650 - State-administrative program-Class I, II, III, IV, and V wells.
Code of Federal Regulations, 2010 CFR
2010-07-01
... CONTROL PROGRAMS Idaho § 147.650 State-administrative program—Class I, II, III, IV, and V wells. The UIC program for Class I, II, III, IV, and V wells in the State of Idaho, other than those on Indian lands, is the program administered by the Idaho Department of Water Resources, approved by EPA pursuant to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frenkel, G.; Paterson, T.S.; Smith, M.E.
The Institute for Defense Analyses (IDA) has collected and analyzed information on battle management algorithm technology that is relevant to Battle Management/Command, Control and Communications (BM/C3). This Memorandum Report represents a program plan that will provide the BM/C3 Directorate of the Strategic Defense Initiative Organization (SDIO) with administrative and technical insight into algorithm technology. This program plan focuses on current activity in algorithm development and provides information and analysis to the SDIO to be used in formulating budget requirements for FY 1988 and beyond. Based upon analysis of algorithm requirements and ongoing programs, recommendations have been made for research areas that should be pursued, including both the continuation of current work and the initiation of new tasks. This final report includes all relevant material from interim reports as well as new results.
On the performance of explicit and implicit algorithms for transient thermal analysis
NASA Astrophysics Data System (ADS)
Adelman, H. M.; Haftka, R. T.
1980-09-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped-parameter program and a special purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
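The stability trade-off driving that preference is easy to demonstrate on a stiff scalar model problem: forward (explicit) Euler is stable only for dt < 2/k, while backward (implicit) Euler is unconditionally stable. A minimal sketch:

    # Stiff test problem dT/dt = -k (T - T_env) with large k; parameter
    # values are illustrative, chosen so that dt exceeds the explicit limit.
    k, T_env, T0, dt, n = 1000.0, 300.0, 400.0, 0.005, 10

    T_exp, T_imp = T0, T0
    for _ in range(n):
        T_exp = T_exp + dt * (-k) * (T_exp - T_env)        # explicit update
        T_imp = (T_imp + dt * k * T_env) / (1.0 + dt * k)  # implicit update
    print(T_exp, T_imp)  # explicit blows up (dt > 2/k); implicit decays to ~300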
Optimization methods for decision making in disease prevention and epidemic control.
Deng, Yan; Shen, Siqian; Vorobeychik, Yevgeniy
2013-11-01
This paper investigates problems of disease prevention and epidemic control (DPEC), in which we optimize two sets of decisions: (i) vaccinating individuals and (ii) closing locations, given respective budgets, with the goal of minimizing the expected number of infected individuals after intervention. The spread of diseases is inherently stochastic due to the uncertainty about disease transmission and human interaction. We use a bipartite graph to represent individuals' propensities to visit a set of locations, and formulate two integer nonlinear programming models to optimize choices of individuals to vaccinate and locations to close. Our first model assumes that if a location is closed, its visitors stay in a safe location and will not visit other locations. Our second model incorporates compensatory behavior by assuming multiple behavioral groups, always visiting the most preferred locations that remain open. The paper develops algorithms based on a greedy strategy, dynamic programming, and integer programming, and compares the computational efficacy and solution quality. We test problem instances derived from daily behavior patterns of 100 randomly chosen individuals (corresponding to 195 locations) in Portland, Oregon, and provide policy insights regarding the use of the two DPEC models. Copyright © 2013 Elsevier Inc. All rights reserved.
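Of the algorithm families mentioned, the greedy strategy is the simplest to sketch: repeatedly vaccinate whoever most reduces a Monte Carlo estimate of expected infections. The epidemic model below is a deliberately fake stub; only the greedy loop is the point.

    import random

    def greedy_vaccinate(candidates, budget, simulate, trials=200, seed=0):
        # Greedy marginal-gain heuristic: at each step, vaccinate the
        # individual whose addition most lowers the estimated expected number
        # of infections. `simulate(vaccinated, rng)` is user-supplied.
        rng = random.Random(seed)
        def expected(v):
            return sum(simulate(v, rng) for _ in range(trials)) / trials
        chosen = set()
        for _ in range(budget):
            best = min((c for c in candidates if c not in chosen),
                       key=lambda c: expected(chosen | {c}))
            chosen.add(best)
        return chosen

    # Toy stub: individual 2 is a super-spreader in this fake model.
    def simulate(vaccinated, rng):
        base = {0: 1.0, 1: 1.5, 2: 5.0, 3: 0.5}
        return sum(w for i, w in base.items() if i not in vaccinated) + rng.random()

    print(greedy_vaccinate([0, 1, 2, 3], budget=2, simulate=simulate))  # {1, 2}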
Case-mix groups for VA hospital-based home care.
Smith, M E; Baker, C R; Branch, L G; Walls, R C; Grimes, R M; Karklins, J M; Kashner, M; Burrage, R; Parks, A; Rogers, P
1992-01-01
The purpose of this study is to group hospital-based home care (HBHC) patients homogeneously by their characteristics with respect to cost of care to develop alternative case mix methods for management and reimbursement (allocation) purposes. Six Veterans Affairs (VA) HBHC programs in Fiscal Year (FY) 1986 that maximized patient, program, and regional variation were selected, all of which agreed to participate. All HBHC patients active in each program on October 1, 1987, in addition to all new admissions through September 30, 1988 (FY88), comprised the sample of 874 unique patients. Statistical methods include the use of classification and regression trees (CART software: Statistical Software; Lafayette, CA), analysis of variance, and multiple linear regression techniques. The resulting algorithm is a three-factor model that explains 20% of the cost variance (R2 = 20%, with a cross validation R2 of 12%). Similar classifications such as the RUG-II, which is utilized for VA nursing home and intermediate care, the VA outpatient resource allocation model, and the RUG-HHC, utilized in some states for reimbursing home health care in the private sector, explained less of the cost variance and, therefore, are less adequate for VA home care resource allocation.
Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Leyland, Jane
2014-01-01
In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.
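NLPQLP itself is proprietary and its internals are not shown here; to make the SQP setup concrete, below is a minimal sketch using SciPy's SLSQP (also an SQP method) on an invented quadratic stand-in for the hub-load metric, with simple bounds and a linear inequality playing the role of the flap-deflection constraints.

```python
import numpy as np
from scipy.optimize import minimize

def hub_load_metric(x):
    # invented quadratic stand-in for a metric of selected hub-load components
    H = np.array([[4.0, 1.0], [1.0, 3.0]])
    g = np.array([-2.0, -5.0])
    return 0.5 * x @ H @ x + g @ x

# one linear inequality plus box bounds, standing in for flap-deflection limits
cons = [{'type': 'ineq', 'fun': lambda x: 1.0 - (x[0] + x[1])}]  # x0 + x1 <= 1
res = minimize(hub_load_metric, x0=np.zeros(2), method='SLSQP',
               bounds=[(-2.0, 2.0)] * 2, constraints=cons)
print(res.x, res.fun)   # constrained minimizer and metric value
```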
ERIC Educational Resources Information Center
Haberman, Shelby J.
2013-01-01
A general program for item-response analysis is described that uses the stabilized Newton-Raphson algorithm. This program is written to be compliant with Fortran 2003 standards and is sufficiently general to handle independent variables, multidimensional ability parameters, and matrix sampling. The ability variables may be either polytomous or…
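For readers unfamiliar with the stabilization idea, the sketch below shows one common form, Newton-Raphson with step-halving so an iteration cannot decrease the log-likelihood; Haberman's program implements its own stabilized variant, so treat this as an illustration of the concept rather than his algorithm.

```python
import numpy as np

def stabilized_newton(loglik, grad, hess, theta, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        step = np.linalg.solve(hess(theta), -grad(theta))  # Newton direction
        t, ll0 = 1.0, loglik(theta)
        while loglik(theta + t * step) < ll0 and t > 1e-10:
            t *= 0.5                       # halve until the likelihood improves
        theta = theta + t * step
        if np.linalg.norm(t * step) < tol:
            return theta
    return theta

# toy usage: MLE of a normal mean (log-likelihood up to constants)
data = np.array([1.0, 2.0, 3.0])
ll = lambda th: -0.5 * np.sum((data - th[0]) ** 2)
gr = lambda th: np.array([np.sum(data - th[0])])
he = lambda th: np.array([[-float(len(data))]])
print(stabilized_newton(ll, gr, he, np.array([0.0])))   # -> [2.0], the sample mean
```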
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
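A toy version of step (iii), my own illustration rather than the authors' code, shows how estimated depths turn overlap into an occlusion decision: two tracked boxes that overlap in the image while sitting at clearly different depths are flagged, with the nearer object taken as the occluder.

```python
def boxes_overlap(a, b):
    # boxes as (x1, y1, x2, y2)
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def occluded_pairs(objects, depth_gap=0.5):
    """objects: list of (id, box, depth); returns (occluder, occluded) pairs."""
    pairs = []
    for i, (ida, boxa, da) in enumerate(objects):
        for idb, boxb, db in objects[i + 1:]:
            if boxes_overlap(boxa, boxb) and abs(da - db) > depth_gap:
                near, far = (ida, idb) if da < db else (idb, ida)
                pairs.append((near, far))
    return pairs

objs = [(1, (0, 0, 4, 4), 2.0), (2, (2, 2, 6, 6), 4.0), (3, (10, 10, 12, 12), 3.0)]
print(occluded_pairs(objs))   # [(1, 2)]: object 1 occludes object 2
```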
Moghram, Basem Ameen; Nabil, Emad; Badr, Amr
2018-01-01
T-cell epitope structure identification is a significant and challenging immunoinformatic problem within epitope-based vaccine design. Epitopes, or antigenic peptides, are sets of amino acids that bind to Major Histocompatibility Complex (MHC) molecules; they are presented by Antigen-Presenting Cells to be inspected by T-cells. MHC-molecule-binding epitopes are responsible for triggering the immune response to antigens. The epitope's three-dimensional (3D) molecular structure (i.e., tertiary structure) reflects its proper function. Therefore, the identification of MHC class-II epitope structure is a significant step towards epitope-based vaccine design and understanding of the immune system. In this paper, we propose a new technique using a Genetic Algorithm for Predicting the Epitope Structure (GAPES) to predict the structure of MHC class-II epitopes based on their sequence. The proposed elitist-based genetic algorithm for predicting the epitope's tertiary structure is based on the Ab-Initio Empirical Conformational Energy Program for Peptides (ECEPP) force field model. The accompanying secondary structure prediction technique relies on the Ramachandran plot. We used two alignment algorithms, ROSS alignment and TM-Score alignment, and applied four different alignment approaches to calculate the similarity scores of the dataset under test. We utilized a support vector machine (SVM) classifier to evaluate the prediction performance. The prediction accuracy and the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) were calculated as measures of performance. The calculations were performed on twelve similarity-reduced datasets of the Immune Epitope Database (IEDB) and a large dataset of peptide-binding affinities to HLA-DRB1*0101. The results showed that GAPES was reliable and very accurate: we achieved an average prediction accuracy of 93.50% and an average AUC of 0.974 on the IEDB datasets, and an accuracy of 95.125% and an AUC of 0.987 on the HLA-DRB1*0101 allele of the Wang benchmark dataset. The results indicate that the proposed prediction technique GAPES is promising and will help researchers and scientists predict protein structure and assist them in the intelligent design of new epitope-based vaccines. Copyright © 2017 Elsevier B.V. All rights reserved.
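To make the elitist GA component concrete, here is a minimal sketch over backbone dihedral angles; the energy() below is a toy stand-in for the ECEPP force-field evaluation, and the operators (one-point crossover, Gaussian mutation, elitism) are generic choices rather than GAPES's exact ones.

```python
import random

def energy(angles):                     # placeholder for an ECEPP-style score
    return sum((a - 60.0) ** 2 for a in angles)

def elitist_ga(n_angles=10, pop=40, gens=100, elite=4, sigma=15.0):
    rng = random.Random(1)
    P = [[rng.uniform(-180, 180) for _ in range(n_angles)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=energy)
        nxt = P[:elite]                              # elitism: keep the best
        while len(nxt) < pop:
            a, b = rng.sample(P[:pop // 2], 2)       # parents from better half
            cut = rng.randrange(1, n_angles)
            child = a[:cut] + b[cut:]                # one-point crossover
            i = rng.randrange(n_angles)              # Gaussian mutation
            child[i] = max(-180, min(180, child[i] + rng.gauss(0, sigma)))
            nxt.append(child)
        P = nxt
    return min(P, key=energy)

print(energy(elitist_ga()))   # best toy "conformation" energy found
```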
CRITIC2: A program for real-space analysis of quantum chemical interactions in solids
NASA Astrophysics Data System (ADS)
Otero-de-la-Roza, A.; Johnson, Erin R.; Luaña, Víctor
2014-03-01
We present CRITIC2, a program for the analysis of quantum-mechanical atomic and molecular interactions in periodic solids. This code, a greatly improved version of the previous CRITIC program (Otero-de-la-Roza et al., 2009), can: (i) find critical points of the electron density and related scalar fields such as the electron localization function (ELF), Laplacian, …; (ii) integrate atomic properties in the framework of Bader's Atoms-in-Molecules theory (QTAIM); (iii) visualize non-covalent interactions in crystals using the non-covalent interactions (NCI) index; (iv) generate relevant graphical representations including lines, planes, gradient paths, contour plots, atomic basins, …; and (v) perform transformations between file formats describing scalar fields and crystal structures. CRITIC2 can interface with the output produced by a variety of electronic structure programs, including WIEN2k, elk, PI, abinit, Quantum ESPRESSO, VASP, Gaussian, and, in general, any other code capable of writing the scalar field under study to a three-dimensional grid. CRITIC2 is parallelized, completely documented (including illustrative test cases) and publicly available under the GNU General Public License.
Program summary:
Catalogue identifier: AECB_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECB_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: yes
No. of lines in distributed program, including test data, etc.: 11686949
No. of bytes in distributed program, including test data, etc.: 337020731
Distribution format: tar.gz
Programming language: Fortran 77 and 90
Computer: Workstations
Operating system: Unix, GNU/Linux
Has the code been vectorized or parallelized?: Shared-memory parallelization can be used for most tasks
Classification: 7.3
Catalogue identifier of previous version: AECB_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 157
Nature of problem: Analysis of quantum-chemical interactions in periodic solids by means of atoms-in-molecules and related formalisms.
Solution method: Critical point search using Newton's algorithm; atomic basin integration using bisection, qtree and grid-based algorithms; diverse graphical representations; and computation of the non-covalent interactions index on a three-dimensional grid.
Additional comments: The distribution file for this program is over 330 MB and is therefore not delivered directly when a download or Email is requested; instead, an HTML file giving details of how the program can be obtained is sent.
Running time: Variable, depending on the crystal and the source of the underlying scalar field.
Robust Constrained Blackbox Optimization with Surrogates
2015-05-21
[Abstract not extracted; only reference fragments survive:] …Orban, "Optimization of Algorithms with OPAL," Mathematical Programming Computation, 6(3):233–254, September 2014. M.S. Ouali, H. Aoudjit, and C. Audet, "Replacement scheduling of a fleet of…
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
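A discrete caricature of the local-update idea (toy dynamics and costs, not the paper's continuous-state formulation): each sweep updates the value function and the greedy control on only a subset of states, here alternating between the even and odd halves of a small grid.

```python
import numpy as np

N, actions, gamma = 20, (-1, 0, 1), 0.95
cost = lambda s, a: (s - N // 2) ** 2 * 0.01 + abs(a) * 0.1   # toy stage cost
step = lambda s, a: min(N - 1, max(0, s + a))                  # toy dynamics

V = np.zeros(N)
for sweep in range(200):
    subset = range(sweep % 2, N, 2)     # update only half the states per sweep
    for s in subset:
        V[s] = min(cost(s, a) + gamma * V[step(s, a)] for a in actions)
policy = [min(actions, key=lambda a: cost(s, a) + gamma * V[step(s, a)])
          for s in range(N)]
print(policy)   # greedy control law after the local sweeps
```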
VAXELN Experimentation: Programming a Real-Time Periodic Task Dispatcher Using VAXELN Ada 1.1
1987-11-01
synchronization to the SQM and VAXELN semaphores. Based on real-time scheduling theory, the optimal rate-monotonic scheduling algorithm [Liu 73]… a schedulability test based on the rate-monotonic algorithm, namely task lumping [Sha 87], was necessary to calculate the theoretically expected schedulability… Guide, Digital Equipment Corporation, Maynard, MA, 1986. [Liu 73] Liu, C.L., Layland, J.W., "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time
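The [Liu 73] result that the fragment cites gives the classic utilization-bound test for rate-monotonic scheduling; a minimal version is sketched below (the task-lumping refinement attributed to [Sha 87] is not shown).

```python
def rm_schedulable(tasks):
    """tasks: list of (compute_time, period); Liu & Layland utilization test."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)          # total processor utilization
    bound = n * (2 ** (1.0 / n) - 1)          # sufficient bound for n tasks
    return u, bound, u <= bound

print(rm_schedulable([(1, 4), (1, 5), (2, 10)]))  # U = 0.65 <= 0.7798 -> schedulable
```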
Operational Planning for Multiple Heterogeneous Unmanned Aerial Vehicles in Three Dimensions
2009-06-01
human input in the planning process. Two solution methods are presented: (1) a mixed-integer program, and (2) an algorithm that utilizes a metaheuristic to generate composite variables for a linear program, called the Composite Operations Planning… that represent a path and an associated type of UAV. The reformulation is incorporated into an algorithm that uses a metaheuristic to generate the
Oligo Design: a computer program for development of probes for oligonucleotide microarrays.
Herold, Keith E; Rasooly, Avraham
2003-12-01
Oligonucleotide microarrays have demonstrated potential for the analysis of gene expression, genotyping, and mutational analysis. Our work focuses primarily on the detection and identification of bacteria based on known short sequences of DNA. Oligo Design, the software described here, automates several design aspects that enable the improved selection of oligonucleotides for use with microarrays for these applications. Two major features of the program are: (i) a tiling algorithm for the design of short overlapping temperature-matched oligonucleotides of variable length, which are useful for the analysis of single nucleotide polymorphisms and (ii) a set of tools for the analysis of multiple alignments of gene families and related short DNA sequences, which allow for the identification of conserved DNA sequences for PCR primer selection and variable DNA sequences for the selection of unique probes for identification. Note that the program does not address the full genome perspective but, instead, is focused on the genetic analysis of short segments of DNA. The program is Internet-enabled and includes a built-in browser and the automated ability to download sequences from GenBank by specifying the GI number. The program also includes several utilities, including audio recital of a DNA sequence (useful for verifying sequences against a written document), a random sequence generator that provides insight into the relationship between melting temperature and GC content, and a PCR calculator.
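A rough sketch of the tiling idea (mine, with the Wallace 2+4 rule as a crude stand-in for the program's melting-temperature model): walk along the sequence and grow each probe until its Tm estimate reaches the target, so consecutive overlapping oligos come out temperature-matched with variable length.

```python
def wallace_tm(oligo):
    # Wallace rule: Tm ~ 2*(A+T) + 4*(G+C), a crude short-oligo estimate
    at = sum(oligo.count(b) for b in "AT")
    gc = sum(oligo.count(b) for b in "GC")
    return 2 * at + 4 * gc

def tile(seq, tm_target=60, overlap=5):
    oligos, start = [], 0
    while start < len(seq) - overlap:
        end = start + 10                      # minimum probe length
        while end < len(seq) and wallace_tm(seq[start:end]) < tm_target:
            end += 1                          # grow probe until Tm-matched
        oligos.append(seq[start:end])
        start = end - overlap                 # overlapping tiles
    return oligos

print(tile("ATGGCGTACGTTAGCGGATCCATGCAGTTCGACGGTAACGTTAGC"))
```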
AutoBayes Program Synthesis System System Internals
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin
2011-01-01
This lecture combines the theoretical background of schema-based program synthesis with the hands-on study of a powerful, open-source program synthesis system (AutoBayes). Schema-based program synthesis is a popular approach toward program synthesis. The lecture provides an introduction to this topic and discusses how this technology can be used to generate customized algorithms. The synthesis of advanced numerical algorithms requires the availability of a powerful symbolic (algebra) system. Its task is to symbolically solve equations, simplify expressions, or symbolically calculate derivatives (among others) so that the synthesized algorithms become as efficient as possible. We discuss the use and importance of the symbolic system for synthesis. Any synthesis system is a large and complex piece of code. In this lecture, we study AutoBayes in detail. AutoBayes has been developed at NASA Ames and has been made open source. It takes a compact statistical specification and generates a customized data analysis algorithm (in C/C++) from it. AutoBayes is written in SWI Prolog and uses many concepts from rewriting, logic, functional, and symbolic programming. We discuss the system architecture, the schema library, and the extensive support infrastructure. Practical hands-on experiments and exercises enable the student to gain insight into a realistic program synthesis system and provide the knowledge needed to use, modify, and extend AutoBayes.
Administrative Plans. STIP II (Skill Training Improvement Programs Round II).
ERIC Educational Resources Information Center
Los Angeles Community Coll. District, CA.
Personnel policies, job responsibilities, and accounting procedures are summarized for the Los Angeles Community College District's Skill Training Improvement Programs (STIP II). This report first cites references to the established personnel and affirmative action procedures governing the program and then presents an organizational chart for the…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-01
... DEPARTMENT OF EDUCATION Implementation of Title I/II Program Initiatives; Extension of Public Comment Period; Correction AGENCY: Department of Education. ACTION: Correction notice. SUMMARY: On October... Title I/II Program Initiatives,'' Docket ID ED- 2013-ICCD-0090. The comment period for this information...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-11
...; Comment Request; Implementation of Title I/II Program Initiatives AGENCY: Institute of Educational... note that written comments received in response to this notice will be considered public records. Title of Collection: Implementation of Title I/II Program Initiatives. OMB Control Number: 1850-New. Type...
The Design and Implementation of a Read Prediction Buffer
1992-12-01
[Only report form fields and table-of-contents fragments were extracted here; the recoverable headings are "Read Prediction Algorithm and Buffer Design" and "The Read Prediction Algorithm," with figure titles "Basic Multiplexer Cell" and "Block Diagram Simulation Labels."]
An algorithm for the solution of dynamic linear programs
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1989-01-01
The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense-matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor-update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation scheme.
Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.
2011-01-01
Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.
Array distribution in data-parallel programs
NASA Technical Reports Server (NTRS)
Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.
1994-01-01
We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.
Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul
2016-09-09
Beyond use as high-density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase the programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average error using 100 ns pulses, and we studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array, showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single-pulse) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high-speed, accurate, and repeatable programming of the cells, such as neural networks and analog data processing.
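The flavor of the adaptive, gate-controlled programming loop can be sketched in a few lines; everything below (the device response model, voltages, tolerances) is invented for illustration and is not the authors' algorithm or data.

```python
import random

def set_pulse(g, v_gate):
    # toy device model: a higher gate voltage yields a bigger conductance step
    return g + 0.08 * v_gate * random.uniform(0.5, 1.5)

def program_level(g_target, tol=0.005, v_gate=0.6, v_step=0.05, max_pulses=50):
    """Program-and-verify: pulse, measure, adapt the gate drive toward target."""
    g, pulses = 0.0, 0
    while abs(g - g_target) > tol * g_target and pulses < max_pulses:
        g = set_pulse(g, v_gate)
        pulses += 1
        if g < g_target:
            v_gate += v_step      # still below target: raise gate drive
        else:
            break                 # in a real array a RESET pulse would back off
    return g, pulses

print(program_level(1.0))         # reached conductance and pulse count
```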
ERIC Educational Resources Information Center
Office of Education (DHEW), Washington, DC.
Narrative reports submitted by individual State Departments of Education relating to the operation of their respective Title II Elementary and Secondary Education Act (ESEA) programs are synthesized in this document. Information is provided about six aspects of the programs: 1) state management of ESEA Title II programs; 2) program development; 3)…
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto graphics processing units (GPUs) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm introduces substantial branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on GPUs. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
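The core trick is easiest to see on the per-step axis selection. Below is a plain-Python rendering (illustrative; the paper's implementation is on the GPU): the usual if/else cascade that picks the axis with the smallest t_max is replaced by comparison masks, so every thread would execute the same instruction sequence.

```python
def dda_step_branchless(t_max, t_delta, voxel, step):
    # m[k] == 1 exactly for the axis with the smallest t_max (ties by axis order)
    mx = int(t_max[0] <= t_max[1] and t_max[0] <= t_max[2])
    my = int(not mx and t_max[1] <= t_max[2])
    mz = int(not mx and not my)
    m = (mx, my, mz)
    # advance voxel index and t_max on the selected axis, arithmetic only
    voxel = tuple(v + m[k] * step[k] for k, v in enumerate(voxel))
    t_max = tuple(t + m[k] * t_delta[k] for k, t in enumerate(t_max))
    return t_max, voxel

t_max, voxel = dda_step_branchless((0.3, 0.7, 0.5), (1.0, 1.0, 1.0),
                                   (0, 0, 0), (1, 1, -1))
print(t_max, voxel)   # the x axis crosses first: (1.3, 0.7, 0.5) (1, 0, 0)
```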
Solving Fractional Programming Problems based on Swarm Intelligence
NASA Astrophysics Data System (ADS)
Raouf, Osama Abdel; Hezam, Ibrahim M.
2014-04-01
This paper presents a new approach to solving Fractional Programming Problems (FPPs) based on two different Swarm Intelligence (SI) algorithms: Particle Swarm Optimization and the Firefly Algorithm. The two algorithms are tested using several FPP benchmark examples and two selected industrial applications. The tests aim to demonstrate the capability of SI algorithms to solve any type of FPP. The solution results employing the SI algorithms are compared with a number of exact and metaheuristic solution methods used for handling FPPs. Swarm Intelligence can be regarded as an effective technique for solving linear or nonlinear, non-differentiable fractional objective functions. Problems with an optimal solution at a finite point and an unbounded constraint set can be solved using the proposed approach. Numerical examples are given to show the feasibility, effectiveness, and robustness of the proposed algorithms. The results obtained using the two SI algorithms show that the proposed technique is superior to the others in computational time, and notably better accuracy was observed in the solution results for the industrial application problems.
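As an illustration of the approach (not the authors' code or benchmark set), the sketch below runs a bare-bones particle swarm on a small linear-fractional objective over a box; the same loop applies to the nonlinear, non-differentiable fractional objectives the paper targets, since only function values are used.

```python
import random

def f(p):
    x, y = p
    return (x + 2 * y + 2) / (4 * x + y + 5)   # toy fractional objective (maximize)

rng = random.Random(0)
lo, hi, n, iters = 0.0, 5.0, 30, 200
pos = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = max(pbest, key=f)
for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (0.7 * vel[i][d]                      # inertia
                         + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))  # keep in box
        if f(pos[i]) > f(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = max(pbest, key=f)
print(gbest, f(gbest))   # should approach the corner (0, 5)
```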
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification effort are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided as graphical plots. Additional effort was expended to examine the use of the estimation approach to evaluate single-engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). This program, LFILTER, also uses the U-D factorized form of the Kalman filter, as in the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used in this application for dynamics and measurements, a program description, and program operation are presented.
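For context, the standard Kalman measurement update is sketched below; PFILTER and LFILTER use Bierman's U-D factorized form, which carries P = U D U^T instead of P itself for numerical stability but produces the same estimate in exact arithmetic. The matrices here are illustrative only.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # state estimate update
    P = (np.eye(len(x)) - K @ H) @ P    # covariance update
    return x, P

x, P = np.zeros(2), np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.1]])
print(kf_update(x, P, np.array([0.9]), H, R))
```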
Electrons and photons at High Level Trigger in CMS for Run II
NASA Astrophysics Data System (ADS)
Anuar, Afiq A.
2015-12-01
The CMS experiment has been designed with a two-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. New approaches have been studied to keep the HLT output rate manageable while maintaining thresholds low enough to cover the physics analyses. The strategy relies mainly on porting online the ingredients that have been successfully applied in offline reconstruction, allowing the HLT selection to move closer to the offline cuts. Improvements in the HLT electron and photon definitions will be presented, focusing in particular on the updated clustering algorithm and energy calibration procedure, the new Particle-Flow-based isolation approach and pileup mitigation techniques, and the electron-dedicated track fitting algorithm based on a Gaussian Sum Filter.
A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction
Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...
1995-01-01
In this article, we present a program generation strategy for Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier Transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared-memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared-memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
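For reference, the block recursion that the tensor-product formulation encodes is the standard Strassen scheme below (a naive recursive rendering; the article's point is precisely to replace such recursion with an iterative tensor-product form whose working storage drops from O(7^n) to O(4^n)).

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication for 2^n x 2^n matrices (seven recursive products)."""
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    a11, a12, a21, a22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    b11, b12, b21, b22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    p1 = strassen(a11 + a22, b11 + b22)
    p2 = strassen(a21 + a22, b11)
    p3 = strassen(a11, b12 - b22)
    p4 = strassen(a22, b21 - b11)
    p5 = strassen(a11 + a12, b22)
    p6 = strassen(a21 - a11, b11 + b12)
    p7 = strassen(a12 - a22, b21 + b22)
    top = np.hstack([p1 + p4 - p5 + p7, p3 + p5])
    bot = np.hstack([p2 + p4, p1 - p2 + p3 + p6])
    return np.vstack([top, bot])

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
print(np.allclose(strassen(A, B), A @ B))   # True
```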
NASA Technical Reports Server (NTRS)
Weeks, Cindy Lou
1986-01-01
Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.
ERIC Educational Resources Information Center
Végh, Ladislav
2016-01-01
The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…
Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980
NASA Astrophysics Data System (ADS)
Barbe, D. F.
1980-01-01
Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.
On the improvement of blood sample collection at clinical laboratories
2014-01-01
Background Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. Methods A heuristic method based on a genetic algorithm has been designed to solve the problem of blood sample collection. The user enters the following information for each collection point: postal address, average collecting time, and average demand (in thermal containers). The algorithm, implemented in C, runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters change substantially, routes need to be designed only once. Results The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. Conclusions The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem. PMID:24406140
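The skeleton below (coordinates and parameters invented) shows the kind of GA the method builds on, reduced to ordering a single vehicle's collection points by travel cost; the actual system additionally enforces the two-hour time windows and container-capacity limits and partitions the points across several vehicles.

```python
import random

pts = [(0, 0), (2, 1), (5, 2), (1, 4), (6, 6), (3, 5), (7, 1)]  # lab = pts[0]
dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def route_cost(order):                         # lab -> points -> back to lab
    tour = [0] + list(order) + [0]
    return sum(dist(pts[tour[k]], pts[tour[k + 1]]) for k in range(len(tour) - 1))

rng = random.Random(42)
pop = [rng.sample(range(1, len(pts)), len(pts) - 1) for _ in range(60)]
for _ in range(300):
    pop.sort(key=route_cost)
    nxt = pop[:10]                             # elitism: keep best routes
    while len(nxt) < 60:
        p1, p2 = rng.sample(pop[:30], 2)       # parents from the better half
        cut = rng.randrange(1, len(p1))        # order crossover (OX-like)
        child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
        if rng.random() < 0.3:                 # swap mutation
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
        nxt.append(child)
    pop = nxt
print(pop[0], route_cost(pop[0]))              # best collection sequence found
```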
Interface of the general fitting tool GENFIT2 in PandaRoot
NASA Astrophysics Data System (ADS)
Prencipe, Elisabetta; Spataro, Stefano; Stockmanns, Tobias; PANDA Collaboration
2017-10-01
PANDA is a planned experiment at FAIR (Darmstadt) with a cooled antiproton beam in the range [1.5, 15] GeV/c, allowing a wide physics program in nuclear and particle physics. It is the only experiment worldwide in that energy regime that combines a solenoid field (B = 2 T) and a dipole field (B = 2 Tm) in a spectrometer with a fixed-target topology. The tracking system of PANDA comprises a high-performance silicon vertex detector, a GEM detector, a straw-tube central tracker, a forward tracking system, and a luminosity monitor. The offline tracking algorithm is developed within the PandaRoot framework, which is part of the FairRoot project. The tool presented here is based on algorithms containing the Kalman filter equations and a deterministic annealing filter. This general fitting tool (GENFIT2) also offers users a Runge-Kutta track representation, and it interfaces with Millepede II (useful for alignment) and RAVE (a vertex finder). It is independent of the detector geometry and the magnetic field map, and is written in C++ object-oriented modular code. Several fitting algorithms are available in GENFIT2, with user-adjustable parameters, making the tool friendly to use; GENFIT2 also checks the fit for convergence. The Kalman-filter-based algorithms have a wide range of applications; among those in particle physics, they can perform extrapolations of track parameters and covariance matrices. The adaptations of the PandaRoot framework needed to connect to GENFIT2 are described, and the impact of GENFIT2 on the physics simulations of PANDA is shown: significant improvement is reported for those channels where good low-momentum tracking is required (pT < 400 MeV/c).
NASA Technical Reports Server (NTRS)
James, Mark Anthony
1999-01-01
A finite element program has been developed to perform quasi-static, elastic-plastic crack growth simulations. The model provides a general framework for mixed-mode I/II elastic-plastic fracture analysis using small strain assumptions and plane stress, plane strain, and axisymmetric finite elements. Cracks are modeled explicitly in the mesh. As the cracks propagate, automatic remeshing algorithms delete the mesh local to the crack tip, extend the crack, and build a new mesh around the new tip. State variable mapping algorithms transfer stresses and displacements from the old mesh to the new mesh. The von Mises material model is implemented in the context of a non-linear Newton solution scheme. The fracture criterion is the critical crack tip opening displacement, and crack direction is predicted by the maximum tensile stress criterion at the crack tip. The implementation can accommodate multiple curving and interacting cracks. An additional fracture algorithm based on nodal release can be used to simulate fracture along a horizontal plane of symmetry. A core of plane strain elements can be used with the nodal release algorithm to simulate the triaxial state of stress near the crack tip. Verification and validation studies compare analysis results with experimental data and published three-dimensional analysis results. Fracture predictions using nodal release for compact tension, middle-crack tension, and multi-site damage test specimens produced accurate results for residual strength and link-up loads. Curving crack predictions using remeshing/mapping were compared with experimental data for an Arcan mixed-mode specimen. Loading angles from 0 degrees to 90 degrees were analyzed. The maximum tensile stress criterion was able to predict the crack direction and path for all loading angles in which the material failed in tension. Residual strength was also accurately predicted for these cases.
High-Speed Digital Scan Converter for High-Frequency Ultrasound Sector Scanners
Chang, Jin Ho; Yen, Jesse T.; Shung, K. Kirk
2008-01-01
This paper presents a high-speed digital scan converter (DSC) capable of providing more than 400 images per second, which is necessary to examine the activity of the mouse heart, whose rate is 5-10 beats per second. To achieve the desired high-speed performance in a cost-effective manner, the DSC adopts a linear interpolation algorithm in which the two samples nearest to each object pixel of the monitor are selected and only angular interpolation is performed. Through computer simulation with the Field II program, its accuracy was investigated by comparison with bilinear interpolation, regarded as the best algorithm in terms of accuracy and processing speed. The simulation results show that the linear interpolation algorithm is capable of providing acceptable image quality, meaning that the difference between the root mean square error (RMSE) values of the linear and bilinear interpolation algorithms is below 1%, provided the sample rate of the envelope samples is at least four times the Nyquist rate for the baseband component of the echo signals. The designed DSC was implemented with a single FPGA (Stratix EP1S60F1020C6, Altera Corporation, San Jose, CA) on a DSC board that is part of a high-speed ultrasound imaging system we developed. The temporal and spatial resolutions of the implemented DSC were evaluated by examining its maximum processing time, using a time stamp indicating when an image is completely formed, and by wire phantom testing, respectively. The experimental results show that the implemented DSC is capable of providing images at a rate of 400 images per second with negligible processing error. PMID:18430449
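A compact sketch of the angle-only interpolation (geometry and data invented; the FPGA version operates on integer pixel streams): each screen pixel is mapped to polar coordinates, the radial sample is taken nearest-neighbor, and only the two beams bracketing the pixel angle are linearly blended.

```python
import math

def scan_convert(env, angles, dr, x, y):
    """env[beam][sample] -> interpolated value at screen point (x, y)."""
    r, th = math.hypot(x, y), math.atan2(x, y)   # angle measured from the y axis
    i = min(range(len(angles) - 1), key=lambda k: abs(angles[k] - th))
    if angles[i] > th and i > 0:
        i -= 1                                    # beams i and i+1 bracket theta
    w = (th - angles[i]) / (angles[i + 1] - angles[i])
    w = min(1.0, max(0.0, w))                     # clamp at the sector edges
    s = min(len(env[0]) - 1, int(round(r / dr)))  # nearest radial sample
    return (1.0 - w) * env[i][s] + w * env[i + 1][s]

angles = [math.radians(a) for a in range(-45, 46, 3)]
env = [[b + s for s in range(100)] for b in range(len(angles))]
print(scan_convert(env, angles, 0.5, 10.0, 30.0))
```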
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.
Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin
2016-12-01
Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper the learning automata (LA) approach is first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, a comparison with several existing well-known algorithms (nondominated sorting genetic algorithm II; decomposition-based multiobjective evolutionary algorithm; decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes; multiobjective optimization by LA; and the multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm is able to find more accurate and more evenly distributed Pareto-optimal fronts than the compared ones.
specsim: A Fortran-77 program for conditional spectral simulation in 3D
NASA Astrophysics Data System (ADS)
Yao, Tingting
1998-12-01
A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
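A one-dimensional miniature of the unconditional core of the method (the conditioning-by-phase-iteration step and the 3D bookkeeping are omitted): amplitudes are fixed by the target spectral density, phases are drawn at random, and an inverse FFT returns a real-valued field with the desired covariance.

```python
import numpy as np

n, dx, a = 256, 1.0, 10.0                    # grid size, spacing, range parameter
k = np.fft.rfftfreq(n, d=dx)
S = np.exp(-(np.pi * a * k) ** 2)            # spectral density of a Gaussian covariance
rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
coef = np.sqrt(S) * np.exp(1j * phase)       # prescribed amplitudes, random phases
field = np.fft.irfft(coef, n)                # Hermitian symmetry handled by irfft
print(field[:5])                             # one unconditional realization
```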
Algorithmic Trading with Developmental and Linear Genetic Programming
NASA Astrophysics Data System (ADS)
Wilson, Garnett; Banzhaf, Wolfgang
A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rises and falls, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this problem. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
The program complex for vocal recognition
NASA Astrophysics Data System (ADS)
Konev, Anton; Kostyuchenko, Evgeny; Yakimuk, Alexey
2017-01-01
This article discusses the possibility of applying a pitch-frequency determination algorithm to note recognition problems. A preliminary study of analogous programs offering a "music recognition" function was carried out. The software package based on the algorithm for pitch frequency calculation was implemented and tested. It was shown that the algorithm allows the notes in the user's vocal performance to be recognized. The sound source can be a single musical instrument, a set of musical instruments, or a human voice humming a tune. The input file is initially presented in the .wav format or is recorded in this format from a microphone. Processing is performed by sequentially determining the pitch frequency and converting its values to notes. Based on the test results, modifications of the algorithms used in the complex were planned.
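The pitch-to-note chain can be illustrated in a few lines (my sketch; the complex's actual pitch algorithm may differ): estimate the period by autocorrelation over one frame, then snap the frequency to the nearest equal-tempered note.

```python
import numpy as np

NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def pitch_autocorr(frame, fs, fmin=80.0, fmax=1000.0):
    # pick the lag with the strongest autocorrelation in the plausible range
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

def freq_to_note(f):
    m = int(round(69 + 12 * np.log2(f / 440.0)))   # nearest MIDI note number
    return NOTES[m % 12] + str(m // 12 - 1)

fs = 44100
t = np.arange(2048) / fs
frame = np.sin(2 * np.pi * 261.63 * t)             # middle C test tone
print(freq_to_note(pitch_autocorr(frame, fs)))     # -> C4
```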
Automata-Based Verification of Temporal Properties on Running Programs
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)
2001-01-01
This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
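A hand-written miniature of such an observer is shown below for the response property G(request -> F ack) interpreted over finite traces (an outstanding request at the end of the trace is a violation); the paper's contribution is generating such automata automatically from arbitrary LTL formulae rather than coding them by hand.

```python
class ResponseMonitor:
    """Finite-trace observer for G(request -> F ack)."""
    def __init__(self):
        self.pending = False                 # a request seen without an ack yet

    def step(self, event):                   # feed one program event
        if event == 'request':
            self.pending = True
        elif event == 'ack':
            self.pending = False

    def end_of_trace_ok(self):
        return not self.pending              # finite-trace acceptance condition

m = ResponseMonitor()
for e in ['request', 'work', 'ack', 'request']:
    m.step(e)
print(m.end_of_trace_ok())                   # False: the last request is unanswered
```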
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
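For readers new to the operator-overloading versus source-transformation distinction, the fragment below sketches the overloading style in miniature using forward-mode dual numbers (adjoint tools apply the same idea in reverse mode, and source-transformation tools such as TAF instead rewrite the model code itself).

```python
class Dual:
    """Forward-mode AD value: carries (value, derivative) through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def model(x):                 # any code built from + and * works unchanged
    return 3 * x * x + 2 * x + 1

d = model(Dual(2.0, 1.0))     # seed dx/dx = 1
print(d.val, d.dot)           # 17.0 and d/dx = 6x + 2 = 14.0
```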
An Algorithm for Controlled Integration of Sound and Text.
ERIC Educational Resources Information Center
Wohlert, Harry S.; McCormick, Martin
1985-01-01
A serious drawback in introducing sound into computer programs for teaching foreign language speech has been the lack of an algorithm to turn off the cassette recorder immediately to keep screen text and audio in synchronization. This article describes a program which solves that problem. (SED)
Soil moisture and temperature algorithms and validation
USDA-ARS?s Scientific Manuscript database
Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...
Dynamic programming algorithms for biological sequence comparison.
Pearson, W R; Miller, W
1992-01-01
Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
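The O(N^2)-time, O(N)-space property is easy to demonstrate for the score computation with gap penalty g = rk: only two rows of the dynamic-programming matrix are kept (a sketch; recovering the alignment itself in linear space additionally needs Hirschberg's divide-and-conquer refinement).

```python
def global_score(a, b, match=1, mismatch=-1, r=-2):
    """Global alignment score with linear gap penalty g = r*k in O(len(b)) space."""
    prev = [j * r for j in range(len(b) + 1)]      # row 0: all gaps
    for i, ca in enumerate(a, 1):
        cur = [i * r]                              # column 0: all gaps
        for j, cb in enumerate(b, 1):
            sub = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(sub, prev[j] + r, cur[j - 1] + r))
        prev = cur                                 # keep only two rows
    return prev[-1]

print(global_score("GATTACA", "GCATGCA"))
```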
Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events.
Botwey, Ransford Henry; Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G
2014-01-01
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on (i) Dempster-Shafer Evidential Theory (DST), (ii) Genetic Algorithms (GA), and (iii) Genetic Programming (GP) were used to merge the complementary performance of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before occurrence of events were 13.0 and 12.1 min, respectively, for hypo- and hyperglycemic events. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
Tracking at High Level Trigger in CMS
NASA Astrophysics Data System (ADS)
Tosi, M.
2016-04-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude in the event rate is needed to reach values compatible with detector readout, offline storage, and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at the HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as for the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup vertices. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of the HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made towards the next data taking in 2015.
Computer simulation of a pilot in V/STOL aircraft control loops
NASA Technical Reports Server (NTRS)
Vogt, William G.; Mickle, Marlin H.; Zipf, Mark E.; Kucuk, Senol
1989-01-01
The objective was to develop a computerized adaptive pilot model for the computer model of the research aircraft, the Harrier II AV-8B V/STOL, with special emphasis on propulsion control. Two versions of the adaptive pilot are given. The first, simply called the Adaptive Control Model (ACM) of a pilot, includes a parameter estimation algorithm for the parameters of the aircraft and an adaptation scheme based on the root locus of the poles of the pilot-controlled aircraft. The second, called the Optimal Control Model of the pilot (OCM), includes an adaptation algorithm and an optimal control algorithm. These computer simulations were developed as a part of the ongoing research program in pilot model simulation supported by NASA Lewis from April 1, 1985 to August 30, 1986 under NASA Grant NAG 3-606 and from September 1, 1986 through November 30, 1988 under NASA Grant NAG 3-729. Once installed, these pilot models permitted the computer simulation to close all of the control loops normally closed by a pilot actually manipulating the control variables. The current version has permitted a baseline comparison of various qualitative and quantitative performance indices for propulsion control, the control loops, and the workload on the pilot. Actual data furnished by NASA for an aircraft flown by a human pilot were compared to the outputs of the computerized pilot, with favorable results.
THE CHANDRA SURVEY OF THE COSMOS FIELD. II. SOURCE DETECTION AND PHOTOMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puccetti, S.; Vignali, C.; Cappelluti, N.
2009-12-01
The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that covers the central contiguous ~0.92 deg² of the COSMOS field. C-COSMOS is the result of a complex tiling, with every position being observed in up to six overlapping pointings (four overlapping pointings in most of the central ~0.45 deg² area with the best exposure, and two overlapping pointings in most of the surrounding area, covering an additional ~0.47 deg²). Therefore, the full exploitation of the C-COSMOS data requires a dedicated and accurate analysis focused on three main issues: (1) maximizing the sensitivity when the point-spread function (PSF) changes strongly among different observations of the same source (from ~1 arcsec up to ~10 arcsec half-power radius); (2) resolving close pairs; and (3) obtaining the best source localization and count rate. We present here our treatment of four key analysis items: source detection, localization, photometry, and survey sensitivity. Our final procedure consists of two steps: (1) a wavelet detection algorithm to find source candidates and (2) a maximum likelihood PSF fitting algorithm to evaluate the source count rates and the probability that each source candidate is a fluctuation of the background. We discuss the main characteristics of this procedure, which was the result of detailed comparisons between different detection algorithms and photometry tools, calibrated with extensive and dedicated simulations.
Cohomology of line bundles: Applications
NASA Astrophysics Data System (ADS)
Blumenhagen, Ralph; Jurke, Benjamin; Rahn, Thorsten; Roschy, Helmut
2012-01-01
Massless modes of both heterotic and Type II string compactifications on compact manifolds are determined by vector bundle valued cohomology classes. Various applications of our recent algorithm for the computation of line bundle valued cohomology classes over toric varieties are presented. For the heterotic string, the prime examples are so-called monad constructions on Calabi-Yau manifolds. In the context of Type II orientifolds, one often needs to compute cohomology for line bundles on finite group action coset spaces, requiring us to generalize our algorithm to this case. Moreover, we exemplify that the different terms in Batyrev's formula and its generalizations can be given a one-to-one cohomological interpretation. Furthermore, we derive a combinatorial closed form expression for two Hodge numbers of a codimension two Calabi-Yau fourfold.
Bellman's GAP--a language and compiler for dynamic programming in sequence analysis.
Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert
2013-03-01
Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive with carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform of 'real-world' bioinformatics tools. Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics.
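The grammar/algebra separation that Bellman's GAP builds into GAP-L can be imitated in plain Python. The sketch below is an illustration of the idea, not GAP-L itself: one Nussinov-style base-pairing recurrence (the "grammar") is evaluated under two different algebras, one maximizing the number of base pairs and one counting structures:

    # One recurrence, two evaluation algebras (pair, skip, select).
    def nussinov(seq, algebra):
        pair, skip, select = algebra
        n = len(seq)
        T = [[None] * n for _ in range(n)]
        def solve(i, j):
            if j <= i:                       # empty or single base
                return select([skip()])
            if T[i][j] is None:
                cands = [solve(i, j - 1)]    # case: j unpaired
                cands += [pair(solve(i, k - 1), solve(k + 1, j - 1))
                          for k in range(i, j)
                          if seq[k] + seq[j] in ("AU", "UA", "GC", "CG")]
                T[i][j] = select(cands)      # case: j paired with some k
            return T[i][j]
        return solve(0, n - 1)

    score = (lambda x, y: x + y + 1, lambda: 0, max)   # max base pairs
    count = (lambda x, y: x * y, lambda: 1, sum)       # number of structures
    print(nussinov("GCACGACG", score), nussinov("GCACGACG", count))

Swapping the algebra tuple changes what is computed without touching the recurrence, which is the reuse pattern the article describes.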
A review of data fusion techniques.
Castanedo, Federico
2013-01-01
The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.
Social Circles Detection from Ego Network and Profile Information
2014-12-19
[Abstract fragmentary in source] ...the algorithm used to infer k-clique communities is exponential, which makes this technique unfeasible when treating egonets with a large number of users... ...problematic when considering RBMs. This inconvenience was solved by implementing a sparsity treatment with the RBM algorithm. (ii) The ground truth was...
NASA Astrophysics Data System (ADS)
Paetz, Tim-Torben
2017-04-01
We characterize Cauchy data sets leading to vacuum space-times with vanishing Mars-Simon tensor. This approach provides an algorithmic procedure to check whether a given initial data set (Σ ,hi j,Ki j) evolves into a space-time which is locally isometric to a member of the Kerr-(A)(dS) family.
NASA Astrophysics Data System (ADS)
Goharian, E.; Gailey, R.; Maples, S.; Azizipour, M.; Sandoval Solis, S.; Fogg, G. E.
2017-12-01
The drought incidents and growing water scarcity in California have a profound effect on human, agricultural, and environmental water needs. California has experienced multi-year droughts, which have caused groundwater overdraft, dropping groundwater levels, and dwindling of major reservoirs. These concerns call for a stringent evaluation of future water resources sustainability and security in the state. To answer this call, the Sustainable Groundwater Management Act (SGMA) was passed in 2014 to ensure sustainable groundwater management in California by 2042. SGMA refers to managed aquifer recharge (MAR) as a key management option, especially in areas with high intra- and inter-annual variation in water availability, to secure the refill of underground water storage and the return of groundwater quality to a desirable condition. The hybrid optimization of an integrated water resources system provides an opportunity to adapt surface reservoir operations to enhance groundwater recharge. Here, to re-operate Folsom Reservoir, the objectives are maximizing the storage in the whole American-Cosumnes watershed and maximizing hydropower generation from Folsom Reservoir. While a linear programming (LP) module maximizes the total groundwater recharge by distributing and spreading water over suitable lands in the basin, a genetic-based algorithm, Non-dominated Sorting Genetic Algorithm II (NSGA-II), layered above it, controls releases from the reservoir to secure hydropower generation, carry-over storage in the reservoir, available water for replenishment, and downstream water requirements. The preliminary results show additional releases from the reservoir for groundwater recharge during high-flow seasons. Moreover, tradeoffs between the objectives indicate that the new operation performs satisfactorily in increasing basin storage, with insignificant effects on the other objectives.
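The core of NSGA-II referenced above is fast non-dominated sorting. Below is a minimal generic sketch of that step for a minimization problem, following Deb et al.'s published scheme rather than this study's code:

    # Partition objective vectors into Pareto fronts (all objectives minimized).
    def non_dominated_sort(points):
        n = len(points)
        dominates = lambda p, q: all(a <= b for a, b in zip(p, q)) and p != q
        dominated_by = [0] * n                    # how many points dominate i
        dominated_set = [[] for _ in range(n)]    # points that i dominates
        for i in range(n):
            for j in range(n):
                if dominates(points[i], points[j]):
                    dominated_set[i].append(j)
                elif dominates(points[j], points[i]):
                    dominated_by[i] += 1
        fronts, current = [], [i for i in range(n) if dominated_by[i] == 0]
        while current:
            fronts.append(current)
            nxt = []
            for i in current:
                for j in dominated_set[i]:
                    dominated_by[j] -= 1
                    if dominated_by[j] == 0:
                        nxt.append(j)
            current = nxt
        return fronts

    # e.g. a recharge/hydropower trade-off, both expressed as minimized costs
    print(non_dominated_sort([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]))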
An outer approximation method for the road network design problem.
Asadi Bagloee, Saeed; Sarvi, Majid
2018-01-01
Best investment in the road infrastructure or the network design is perceived as a fundamental and benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as a bilevel Discrete Network Design Problem (DNDP) of NP-hard computational complexity. We address this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. This results in a mixed-integer nonlinear programming (MINLP) problem, which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method is proven to be highly efficient for solving the DNDP for a large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as the termination criterion), global optimum solutions are quickly reached in most of the cases; otherwise, good solutions (close to the global optimum) are found in early iterations. Comparative analysis of the networks of Gao and Sioux-Falls shows that for such a non-exact method the global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well.
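To make the outer approximation idea concrete, here is a toy sketch on a one-dimensional convex integer program, with the MILP master replaced by enumeration over a small integer range. This is illustrative only; the paper's DNDP master is a genuine MILP:

    # OA on min f(y), y integer: alternate between adding linearization cuts
    # at visited points and minimizing the piecewise-linear under-estimator.
    f  = lambda y: (y - 2.6) ** 2            # convex objective
    df = lambda y: 2 * (y - 2.6)             # its gradient
    Y = range(-10, 11)                       # integer feasible set
    cuts, y_k, best = [], 0, float("inf")
    for _ in range(20):
        best = min(best, f(y_k))                              # upper bound
        cuts.append(lambda y, a=y_k: f(a) + df(a) * (y - a))  # OA cut at y_k
        # "master problem": minimize the max of all cuts over Y
        y_k, lb = min(((y, max(c(y) for c in cuts)) for y in Y),
                      key=lambda t: t[1])
        if lb >= best - 1e-9:                # lower and upper bounds meet
            break
    print(y_k, best)                         # expect y = 3, f = 0.16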
TargetSpy: a supervised machine learning approach for microRNA target prediction.
Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij
2010-05-28
Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs, our method shows superior performance in all classes. In Drosophila melanogaster, not only are our class II and III predictions on par with other algorithms, but notably the class I (no-seed) predictions are only marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Only a few algorithms can predict target sites without demanding a seed match, and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and Drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org.
High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual
NASA Technical Reports Server (NTRS)
Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.
2004-01-01
This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.
González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina
2017-01-01
Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage. PMID:28075364
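A minimal sketch of one social-potential-fields update is given below. The inverse-square repulsion plus weak long-range attraction force law is an assumption for illustration, not the paper's exact gains or exponents:

    import numpy as np

    # Each node moves locally along the net pairwise force: strong short-range
    # repulsion spreads nodes out (coverage), weak attraction keeps them
    # connected.  Pure reactive behavior, no central planner.
    def spf_step(positions, c_rep=1.0, c_att=0.02, dt=0.1):
        new = positions.copy()
        for i, p in enumerate(positions):
            force = np.zeros(2)
            for j, q in enumerate(positions):
                if i == j:
                    continue
                d = q - p
                r = np.linalg.norm(d) + 1e-9
                force += (c_att * r - c_rep / r**2) * (d / r)
            new[i] = p + dt * force
        return new

    pts = np.random.rand(12, 2) * 10.0   # 12 nodes in a 10x10 area
    for _ in range(200):
        pts = spf_step(pts)

With these gains the pairwise equilibrium spacing is (c_rep/c_att)^(1/3) ≈ 3.7 units, so the swarm settles into a roughly uniform lattice over the area.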
GCOM-W soil moisture and temperature algorithms and validation
USDA-ARS?s Scientific Manuscript database
Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear-programming computer program developed to calculate the least value of a function of n variables subject to general constraints, both equalities and inequalities. The first set of constraints are equalities and the remaining constraints are inequalities. The program uses an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.
Vega roll and attitude control system algorithms trade-off study
NASA Astrophysics Data System (ADS)
Paulino, N.; Cuciniello, G.; Cruciani, I.; Corraro, F.; Spallotta, D.; Nebula, F.
2013-12-01
This paper describes the trade-off study for the selection of the most suitable algorithms for the Roll and Attitude Control System (RACS) within the FPS-A program, aimed at developing the new Flight Program Software of the VEGA launcher. Two algorithms were analyzed: Switching Lines (SL) and Quaternion Feedback Regulation. Using a development simulation tool that models two critical flight phases, the Long Coasting Phase (LCP) and the Payload Release (PLR) phase, both algorithms were assessed with Monte Carlo batch simulations for both phases. The statistical outcomes demonstrate a 100 percent success rate for Quaternion Feedback Regulation and support the choice of this method.
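A generic quaternion feedback law of the textbook form u = -Kp·q_err_vec - Kd·ω can illustrate the regulation approach. The FPS-A gains and exact structure are not given in the abstract, so everything below is an illustrative sketch:

    import numpy as np

    def quat_mul(a, b):
        # Hamilton product of quaternions (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def control_torque(q, q_target, omega, kp=2.0, kd=1.5):
        # error quaternion q_err = conj(q_target) * q (unit quaternions)
        q_err = quat_mul(np.array([q_target[0], *(-q_target[1:])]), q)
        if q_err[0] < 0:                     # take the shorter rotation
            q_err = -q_err
        return -kp * q_err[1:] - kd * omega  # restoring torque + rate damping

    q = np.array([0.9239, 0.3827, 0.0, 0.0])   # 45 deg roll error
    print(control_torque(q, np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3)))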
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
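The bound can be illustrated on a toy two-scenario problem: for weights w_s with zero probability-weighted mean (as produced during PHA iterations), solving each scenario subproblem without the nonanticipativity constraint and averaging gives a valid lower bound. The numbers below are made up for illustration, not the paper's test instances:

    # Toy problem: choose integer capacity x in {0..5}; scenario s has
    # per-unit cost c and a shortfall penalty against demand d.
    scen = [(0.5, 3.0, 2), (0.5, 1.0, 5)]     # (prob, cost, demand)
    cost = lambda x, c, d: c * x + 4.0 * max(d - x, 0)
    w = [+0.6, -0.6]                          # example PHA weights, E[w] = 0
    lb = sum(p * min(cost(x, c, d) + w_s * x for x in range(6))
             for (p, c, d), w_s in zip(scen, w))
    opt = min(sum(p * cost(x, c, d) for p, c, d in scen) for x in range(6))
    print(lb, opt)                            # lb = 4.6 <= opt = 10.0

Better weights (those closer to optimal duals) tighten the bound, which is why the quality of the bound improves as the PHA iterations progress.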
Application of majority voting and consensus voting algorithms in N-version software
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.
2018-05-01
N-version programming is one of the most common techniques used to improve the reliability of software by building in fault tolerance and redundancy and decreasing common-cause failures. N different equivalent software versions are developed by N different and isolated workgroups from the same software specifications. The versions solve the same task and return results that have to be compared to determine the correct result. Decisions of the N different versions are evaluated by a voting algorithm, the so-called voter. In this paper, two of the most commonly used software voting algorithms, the majority voting algorithm and the consensus voting algorithm, are studied. The distinctive features of N-version programming with majority voting and N-version programming with consensus voting are described. These two algorithms make a decision about the correct result on the basis of the agreement matrix. However, if the equivalence relation on the agreement matrix is not satisfied, it is impossible to make a decision. It is shown that the agreement matrix can be transformed into an appropriate form, in which the equivalence relation is satisfied, by using Boolean compositions.
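The two voters differ only in how large the winning agreement group must be. A minimal sketch follows, using exact equality as the agreement relation; real systems may instead compare results within a tolerance:

    from collections import Counter

    def majority_vote(outputs):
        # winner must agree with an absolute majority of the N versions
        value, count = Counter(outputs).most_common(1)[0]
        return value if count > len(outputs) / 2 else None

    def consensus_vote(outputs):
        # largest agreement group wins even without an absolute majority;
        # ties between equally large groups are left undecided here
        groups = Counter(outputs)
        value, count = groups.most_common(1)[0]
        top = [v for v, c in groups.items() if c == count]
        return value if len(top) == 1 else None

    print(majority_vote([3, 3, 4, 3, 5]))    # 3  (absolute majority)
    print(consensus_vote([3, 3, 4, 4, 5]))   # None (tied groups)
    print(consensus_vote([3, 3, 4, 5, 6]))   # 3  (largest group, no majority)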
A ’Multiple Pivoting’ Algorithm for Goal-Interval Programming Formulations.
1980-03-01
Research Report CCS 355, "A 'Multiple Pivoting' Algorithm for Goal-Interval Programming Formulations," by R. Armstrong*, A. Charnes*, W. Cook..., J. Godfrey***, March 1980. *The University of Texas at Austin; **York University, Downsview, Ontario, Canada; ***Washington, DC. This research was partly... areas. However, the main direction of goal programming research has been in formulating models instead of seeking procedures that would provide...
Mixed-Integer Nonconvex Quadratic Optimization Relaxations and Performance Analysis
2016-10-11
"Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization" (W. Bian, X. Chen, and Ye), Math Programming, 149 (2015) 301-327. ... (Chen, Ge, Wang, Ye), Math Programming, 143 (1-2) (2014) 371-383. This paper resolved an important open question in cardinality constrained... "Statistical Performance, and Algorithmic Theory for Local Solutions" (H. Liu, T. Yao, R. Li, Y. Ye), manuscript, 2nd revision in Math Programming.
Engineering calculations for solving the orbital allotment problem
NASA Technical Reports Server (NTRS)
Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.
1988-01-01
Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.
A Kind of Nonlinear Programming Problem Based on Mixed Fuzzy Relation Equations Constraints
NASA Astrophysics Data System (ADS)
Li, Jinquan; Feng, Shuang; Mi, Honghai
In this work, a kind of nonlinear programming problem with a non-differentiable objective function, under constraints expressed by a system of mixed fuzzy relation equations, is investigated. First, some properties of this kind of optimization problem are obtained. Then, a polynomial-time algorithm for this kind of optimization problem is proposed based on these properties. Furthermore, we show that this algorithm is optimal for the optimization problem considered in this paper. Finally, numerical examples are provided to illustrate our algorithm.
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
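Posed this way, the jet-select problem maps directly onto a standard LP solver. A sketch with made-up jet acceleration vectors (not Shuttle data) using SciPy:

    import numpy as np
    from scipy.optimize import linprog

    # Find firing times t >= 0 so the jets' angular accelerations sum to the
    # requested rate change while minimizing fuel ~ total on-time.
    A = np.array([[ 1.0, -1.0,  0.2,  0.0],    # each column: one jet's
                  [ 0.0,  0.5, -0.5,  1.0],    # acceleration vector
                  [ 0.3,  0.3,  1.0, -1.0]])   # (roll, pitch, yaw)
    rate_change = np.array([0.24, 0.05, 0.22])
    fuel_rate = np.ones(A.shape[1])            # fuel per second, per jet
    res = linprog(fuel_rate, A_eq=A, b_eq=rate_change,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    print(res.x, res.fun)                      # firing times, fuel cost

A Simplex-type solution is attractive here because it is naturally sparse: an optimal basic solution fires at most as many jets as there are controlled axes.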
Strategic Control Algorithm Development : Volume 4A. Computer Program Report.
DOT National Transportation Integrated Search
1974-08-01
A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...
Strategic Control Algorithm Development : Volume 4B. Computer Program Report (Concluded)
DOT National Transportation Integrated Search
1974-08-01
A description of the strategic algorithm evaluation model is presented, both at the user and programmer levels. The model representation of an airport configuration, environmental considerations, the strategic control algorithm logic, and the airplan...
Parallel transformation of K-SVD solar image denoising algorithm
NASA Astrophysics Data System (ADS)
Liang, Youwen; Tian, Yu; Li, Mei
2017-02-01
The images obtained by observing the Sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple atoms, rather than one, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easily ported to multi-core platforms.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for developing logic designs together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, along with algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
Beginning with the end in mind: cultivating minority nurse leaders.
Carter, Brigit Maria; Powell, Dorothy L; Derouin, Anne L; Cusatis, Julie
2015-01-01
In response to the need for increased racial and ethnic diversity in the nursing profession, the Duke University School of Nursing (DUSON) established the Making a Difference in Nursing II (MADIN II) Program. The aim of the MADIN II Program is to improve the diversity of the nursing workforce by expanding nursing education opportunities for economically disadvantaged underrepresented minority (URM) students to prepare for, enroll in, and graduate from the DUSON's Accelerated Bachelor of Science in Nursing program. Adapted from the highly successful Meyerhoff Scholarship Program model, the program is designed to cultivate URM nursing graduates with advanced knowledge and leadership skills who can address health disparities and positively influence health care issues currently plaguing underrepresented populations. The article discusses the MADIN II framework consisting of four unique components: recruitment of students, the Summer Socialization Nursing Preentry Program, the Continued Connectivity Program, and the Succeed to Excellence Program, providing a framework for other academic programs interested in cultivating a pipeline of minority nurse leaders. Copyright © 2015 Elsevier Inc. All rights reserved.
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
40 CFR 72.70 - Relationship to title V operating permit program.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Acid Rain Phase II Implementation § 72.70 Relationship to... operating permit programs and acceptance of State Acid Rain programs, the procedure for including State Acid... of an accepted State program, to issue Phase II Acid Rain permits. (b) Relationship to operating...
20 CFR 628.540 - Volunteer program.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR PROGRAMS UNDER TITLE II OF THE JOB TRAINING PARTNERSHIP ACT Program Design Requirements for Programs Under Title II of the Job Training Partnership Act § 628.540 Volunteer program. Pursuant to sections 204(c)(6) and 264(d)(7) of the...
This document describes the overall scope of the AEATF II program, demonstrates the need for additional human exposure monitoring data and explains the proposed methodology for the exposure monitoring studies proposed for conduct by the AEATF II.
40 CFR 147.3200 - Fort Peck Indian Reservation: Assiniboine & Sioux Tribes-Class II wells.
Code of Federal Regulations, 2014 CFR
2014-07-01
...: Assiniboine & Sioux Tribes-Class II wells. 147.3200 Section 147.3200 Protection of Environment ENVIRONMENTAL... INJECTION CONTROL PROGRAMS Assiniboine and Sioux Tribes § 147.3200 Fort Peck Indian Reservation: Assiniboine & Sioux Tribes—Class II wells. The UIC program for Class II injection wells on all lands within the...
40 CFR 147.3200 - Fort Peck Indian Reservation: Assiniboine & Sioux Tribes-Class II wells.
Code of Federal Regulations, 2012 CFR
2012-07-01
...: Assiniboine & Sioux Tribes-Class II wells. 147.3200 Section 147.3200 Protection of Environment ENVIRONMENTAL... INJECTION CONTROL PROGRAMS Assiniboine and Sioux Tribes § 147.3200 Fort Peck Indian Reservation: Assiniboine & Sioux Tribes—Class II wells. The UIC program for Class II injection wells on all lands within the...
40 CFR 147.3200 - Fort Peck Indian Reservation: Assiniboine & Sioux Tribes-Class II wells.
Code of Federal Regulations, 2013 CFR
2013-07-01
...: Assiniboine & Sioux Tribes-Class II wells. 147.3200 Section 147.3200 Protection of Environment ENVIRONMENTAL... INJECTION CONTROL PROGRAMS Assiniboine and Sioux Tribes § 147.3200 Fort Peck Indian Reservation: Assiniboine & Sioux Tribes—Class II wells. The UIC program for Class II injection wells on all lands within the...
Modified Polar-Format Software for Processing SAR Data
NASA Technical Reports Server (NTRS)
Chen, Curtis
2003-01-01
HMPF is a computer program that implements a modified polar-format algorithm for processing data from spaceborne synthetic-aperture radar (SAR) systems. Unlike prior polar-format processing algorithms, this algorithm is based on the assumption that the radar signal wavefronts are spherical rather than planar. The algorithm provides for resampling of SAR pulse data from slant range to radial distance from the center of a reference sphere that is nominally the local Earth surface. Then, invoking the projection-slice theorem, the resampled pulse data are Fourier-transformed over radial distance, arranged in the wavenumber domain according to the acquisition geometry, resampled to a Cartesian grid, and inverse-Fourier-transformed. The result of this process is the focused SAR image. HMPF, and perhaps other programs that implement variants of the algorithm, may give better accuracy than do prior algorithms for processing strip-map SAR data from high altitudes and may give better phase preservation relative to prior polar-format algorithms for processing spotlight-mode SAR data.
XTALOPT: An open-source evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Lonie, David C.; Zurek, Eva
2011-02-01
The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface.
Program summary:
Program title: XTALOPT
Catalogue identifier: AEGX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 36 849
No. of bytes in distributed program, including test data, etc.: 1 149 399
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux
Classification: 7.7
External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely available to the scientific community for use and collaboration under the GNU Public License.
Running time: User dependent. The program runs until stopped by the user.
Firefly Mating Algorithm for Continuous Optimization Problems.
Ritthipakdee, Amarita; Thammano, Arit; Premasathian, Nol; Jitkongchuen, Duangjai
2017-01-01
This paper proposes a swarm intelligence algorithm, called firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses genetic algorithm as the core of the algorithm. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following 2 mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperms for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm with these functions were higher than those of other algorithms and the proposed algorithm also required fewer numbers of iterations to reach the global optima.
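The mating-pair selection can be sketched as distance-weighted random pairing with per-individual capacities. The attraction law below is an assumption for illustration, not the paper's exact formula:

    import numpy as np

    # Females mate until their "spermatheca" capacity is full; each male can
    # serve only a limited number of matings.  Closer (brighter) males are
    # chosen with higher probability.
    def select_pairs(males, females, male_capacity=3, female_capacity=2):
        pairs = []
        sperm = {i: male_capacity for i in range(len(males))}
        for j, f in enumerate(females):
            for _ in range(female_capacity):
                avail = [i for i in sperm if sperm[i] > 0]
                if not avail:
                    return pairs
                d2 = [np.sum((males[i] - f) ** 2) for i in avail]
                attract = np.array([1.0 / (1.0 + d) for d in d2])
                i = int(np.random.choice(avail, p=attract / attract.sum()))
                sperm[i] -= 1
                pairs.append((i, j))     # parents for a crossover step
        return pairs

    males = np.random.rand(5, 2)         # candidate solutions in 2-D
    females = np.random.rand(5, 2)
    print(select_pairs(males, females))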
Supernova Photometric Lightcurve Classification
NASA Astrophysics Data System (ADS)
Zaidi, Tayeb; Narayan, Gautham
2016-01-01
This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves, and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data were primarily drawn from the Dark Energy Survey (DES) simulated data, created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data, and applying a Random Forest classifier algorithm to distinguish between supernova types. We examine the impact of Principal Component Analysis to reduce the dimensionality of the dataset, for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure-of-merit for the DES data in the r band was 60% for binary classification (Type I vs II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
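The classification stage maps onto a standard Random Forest workflow. A sketch with randomly generated stand-in features (real light-curve summary features would replace them):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Feature extraction from the light curves is the hard part and is only
    # faked here with random numbers; labels are the two broad SN types.
    X = np.random.rand(500, 20)              # 20 summary features per object
    y = np.random.choice(["I", "II"], 500)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))             # ~0.5 on random features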
SASfit: a tool for small-angle scattering data analysis using a library of analytical expressions.
Breßler, Ingo; Kohlbrecher, Joachim; Thünemann, Andreas F
2015-10-01
SASfit is one of the mature programs for small-angle scattering data analysis and has been available for many years. This article describes the basic data processing and analysis workflow along with recent developments in the SASfit program package (version 0.94.6). They include (i) advanced algorithms for reduction of oversampled data sets, (ii) improved confidence assessment in the optimized model parameters and (iii) a flexible plug-in system for custom user-provided models. A scattering function of a mass fractal model of branched polymers in solution is provided as an example for implementing a plug-in. The new SASfit release is available for major platforms such as Windows, Linux and MacOS. To facilitate usage, it includes comprehensive indexed documentation as well as a web-based wiki for peer collaboration and online videos demonstrating basic usage. The use of SASfit is illustrated by interpretation of the small-angle X-ray scattering curves of monomodal gold nanoparticles (NIST reference material 8011) and bimodal silica nanoparticles (EU reference material ERM-FD-102).
The Snapshot A Star SurveY (SASSY)
NASA Astrophysics Data System (ADS)
Garani, Jasmine I.; Nielsen, Eric; Marchis, Franck; Liu, Michael C.; Macintosh, Bruce; Rajan, Abhijith; De Rosa, Robert J.; Jinfei Wang, Jason; Esposito, Thomas M.; Best, William M. J.; Bowler, Brendan; Dupuy, Trent; Ruffio, Jean-Baptiste
2018-01-01
The Snapshot A Star Survey (SASSY) is an adaptive optics survey conducted using NIRC2 on the Keck II telescope to search for young, self-luminous planets and brown dwarfs (M > 5 MJup) around high mass stars (M > 1.5 M⊙). We present the results of a custom data reduction pipeline developed for the coronagraphic observations of our 200 target stars. Our data analysis method includes basic near infrared data processing (flat-field correction, bad pixel removal, distortion correction) as well as performing PSF subtraction through a Reference Differential Imaging algorithm based on a library of PSFs derived from the observations using the pyKLIP routine. We present the results from the pipeline of a few stars from the survey with analysis of candidate companions. SASSY is sensitive to companions 600,000 times fainter than the host star within the inner few arcseconds, allowing us to detect companions with masses ~8 MJup at age 110 Myr. This work was supported by the Leadership Alliance's Summer Research Early Identification Program at Stanford University, the NSF REU program at the SETI Institute and NASA grant NNX14AJ80G.
DEGAS: Dynamic Exascale Global Address Space Programming Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James
The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds and/or communication-avoiding algorithms (that either meet the lower bound or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning, and genomics.
Accurate multiple sequence-structure alignment of RNA sequences using combinatorial optimization.
Bauer, Markus; Klau, Gunnar W; Reinert, Knut
2007-07-27
The discovery of functional non-coding RNA sequences has led to an increasing interest in algorithms related to RNA analysis. Traditional sequence alignment algorithms, however, fail at computing reliable alignments of low-homology RNA sequences. The spatial conformation of RNA sequences largely determines their function, and therefore RNA alignment algorithms have to take structural information into account. We present a graph-based representation for sequence-structure alignments, which we model as an integer linear program (ILP). We sketch how we compute an optimal or near-optimal solution to the ILP using methods from combinatorial optimization, and present results on a recently published benchmark set for RNA alignments. The implementation of our algorithm yields better alignments in terms of two published scores than the other programs that we tested; this is especially the case with an increasing number of input sequences. Our program LARA is freely available for academic purposes from http://www.planet-lisa.net.
Automatic Data Distribution for CFD Applications on Structured Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Yan, Jerry
1999-01-01
Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms having complexity O(n⁴) in the worst case.
Genomes to natural products PRediction Informatics for Secondary Metabolomes (PRISM).
Skinnider, Michael A; Dejong, Chris A; Rees, Philip N; Johnston, Chad W; Li, Haoxin; Webster, Andrew L H; Wyatt, Morgan A; Magarvey, Nathan A
2015-11-16
Microbial natural products are an invaluable source of evolved bioactive small molecules and pharmaceutical agents. Next-generation and metagenomic sequencing indicates untapped genomic potential, yet high rediscovery rates of known metabolites increasingly frustrate conventional natural product screening programs. New methods to connect biosynthetic gene clusters to novel chemical scaffolds are therefore critical to enable the targeted discovery of genetically encoded natural products. Here, we present PRISM, a computational resource for the identification of biosynthetic gene clusters, prediction of genetically encoded nonribosomal peptides and type I and II polyketides, and bio- and cheminformatic dereplication of known natural products. PRISM implements novel algorithms which render it uniquely capable of predicting type II polyketides, deoxygenated sugars, and starter units, making it a comprehensive genome-guided chemical structure prediction engine. A library of 57 tailoring reactions is leveraged for combinatorial scaffold library generation when multiple potential substrates are consistent with biosynthetic logic. We compare the accuracy of PRISM to existing genomic analysis platforms. PRISM is an open-source, user-friendly web application available at http://magarveylab.ca/prism/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Ruan, Junhu; Wang, Xuping; Shi, Yan
2014-01-01
We present a two-stage approach for the "helicopters and vehicles" intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is. PMID:25350005
Evaluation of a Text Compression Algorithm Against Computer-Aided Instruction (CAI) Material.
ERIC Educational Resources Information Center
Knight, Joseph M., Jr.
This report describes the initial evaluation of a text compression algorithm against computer-assisted instruction (CAI) material. A review of some concepts related to statistical text compression is followed by a detailed description of a practical text compression algorithm. A simulation of the algorithm was programmed and used to obtain…
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and pressing public safety issue. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time, and fuel consumption. First, we present the risk model, travel time model, and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
Koneru, Jayanthi N; Swerdlow, Charles D; Ploux, Sylvain; Sharma, Parikshit S; Kaszala, Karoly; Tan, Alex Y; Huizar, Jose F; Vijayaraman, Pugazhendi; Kenigsberg, David; Ellenbogen, Kenneth A
2017-02-01
Implantable cardioverter defibrillators (ICDs) must establish a balance between delivering appropriate shocks for ventricular tachyarrhythmias and withholding inappropriate shocks for lead-related oversensing ("noise"). To improve the specificity of ICD therapy, manufacturers have developed proprietary algorithms that detect lead noise. The SecureSense™ RV Lead Noise discrimination algorithm (St. Jude Medical, St. Paul, MN, USA) is designed to differentiate oversensing due to lead failure from ventricular tachyarrhythmias and to withhold therapies in the presence of sustained lead-related oversensing. We report 5 patients in whom appropriate ICD therapy was withheld due to the operation of the SecureSense algorithm and explain the mechanism for inhibition of therapy in each case. Limitations of algorithms designed to increase ICD therapy specificity, especially the SecureSense algorithm, are analyzed. The SecureSense algorithm can withhold appropriate therapies for ventricular arrhythmias due to design and programming limitations. Electrophysiologists should have a thorough understanding of the SecureSense algorithm before routinely programming it, and should understand the implications of ventricular arrhythmia misclassification. © 2016 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Streit, Samuel Allen
The aim of this study is to trace how the Title II-C program has facilitated scholarly access to research materials across the United States. It is intended to give evidence of the importance of the Title II-C Program to libraries' efforts toward developing, preserving, and sharing their resources with the nation's scholars. The study consists of…
Fan, Mingyi; Li, Tongjun; Hu, Jiwei; Cao, Rensheng; Wei, Xionghui; Shi, Xuedan; Ruan, Wenqian
2017-01-01
Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites were synthesized in the present study by a chemical deposition method and were then characterized by various techniques, such as Fourier-transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). The nZVI/rGO composites were used for Cd(II) removal from aqueous solutions in batch mode at different initial Cd(II) concentrations, initial pH values, contact times, and operating temperatures. Response surface methodology (RSM) and an artificial neural network hybridized with a genetic algorithm (ANN-GA) were used for modeling the removal efficiency of Cd(II) and optimizing the four removal process variables. The average prediction errors for the RSM and ANN-GA models were 6.47% and 1.08%, respectively. Although both models proved reliable in predicting the removal efficiency of Cd(II), the ANN-GA model was found to be more accurate than the RSM model. In addition, experimental data were fitted to the Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) isotherms; the Cd(II) adsorption was best fitted by the Langmuir isotherm. Examination of thermodynamic parameters revealed that the removal process was spontaneous and exothermic in nature. Furthermore, the pseudo-second-order model described the kinetics of Cd(II) removal with a better R² value than the pseudo-first-order model. PMID:28772901
Bellucci, Michael A; Coker, David F
2011-07-28
We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two levels. The lower-level genetic programs optimize coevolving populations in parallel, while the higher-level genetic program (HLGP) optimizes the genetic operator probabilities of the lower-level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation, or combination of mutations, that most effectively increases the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics.
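The two-level control idea can be sketched compactly. In the toy Python below, several lower-level populations evolve polynomial coefficients against synthetic target data while an upper-level loop adapts each population's mutation strength from its recent fitness gains. The real PMLGP evolves function trees and uses a genetic program, not this simple heuristic, at the upper level; the target function, rates, and schedule are illustrative.

```python
import random

xs = [i / 10 for i in range(-20, 21)]
target = [x**3 - 2 * x for x in xs]                 # toy "ab initio" data

def fitness(coeffs):
    err = sum((sum(c * x**k for k, c in enumerate(coeffs)) - t) ** 2
              for x, t in zip(xs, target))
    return -err                                      # higher is better

def evolve(pop, rate, gens=20):
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: len(pop) // 2]                 # keep the better half
        pop = elite + [[c + random.gauss(0, rate) for c in random.choice(elite)]
                       for _ in range(len(pop) - len(elite))]
    return pop

populations = [[[random.uniform(-3, 3) for _ in range(4)] for _ in range(30)]
               for _ in range(4)]
rates = [0.5] * 4                                    # per-population mutation sigma

for epoch in range(10):
    before = [max(map(fitness, p)) for p in populations]
    populations = [evolve(p, r) for p, r in zip(populations, rates)]
    after = [max(map(fitness, p)) for p in populations]
    # Upper level: boost the operator strength where progress stalled,
    # shrink it where the population is already converging.
    rates = [r * (1.5 if a - b < 1e-3 else 0.7)
             for r, b, a in zip(rates, before, after)]

best = max((ind for p in populations for ind in p), key=fitness)
print("best coefficients:", [round(c, 2) for c in best])
```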
ERIC Educational Resources Information Center
National Field Research Center Inc., Iowa City, IA.
This report, together with volume II (multiple degree programs), details 105 post-secondary wastewater treatment programs from 33 states. These programs represent only a sample of the various programs available nationwide. Enrollment and graduate statistics are presented. The total number of faculty involved in all the programs surveyed was…
Code of Federal Regulations, 2010 CFR
2010-04-01
... Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR PROGRAMS UNDER TITLE II OF THE JOB TRAINING PARTNERSHIP ACT Program Design Requirements for Programs Under Title II of the Job Training Partnership Act § 628.525 Limitations. Neither eligibility for nor participation in a JTPA program...
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
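For readers unfamiliar with finite-trace LTL checking, the Python sketch below evaluates LTL formulae directly over a recorded finite trace by recursion on the finite-trace semantics. The paper's tool instead compiles each formula into a finite-state observer automaton for online monitoring; this recursive form only illustrates what such observers decide. The trace and the specification are invented.

```python
def holds(f, trace, i=0):
    op, *args = f
    if op == "atom":   return i < len(trace) and args[0] in trace[i]
    if op == "not":    return not holds(args[0], trace, i)
    if op == "and":    return holds(args[0], trace, i) and holds(args[1], trace, i)
    if op == "next":   return i + 1 < len(trace) and holds(args[0], trace, i + 1)
    if op == "always":     return all(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "eventually": return any(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "until":
        return any(holds(args[1], trace, j) and
                   all(holds(args[0], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

# Trace: each step is the set of propositions observed to hold there.
trace = [{"req"}, {"req"}, {"ack"}, set()]
spec = ("always", ("not", ("and", ("atom", "req"),
                           ("not", ("eventually", ("atom", "ack"))))))
print(holds(spec, trace))   # True: every request is eventually acknowledged
```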
Global Climate Monitoring with the EOS PM-Platform's Advanced Microwave Scanning Radiometer (AMSR-E)
NASA Technical Reports Server (NTRS)
Spencer, Roy W.
2002-01-01
The Advanced Microwave Scanning Radiometer (AMSR-E) is being built by NASDA to fly on NASA's PM Platform (now called Aqua) in December 2000. This is in addition to a copy of AMSR that will be launched on Japan's ADEOS-II satellite in 2001. The AMSRs improve upon the window frequency radiometer heritage of the SSM/I and SMMR instruments. Major improvements over those instruments include channels spanning the 6.9 GHz to 89 GHz frequency range, and higher spatial resolution from a 1.6 m reflector (AMSR-E) and 2.0 m reflector (ADEOS-II AMSR). The ADEOS-II AMSR also will have 50.3 and 52.8 GHz channels, providing sensitivity to lower tropospheric temperature. NASA funds an AMSR-E Science Team to provide algorithms for the routine production of a number of standard geophysical products. These products will be generated by the AMSR-E Science Investigator-led Processing System (SIPS) at the Global Hydrology Resource Center (GHRC) in Huntsville, Alabama. While there is a separate NASDA-sponsored activity to develop algorithms and produce products from AMSR, as well as a Joint (NASDA-NASA) AMSR Science Team activity, here I will review only the AMSR-E Team's algorithms and how they benefit from the new capabilities that AMSR-E will provide. The US Team's products will be archived at the National Snow and Ice Data Center (NSIDC).
Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl
2016-01-01
The D flip-flop is a digital circuit used as a timing element in many sophisticated designs; optimum performance, with the lowest power consumption and acceptable delay time, is therefore a critical issue. The layout of the newly proposed dual-edge-triggered static D flip-flop circuit is formulated as a multi-objective optimization problem. An optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm-II (NSGA-II) by adaptively controlling its exploration and exploitation parameters. Using the proposed Fuzzy NSGA-II algorithm, values for the MOSFET channel widths and the power supply closer to the optimum are discovered in the search space than with ordinary NSGA variants. Moreover, the design parameters (NMOS and PMOS channel widths and supply voltage) are linked to the performance parameters (average power consumption and propagation delay time); the required mathematical background is presented in this study. Based on the optimum design parameter values discovered, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
Automatic classification of protein structures relying on similarities between alignments
2012-01-01
Background: Identification of protein structural cores requires isolation of sets of proteins that all share the same subset of structural motifs. In the context of an ever-growing number of available 3D protein structures, standard and automatic clustering algorithms require adaptations to allow for efficient identification of such sets of proteins. Results: When considering a pair of 3D structures, they are stated as similar or not according to the local similarities of their matching substructures in a structural alignment. This binary relation can be represented in a graph of similarities, where a node represents a 3D protein structure and an edge states that two 3D protein structures are similar. Classifying proteins into structural families can therefore be viewed as a graph clustering task. Unfortunately, because such a graph encodes only pairwise similarity information, clustering algorithms may place in the same cluster a subset of 3D structures that do not share a common substructure. To overcome this drawback, we first define a ternary similarity on a triple of 3D structures as a constraint to be satisfied by the graph of similarities. Such a ternary constraint takes into account similarities between pairwise alignments, so as to ensure that the three involved protein structures do have some common substructure. We propose a modification algorithm that eliminates edges from the original graph of similarities and gives a reduced graph in which no ternary constraints are violated. Our approach is thus to first build a graph of similarities, then reduce the graph according to the modification algorithm, and finally apply a standard graph clustering algorithm to the reduced graph. This method was used for classifying ASTRAL-40 non-redundant protein domains, identifying significant pairwise similarities with Yakusa, a program devised for rapid 3D structure alignments. Conclusions: We show that filtering similarities prior to a standard graph-based clustering process by applying ternary similarity constraints (i) improves the separation of proteins of different classes and consequently (ii) improves the classification quality of standard graph-based clustering algorithms with respect to the reference classification SCOP. PMID:22974051
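The build-reduce-cluster pipeline can be sketched schematically. In the Python below, edges of a small similarity graph are kept only when a ternary test over some triangle supports them, and connected components of the reduced graph stand in for the final clustering step. The similarity values are invented, and the placeholder ternary test (a shared strong neighbor) merely stands in for the paper's comparison of the pairwise structural alignments themselves.

```python
from collections import defaultdict

edges = {("A","B"): 0.9, ("B","C"): 0.8, ("A","C"): 0.7,
         ("C","D"): 0.4, ("D","E"): 0.9}
adj = defaultdict(dict)
for (u, v), w in edges.items():
    adj[u][v] = adj[v][u] = w

def ternary_ok(u, v, thr=0.6):
    # Placeholder ternary constraint: u and v must share a neighbor w
    # whose two supporting similarities both exceed a threshold.
    return any(adj[u].get(w, 0) > thr and adj[v].get(w, 0) > thr
               for w in adj if w not in (u, v))

reduced = defaultdict(set)
for (u, v) in edges:
    if ternary_ok(u, v):
        reduced[u].add(v); reduced[v].add(u)

def components(nodes, nbrs):
    seen, out = set(), []
    for n in nodes:
        if n in seen: continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp: continue
            comp.add(x); stack.extend(nbrs[x])
        seen |= comp; out.append(comp)
    return out

print(components(list(adj), reduced))   # e.g. {A, B, C} plus singletons
```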
ERIC Educational Resources Information Center
Office of Education (DHEW), Washington, DC.
This is the ninth report describing notable reading projects funded under Title II of the Elementary and Secondary Education Act. Projects combining Title II reading projects with a give-away book program in Alabama, Illinois, Indiana, Maryland, Massachusetts, and New Jersey are described. Although Title II funds cannot be used to provide books to…
Ziacchi, Matteo; Palmisano, Pietro; Biffi, Mauro; Ricci, Renato P; Landolina, Maurizio; Zoni-Berisso, Massimo; Occhetta, Eraldo; Maglia, Giampiero; Botto, Gianluca; Padeletti, Luigi; Boriani, Giuseppe
2018-04-01
Modern pacemakers have an increasing number of programmable parameters and specific algorithms designed to optimize pacing therapy in relation to the individual characteristics of patients. When choosing the most appropriate pacemaker type and programming, the following variables must be taken into account: the type of bradyarrhythmia at the time of pacemaker implantation; the cardiac chamber requiring pacing, and the percentage of pacing actually needed to correct the rhythm disorder; the possible association of multiple rhythm disturbances and conduction diseases; and the evolution of conduction disorders during follow-up. The goals of device programming are to preserve or restore the heart rate response to metabolic and hemodynamic demands; to maintain physiological conduction; to maximize device longevity; and to detect, prevent, and treat atrial arrhythmia. In patients with sinus node disease, the optimal pacing mode is DDDR. Based on all the available evidence, in this setting we consider appropriate the activation of the following algorithms: rate-responsive function in patients with chronotropic incompetence; algorithms to maximize intrinsic atrioventricular conduction in the absence of atrioventricular block; mode-switch algorithms; algorithms for autoadaptive management of the atrial pacing output; and algorithms for the prevention and treatment of atrial tachyarrhythmias in the subgroup of patients with atrial tachyarrhythmias/atrial fibrillation. The purpose of this two-part consensus document is to provide specific suggestions (based on an extensive literature review) on appropriate pacemaker settings in relation to patients' clinical features.
ERIC Educational Resources Information Center
Vrachnos, Euripides; Jimoyiannis, Athanassios
2017-01-01
Developing students' algorithmic and computational thinking is currently a major objective for primary and secondary education in many countries around the globe. The literature suggests that students face various difficulties in programming because of their mental models of basic programming constructs. Arrays constitute the first…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhr, L.
1987-01-01
This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
The scheduling algorithm for mission planning and logistics evaluation (SAMPLE) is presented. Two major subsystems are included: the mission payloads program and the set covering program. Formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.
Algorithms and a short description of the D1_Flow program for numerical modeling of one-dimensional steady-state flow in horizontally heterogeneous aquifers with uneven sloping bases are presented. The algorithms are based on the Dupuit-Forchheimer approximations. The program per...
Validation of the GCOM-W SCA and JAXA soil moisture algorithms
USDA-ARS?s Scientific Manuscript database
Satellite-based remote sensing of soil moisture has matured over the past decade as a result of JAXA's Global Change Observation Mission-Water (GCOM-W) program. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...
Plagiarism Detection Algorithm for Source Code in Computer Science Education
ERIC Educational Resources Information Center
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is increasingly necessary in program design courses in college education. However, plagiarism with slight modifications exists in some students' homework, and it is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…
An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming
NASA Astrophysics Data System (ADS)
Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu
In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general cases, we propose a dynamic programming based algorithm and prove it is optimal in special cases where correlation exists only between p immediately adjacent observations.
A Review of Data Fusion Techniques
2013-01-01
The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion. PMID:24288502
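As a concrete instance of the "state estimation" category, the Python sketch below fuses noisy position readings from two sensors with a scalar Kalman filter; the noise figures and measurements are illustrative placeholders, not drawn from the review.

```python
q, r1, r2 = 0.01, 0.5, 0.8        # process and per-sensor measurement variances
x, p = 0.0, 1.0                   # state estimate and its variance

def kalman_update(x, p, z, r):
    k = p / (p + r)               # Kalman gain
    return x + k * (z - x), (1 - k) * p

measurements = [(0.9, r1), (1.1, r2), (1.0, r1), (1.3, r2)]
for z, r in measurements:
    p += q                        # predict: state assumed constant, variance grows
    x, p = kalman_update(x, p, z, r)
    print(f"fused estimate {x:.3f} (variance {p:.3f})")
```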
NASA Astrophysics Data System (ADS)
Vuong, Q. L.; Rigaut, C.; Gossuin, Y.
2018-07-01
A programming project for undergraduate students in physics is proposed in this work. Its goal is to check the Snell-Descartes law of refraction using the Fermat principle and the ant colony optimization algorithm. The project involves basic mathematics and physics and is adapted to students with basic programming skills. More advanced tools, such as parallelization or object-oriented programming, can be used but are not mandatory, which makes the project also suitable for more experienced students. We propose two tests to validate the program. Our algorithm is able to find solutions which are close to the theoretical predictions. Two quantities are defined to study its convergence and the quality of the solutions. It is also shown that the choice of the values of the simulation parameters is important for efficiently obtaining precise results.
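A minimal version of the project fits in a few lines of Python: ants choose where a ray crosses the interface between two media, pheromone accumulates on fast crossings, and the surviving crossing point should approximately reproduce Snell's law. The geometry, speeds, and ACO parameters below are illustrative choices, not the paper's.

```python
import math, random

A, B = (0.0, 1.0), (1.0, -1.0)          # source above, target below interface y=0
v1, v2 = 1.0, 0.7                        # light speeds in the two media
xs = [i / 100 for i in range(101)]       # candidate crossing points on y=0
tau = [1.0] * len(xs)                    # pheromone per candidate

def travel_time(x):
    return (math.hypot(x - A[0], A[1]) / v1 +
            math.hypot(B[0] - x, B[1]) / v2)

for _ in range(200):                     # colony iterations
    deposits = [0.0] * len(xs)
    for _ in range(30):                  # ants per iteration
        i = random.choices(range(len(xs)), weights=tau)[0]
        deposits[i] += 1.0 / travel_time(xs[i])
    tau = [0.9 * t + d for t, d in zip(tau, deposits)]   # evaporate + deposit

x = xs[max(range(len(xs)), key=lambda i: tau[i])]
sin1 = (x - A[0]) / math.hypot(x - A[0], A[1])
sin2 = (B[0] - x) / math.hypot(B[0] - x, B[1])
print(f"sin(t1)/sin(t2) = {sin1/sin2:.3f}  vs  v1/v2 = {v1/v2:.3f}")
```

At the Fermat minimum the two printed ratios agree, which is exactly the Snell-Descartes law.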
Carrier-to-noise power estimation for the Block 5 Receiver
NASA Technical Reports Server (NTRS)
Monk, A. M.
1991-01-01
Two possible algorithms for carrier-to-noise power (Pc/N0) estimation in the Block V Receiver are analyzed and their performances compared. The expected value and the variance of each estimator algorithm are derived. The two algorithms examined are known as the I-arm estimator, which relies on samples from only the in-phase arm of the digital phase-locked loop, and the IQ-arm estimator, which uses both in-phase and quadrature-phase arm signals. The IQ-arm algorithm is currently implemented in the Advanced Receiver II (ARX II). Both estimators are biased. The performance degradation due to phase jitter in the carrier tracking loop is taken into account. Curves of the expected value and the signal-to-noise ratio of the Pc/N0 estimators vs. actual Pc/N0 are shown. From these, it is clear that the I-arm estimator performs better than the IQ-arm estimator when the data-to-noise power ratio (Pd/N0) is high, i.e., at high Pc/N0 values and a significant modulation index. When Pd/N0 is low, the two estimators have essentially the same performance.
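The flavor of the comparison can be reproduced with a short simulation. The Python below contrasts an "I-arm"-style estimator (noise variance from the in-phase samples only) with an "IQ-arm"-style estimator (noise variance averaged over both arms) on synthetic tracked-carrier samples. The exact Block V estimator definitions differ from these illustrative forms, and both, as the report notes, are biased.

```python
import random, statistics as st

A, sigma, n = 1.0, 0.7, 2000            # carrier amplitude, per-arm noise, samples
I = [A + random.gauss(0, sigma) for _ in range(n)]   # carrier tracked into I
Q = [random.gauss(0, sigma) for _ in range(n)]       # quadrature arm: noise only

snr_true = A**2 / sigma**2
snr_i  = st.fmean(I)**2 / st.variance(I)                           # I-arm style
snr_iq = st.fmean(I)**2 / ((st.variance(I) + st.variance(Q)) / 2)  # IQ-arm style
print(f"true {snr_true:.3f}, I-arm {snr_i:.3f}, IQ-arm {snr_iq:.3f}")
# Converting a per-sample SNR to Pc/N0 in hertz divides by the sample
# interval, under the usual white-noise bandwidth assumptions.
```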
Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun
2015-01-01
This work designs and improves FPGA-based signal processing algorithms for ophthalmic ultrasonography. Three signal processing modules were implemented in the Verilog HDL hardware description language in Quartus II: a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression. Compared with the original system, the hardware cost is reduced, the image is clearer and contains more information about the deep eyeball, and the detection depth increases from 5 cm to 6 cm. The new algorithms meet the design requirements and optimize the system, effectively improving the image quality of existing equipment.
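Two of those modules are easy to prototype offline before committing them to Verilog. The Python sketch below quadrature-demodulates a toy RF A-line and log-compresses the envelope; the sampling rate, carrier frequency, and the moving-average stand-in for the FPGA's distributed FIR filter are all illustrative.

```python
import math

fs, f0, n = 40e6, 5e6, 400                     # sample rate, carrier, samples
rf = [math.exp(-((k - 200) / 60.0) ** 2) *     # toy echo: Gaussian burst
      math.cos(2 * math.pi * f0 * k / fs) for k in range(n)]

# Mix down to baseband with a quadrature local oscillator ...
i_mix = [s * math.cos(2 * math.pi * f0 * k / fs) for k, s in enumerate(rf)]
q_mix = [-s * math.sin(2 * math.pi * f0 * k / fs) for k, s in enumerate(rf)]

# ... low-pass with a simple moving average, then form the envelope
# and log-compress it for display.
w = 16
def lp(x):
    return [sum(x[max(0, k - w):k + 1]) / w for k in range(len(x))]
I, Q = lp(i_mix), lp(q_mix)
env = [math.hypot(a, b) for a, b in zip(I, Q)]
db = [20 * math.log10(max(e, 1e-6)) for e in env]
print(f"peak envelope at sample {max(range(n), key=env.__getitem__)}")
```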
40 CFR 147.50 - State-administered program-Class II wells.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Carry Out Underground Injection Control Program Relating to Class II Wells as Described in Federal Safe... PROGRAMS (CONTINUED) STATE, TRIBAL, AND EPA-ADMINISTERED UNDERGROUND INJECTION CONTROL PROGRAMS Alabama... application: (a) Incorporation by reference. The requirements set forth in the State statutes and regulations...
An effective algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2012-08-01
Numerical values of the Chandrasekhar function are needed with high accuracy in evaluations of theoretical models describing electron transport in condensed matter. An algorithm for such calculations should be as fast as possible and also accurate; e.g., an accuracy of 10 decimal digits is needed for some applications. Two of the integral representations of the Chandrasekhar function are promising for constructing such an algorithm, but suitable transformations are needed to obtain a rapidly converging quadrature. A mixed algorithm is proposed in which the Chandrasekhar function is calculated from two algorithms, depending on the value of one of the arguments.
Catalogue identifier: AEMC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 567
No. of bytes in distributed program, including test data, etc.: 4444
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any computer with a Fortran 90 compiler
Operating system: Linux, Windows 7, Windows XP
RAM: 0.6 MB
Classification: 2.4, 7.2
Nature of problem: Development of a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places, while also being very fast. Both requirements stem from the theory of electron transport in condensed matter.
Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm combines the two, selecting, for each range of the argument ω, the one with the faster performance.
Restrictions: The two input parameters of the Chandrasekhar function, x and ω (notation used in the code), are restricted to the ranges 0 ≤ x ≤ 1 and 0 ≤ ω ≤ 1, which is sufficient in numerous applications.
Unusual features: The program uses the Romberg quadrature for integration. This quadrature is applicable to integrands that satisfy several requirements (the integrand does not vary rapidly and does not change sign in the integration interval, and is finite at the endpoints). Consequently, the analyzed integrands were transformed so that these requirements were satisfied; in effect, the accuracy of integration can be conveniently controlled. Although the desired fractional accuracy was set at 10⁻¹⁰, the obtained accuracy of the Chandrasekhar function was much higher, typically 13 decimal places.
Running time: Between 0.7 and 5 milliseconds for one pair of arguments of the Chandrasekhar function.
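For orientation, a compact (and far less accurate) way to evaluate the isotropic-scattering Chandrasekhar H-function is to solve its standard nonlinear integral equation, 1/H(x) = sqrt(1 - ω) + (ω/2)∫₀¹ H(u)u/(x+u) du, by damped fixed-point iteration on a quadrature grid, as in the NumPy sketch below. This is not the paper's mixed algorithm, which reaches roughly 13 decimal places in milliseconds via two tailored integral representations.

```python
import numpy as np

def chandrasekhar_H(x, w, nodes=64, iters=400):
    t, gw = np.polynomial.legendre.leggauss(nodes)
    u, gw = (t + 1) / 2, gw / 2                  # map quadrature to [0, 1]
    s = np.sqrt(1.0 - w)
    H = np.ones_like(u)
    for _ in range(iters):
        Hn = 1.0 / (s + (w / 2) * ((gw * u * H) / (u[:, None] + u)).sum(axis=1))
        H = 0.5 * (H + Hn)                       # damped update for stability
    return 1.0 / (s + (w / 2) * np.sum(gw * u * H / (x + u)))

print(chandrasekhar_H(1.0, 1.0))   # ~2.908 for conservative scattering
```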
Reliability analysis of laminated CMC components through shell subelement techniques
NASA Technical Reports Server (NTRS)
Starlinger, A.; Duffy, S. F.; Gyekenyesi, J. P.
1992-01-01
An updated version of the integrated design program C/CARES (composite ceramic analysis and reliability evaluation of structures) was developed for the reliability evaluation of CMC laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The new interface program for the finite-element code MARC also includes the option of using hybrid laminates and allows for variations in temperature fields throughout the component.
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100, and why these adaptations yielded an efficient STAR program, is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
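Independent of the vectorization story, the underlying numerical task, least-squares solution of an over-determined linear system via QR factorization, can be stated in a few lines; the NumPy sketch below uses a hypothetical 4x2 system in place of the report's test problem.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

Q, R = np.linalg.qr(A)              # A = QR with orthonormal Q
x = np.linalg.solve(R, Q.T @ b)     # back-substitute R x = Q^T b
print(x)                            # least-squares line-fit coefficients
```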
Zhang, Huaguang; Song, Ruizhuo; Wei, Qinglai; Zhang, Tieyan
2011-12-01
In this paper, a novel heuristic dynamic programming (HDP) iteration algorithm is proposed to solve the optimal tracking control problem for a class of nonlinear discrete-time systems with time delays. The algorithm comprises state updating, control policy iteration, and performance index iteration. To obtain the optimal states, the states are updated using a "backward iteration." Two neural networks are used to approximate the performance index function and compute the optimal control policy, facilitating the implementation of the HDP iteration algorithm. Finally, two examples demonstrate the effectiveness of the proposed algorithm.
SAO Participation in the GOME and SCIAMACHY Satellite Instrument Programs
NASA Technical Reports Server (NTRS)
Hilsenrath, Ernest (Technical Monitor); Chance, Kelly; Kurosu, Thomas
2004-01-01
This report summarizes the progress on our three-year program of research to refine the measurement capability of satellite-based instruments that monitor ozone and other trace species in the Earth's stratosphere and troposphere, to retrieve global distributions of these and other constituents from the GOME and SCIAMACHY satellite instruments, and to conduct scientific studies for the ILAS instruments. This continues our involvement as a U.S. participant in GOME and SCIAMACHY since their inception, and as a member of the ILAS-II Science Team. These programs have led to the launch of the first satellite instrument specifically designed to measure height-resolved ozone, including the tropospheric component (GOME), and the development of the first satellite instrument that will measure tropospheric ozone simultaneously with NO2, CO, HCHO, N2O, H2O, and CH4 (SCIAMACHY). The GOME program now includes the GOME-2 instruments, to be launched on the Eumetsat Metop satellites, providing long-term continuity in European measurements of global ozone that complement the measurements of the TOMS, SBUV, OMI, and OMPS instruments. The research primarily focuses on two areas: data analysis, including algorithm development and validation studies that will improve the quality of retrieved data products, in support of future field campaigns (to complement in situ and airborne campaigns with satellite measurements); and scientific analyses to be interfaced with atmospheric modeling studies.
SAO Participation in the GOME and SCIAMACHY Satellite Instrument Programs
NASA Technical Reports Server (NTRS)
Chance, Kelly; Kurosu, Thomas
2003-01-01
This report summarizes the progress on our three-year program of research to refine the measurement capability of satellite-based instruments that monitor ozone and other trace species in the Earth's stratosphere and troposphere, to retrieve global distributions of these and other constituents from the GOME and SCIAMACHY satellite instruments, and to conduct scientific studies for the ILAS instruments. This continues our involvement as a U.S. participant in GOME and SCIAMACHY since their inception, and as a member of the ILAS-II Science Team. These programs have led to the launch of the first satellite instrument specifically designed to measure height-resolved ozone, including the tropospheric component (GOME), and the development of the first satellite instrument that will measure tropospheric ozone simultaneously with NO2, CO, HCHO, N2O, H2O, and CH4 (SCIAMACHY). The GOME program now includes the GOME-2 instruments, to be launched on the Eumetsat Metop satellites, providing long-term continuity in European measurements of global ozone that complement the measurements of the TOMS, SBUV, OMI, and OMPS instruments. The research primarily focuses on two areas: data analysis, including algorithm development and validation studies that will improve the quality of retrieved data products, in support of future field campaigns (to complement in situ and airborne campaigns with satellite measurements); and scientific analyses to be interfaced with atmospheric modeling studies.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method can take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the design tool uses a composite objective function, in conjunction with weight-ordered design objectives, to take into account conflicting and multiple design criteria. Multiple design criteria are of interest, including structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time being capable of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using (i) the linear extended interior penalty function method and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as they apply to this design problem.
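The sequential unconstrained minimization idea is easy to demonstrate: replace the constrained design problem with a sequence of unconstrained problems whose penalty weight grows, and solve each with a derivative-free method (Powell's method, as in the report). The SciPy sketch below uses a toy two-variable weight-versus-capacity problem with an exterior quadratic penalty; the report itself uses a linear extended interior penalty on a pressure-vessel model, so everything here is illustrative.

```python
from scipy.optimize import minimize

def weight(x):            # objective: minimize structural "weight"
    return x[0] + 2 * x[1]

def g(x):                 # constraint g(x) >= 0: enough load capacity
    return x[0] * x[1] - 4.0

x0, results = [3.0, 3.0], []
for r in [1.0, 10.0, 100.0, 1000.0]:
    def phi(x, r=r):      # exterior quadratic penalty for violation
        return weight(x) + r * min(0.0, g(x)) ** 2
    x0 = minimize(phi, x0, method="Powell").x
    results.append((r, list(x0)))
print(results[-1])        # tends toward the constrained optimum with x0*x1 = 4
```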
Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO
Zhang, Chaozhu; Han, Jinan; Li, Ke
2014-01-01
The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation for use in an NCO. By estimating the direction of part of the phase rotations, the algorithm eliminates some rotation stages and add-subtract units, thereby decreasing delay. Furthermore, the NCO is simulated and implemented with the Quartus II and ModelSim tools. Simulation results indicate that improvements over the traditional CORDIC algorithm are achieved in ease of computation, resource utilization, and computing speed/delay while maintaining precision. The design is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
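The conventional CORDIC that the hybrid starts from is shown below in Python: the accumulated phase is reached by a fixed sequence of arctan(2^-i) micro-rotations implemented with shifts and adds. The hybrid's phase-rotation estimation, which avoids explicitly deciding the later rotation directions, is not reproduced in this sketch.

```python
import math

N = 16
ATAN = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative gain correction

def cordic_sin_cos(theta):
    """theta in [-pi/2, pi/2]; returns (sin, cos) via shift-add rotations."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0               # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN[i]
    return y, x

s, c = cordic_sin_cos(0.6)
print(f"{s:.6f} {c:.6f}  vs  {math.sin(0.6):.6f} {math.cos(0.6):.6f}")
```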
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abulencia, A.; /Illinois U., Urbana; Adelman, J.
2007-01-01
The authors report on measurements of the inclusive jet production cross section as a function of the jet transverse momentum in pp̄ collisions at √s = 1.96 TeV, using the k_T algorithm and a data sample corresponding to 1.0 fb⁻¹ collected with the Collider Detector at Fermilab in Run II. The measurements are carried out in five different jet rapidity regions with |y_jet| < 2.1 and transverse momentum in the range 54 < p_T^jet < 700 GeV/c. Next-to-leading-order perturbative QCD predictions are in good agreement with the measured cross sections.
Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam
2013-10-01
Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs whose size equals the microlens pitch causes an EI mismatch at the EI plane. In the proposed method, EIs are generated computationally by a newly introduced directional elemental image generation and resizing algorithm, which accounts for the directional projection geometry of each pixel and resizes the EIs to prevent the mismatch. The generated EIs are projected as a collimated beam with a predefined directional angle, either horizontal or vertical. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.
ERIC Educational Resources Information Center
National Field Research Center Inc., Iowa City, IA.
This report, together with volume I (single degree programs), details 105 post-secondary wastewater treatment programs from 33 states. These programs represent only a sample of the various programs available nationwide. Enrollment and graduate statistics are presented. The total number of faculty involved in all the programs surveyed was 1,106;…
Salem, Rany M; Wessel, Jennifer; Schork, Nicholas J
2005-03-01
Interest in the assignment and frequency analysis of haplotypes in samples of unrelated individuals has increased immeasurably as a result of the emphasis placed on haplotype analyses by, for example, the International HapMap Project and related initiatives. Although there are many available computer programs for haplotype analysis applicable to samples of unrelated individuals, many of these programs have limitations and/or very specific uses. In this paper, the key features of available haplotype analysis software for use with unrelated individuals, as well as with pooled DNA samples from unrelated individuals, are summarised. Programs for haplotype analysis were identified through keyword searches on PUBMED and various internet search engines, a review of citations from retrieved papers, and personal communications, up to June 2004. Priority was given to functioning computer programs rather than theoretical models and methods. The available software was considered in light of a number of factors: the algorithm(s) used, algorithm accuracy, assumptions, the accommodation of genotyping error, implementation of hypothesis testing, handling of missing data, software characteristics, and web-based implementations. Review papers comparing specific methods and programs are also summarised. Forty-six haplotyping programs were identified and reviewed, divided into two groups: those designed for individual genotype data (43 programs) and those designed for use with pooled DNA samples (three programs). The accuracy of the programs is assessed using various criteria, and the programs are categorised and discussed in light of algorithm and method, accuracy, assumptions, genotyping error, hypothesis testing, missing data, software characteristics, and web implementation. Many available programs have limitations (e.g., some cannot accommodate missing data) and/or are designed with specific tasks in mind (e.g., estimating haplotype frequencies rather than assigning most likely haplotypes to individuals). It is concluded that the selection of an appropriate haplotyping program for analysis purposes should be guided by what is known about the accuracy of estimation, as well as by the limitations and assumptions built into a program.
A Comparative Study of Optimization Algorithms for Engineering Synthesis.
1983-03-01
The ADS program demonstrates the flexibility a design engineer would have in selecting an optimization algorithm best suited to solve a particular problem. The ADS library of design optimization algorithms was developed by Vanderplaats in response to the first…
Observations on Student Misconceptions--A Case Study of the Build-Heap Algorithm
ERIC Educational Resources Information Center
Seppala, Otto; Malmi, Lauri; Korhonen, Ari
2006-01-01
Data structures and algorithms are core issues in computer programming. However, learning them is challenging for most students, and many students hold various types of misconceptions about how algorithms work. In this study, we discuss the problem of identifying misconceptions about the principles of how algorithms work. Our context is algorithm…
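Since the case study concerns the build-heap algorithm, a compact reference implementation may help frame the misconceptions discussed; the Python below builds a max-heap by sifting down each internal node from the last parent up to the root, the step students most often misapply.

```python
def sift_down(a, i, n):
    while 2 * i + 1 < n:                  # while node i has a child
        c = 2 * i + 1                     # left child
        if c + 1 < n and a[c + 1] > a[c]: # pick the larger child
            c += 1
        if a[i] >= a[c]:
            break
        a[i], a[c] = a[c], a[i]           # swap down and continue
        i = c

def build_heap(a):
    for i in range(len(a) // 2 - 1, -1, -1):  # last internal node -> root
        sift_down(a, i, len(a))

data = [3, 9, 2, 1, 4, 5]
build_heap(data)
print(data)    # a valid max-heap: [9, 4, 5, 1, 3, 2]
```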
In-Trail Procedure (ITP) Algorithm Design
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Siminiceanu, Radu I.
2007-01-01
The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high-level description of the ITP algorithm and a prototype implementation of this algorithm in the programming language C.
CMIF ECLS system test findings
NASA Technical Reports Server (NTRS)
Schunk, Richard G.; Carrasquillo, Robyn L.; Ogle, Kathyrn Y.; Wieland, Paul O.; Bagdigian, Robert M.
1989-01-01
During 1987, three Space Station integrated Environmental Control and Life Support System (ECLSS) tests were conducted at the Marshall Space Flight Center (MSFC) Core Module Integration Facility (CMIF) as part of the MSFC ECLSS Phase II test program. The three tests ranged in duration from 50 to 150 hours and were conducted inside the CMIF module simulator. The Phase II partial integrated system test configuration consisted of four regenerative air revitalization subsystems and one regenerative water reclamation subsystem. This paper discusses results and lessons learned from the Phase II test program. The design of the Phase II test configuration and improvements made throughout the program are detailed. Future plans for the MSFC CMIF test program are provided, including an overview of planned improvements for the Phase III program.
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well known that dynamic programming algorithms can utilize tree decompositions to solve some NP-hard problems on graphs with complexity polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, relatively little computational work has been done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
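To convey the dynamic-programming flavor without the machinery of tree decompositions, the Python sketch below solves maximum weighted independent set exactly on a tree with the classic two-state recursion; INDDGO generalizes this pattern, indexing its tables by subsets of each bag of the decomposition. The example tree and weights are invented.

```python
def mwis_tree(children, weight, root=0):
    # incl[v] = best weight in v's subtree if v is in the set,
    # excl[v] = best weight in v's subtree if v is left out.
    incl, excl = {}, {}
    def solve(v):
        incl[v], excl[v] = weight[v], 0
        for c in children.get(v, []):
            solve(c)
            incl[v] += excl[c]                   # children of a chosen node are out
            excl[v] += max(incl[c], excl[c])     # otherwise children choose freely
    solve(root)
    return max(incl[root], excl[root])

children = {0: [1, 2], 1: [3, 4], 2: [5]}
weight = {0: 5, 1: 9, 2: 2, 3: 4, 4: 1, 5: 8}
print(mwis_tree(children, weight))   # 18, e.g. the set {0, 3, 4, 5}
```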
Deeper Insights into the Circumgalactic Medium using Multivariate Analysis Methods
NASA Astrophysics Data System (ADS)
Lewis, James; Churchill, Christopher W.; Nielsen, Nikole M.; Kacprzak, Glenn
2017-01-01
Drawing from a database of galaxies whose surrounding gas shows MgII absorption, the MgII-Absorbing Galaxy Catalog (MAGIICAT, Nielsen et al. 2013), we studied the circumgalactic medium (CGM) for a sample of 47 galaxies. Using multivariate analysis, in particular the k-means clustering algorithm, we determined that simultaneously examining column density (N), rest-frame B-K color, virial mass, and azimuthal angle (the projected angle between the galaxy major axis and the quasar line of sight) yields two distinct populations: (1) bluer, lower-mass galaxies with higher column density along the minor axis, and (2) redder, higher-mass galaxies with lower column density along the major axis. We support this grouping by running (i) two-sample, two-dimensional Kolmogorov-Smirnov (KS) tests on each of the six bivariate planes and (ii) two-sample KS tests on each of the four variables, showing that the galaxies cluster significantly into two independent populations. To account for the fact that 16 of our 47 galaxies have upper limits on N, we performed Monte Carlo tests whereby we replaced upper limits with random deviates drawn from a Schechter distribution fit, f(N). These tests strengthen the results of the KS tests. We examined the behavior of the MgII λ2796 absorption line equivalent width and velocity width for each galaxy population and find that they do not show similar characteristic distinctions between the two populations. We discuss the k-means clustering algorithm for optimizing the analysis of populations within datasets, as opposed to using arbitrary bivariate subsample cuts, and its power in extracting deeper physical insight into the CGM in relationship to host galaxies.
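The clustering step itself is standard and is sketched below: k-means with k = 2 on the four standardized observables, via scikit-learn. The data rows are invented placeholders, and the standardization choice is an assumption, not necessarily the authors' preprocessing.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# columns: log N(MgII), B-K color, log virial mass, azimuthal angle (deg)
X = np.array([[13.8, 1.2, 11.2, 80.0],
              [13.6, 1.0, 11.0, 75.0],
              [12.9, 2.8, 12.1, 15.0],
              [12.7, 3.1, 12.3, 10.0],
              [13.9, 1.1, 11.1, 85.0],
              [12.8, 3.0, 12.2, 20.0]])

Xs = StandardScaler().fit_transform(X)        # put the four axes on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)
print(labels)   # bluer/low-mass/minor-axis vs redder/high-mass/major-axis
```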
Delorey, Mark J.; Replogle, Adam; Sexton, Christopher; Schriefer, Martin E.
2017-01-01
The recommended laboratory diagnostic approach for Lyme disease is a standard two-tiered testing (STTT) algorithm where the first tier is typically an enzyme immunoassay (EIA) that if positive or equivocal is reflexed to Western immunoblotting as the second tier. bioMérieux manufactures one of the most commonly used first-tier EIAs in the United States, the combined IgM/IgG Vidas test (LYT). Recently, bioMérieux launched its dissociated first-tier tests, the Vidas Lyme IgM II (LYM) and IgG II (LYG) EIAs, which use purified recombinant test antigens and a different algorithm than STTT. The dissociated LYM/LYG EIAs were evaluated against the combined LYT EIA using samples from 471 well-characterized Lyme patients and controls. Statistical analyses were conducted to assess the performance of these EIAs as first-tier tests and when used in two-tiered algorithms, including a modified two-tiered testing (MTTT) approach where the second-tier test was a C6 EIA. Similar sensitivities and specificities were obtained for the two testing strategies (LYT versus LYM/LYG) when used as first-tier tests (sensitivity, 83 to 85%; specificity, 85 to 88%) with an observed agreement of 80%. Sensitivities of 68 to 69% and 76 to 77% and specificities of 97% and 98 to 99% resulted when the two EIA strategies were followed by Western immunoblotting and when used in an MTTT, respectively. The MTTT approach resulted in significantly higher sensitivities than did STTT. Overall, the LYM/LYG EIAs performed equivalently to the LYT EIA in test-to-test comparisons or as first-tier assays in STTT or MTTT with few exceptions. PMID:28330884
PARIS II: DESIGNING GREENER SOLVENTS
PARIS II (the program for assisting the replacement of industrial solvents, version II), developed at the USEPA, is a unique software tool that can be used for customizing the design of replacement solvents and for the formulation of new solvents. This program helps users avoid ...
Correlation signatures of wet soils and snows. [algorithm development and computer programming
NASA Technical Reports Server (NTRS)
Phillips, M. R.
1972-01-01
Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications, such as software interfacing capabilities, probability distributions, grey-level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration, and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.